Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods

In this paper, we introduce a new benchmark for coreference resolution focused on gender bias, WinoBias. Our corpus contains Winograd-schema style sentences with entities corresponding to people referred to by their occupation (e.g. the nurse, the doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a neural coreference system all link gendered pronouns to pro-stereotypical entities with higher accuracy than to anti-stereotypical entities, by an average difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation approach that, in combination with existing word-embedding debiasing techniques, removes the bias demonstrated by these systems in WinoBias without significantly affecting their performance on existing datasets.

Introduction

Coreference resolution is the task of identifying phrases (mentions) that refer to the same entity. Various approaches, including rule-based (Raghunathan et al., 2010), feature-based (Durrett and Klein, 2013; Peng et al., 2015a), and neural-network based (Clark and Manning, 2016; Lee et al., 2017), have been proposed. While significant advances have been made, systems carry the risk of relying on societal stereotypes present in training data that could significantly impact their performance for some demographic groups.

In this work, we test the hypothesis that coreference systems exhibit gender bias by creating a new challenge corpus, WinoBias. This dataset follows the Winograd format (Hirst, 1981; Rahman and Ng, 2012; Peng et al., 2015b) and contains references to people using a vocabulary of 40 occupations. It contains two types of challenge sentences that require linking gendered pronouns to either male or female stereotypical occupations (see the illustrative examples in Figure 1).

[Figure 1: paired WinoBias examples such as "The physician called the secretary and told her/him to cancel the appointment." and "The physician hired the secretary because she/he was overwhelmed with clients."; male and female entities are marked, and pro-stereotypical and anti-stereotypical links must be resolved equally well.]

None of the examples can be disambiguated by the gender of the pronoun, but this cue can potentially distract the model. We consider a system to be gender biased if it links pronouns to occupations dominated by the gender of the pronoun (pro-stereotyped condition) more accurately than to occupations not dominated by the gender of the pronoun (anti-stereotyped condition). The corpus can be used to certify that a system has gender bias.
We use three different systems as prototypical examples: the Stanford Deterministic Coreference System (Raghunathan et al., 2010), the Berkeley Coreference Resolution System (Durrett and Klein, 2013), and the current best published system, the UW End-to-end Neural Coreference Resolution System (Lee et al., 2017). Despite qualitatively different approaches, all systems exhibit gender bias, showing an average difference in performance between pro-stereotypical and anti-stereotyped conditions of 21.1 in F1 score. Finally, we show that given sufficiently strong alternative cues, systems can ignore their bias.

In order to study the source of this bias, we analyze the training corpus used by these systems, OntoNotes 5.0 (Weischedel et al., 2012). Our analysis shows that female entities are significantly underrepresented in this corpus. To reduce the impact of such dataset bias, we propose to generate an auxiliary dataset where all male entities are replaced by female entities, and vice versa, using a rule-based approach. Methods can then be trained on the union of the original and auxiliary datasets. In combination with methods that remove bias from fixed resources such as word embeddings (Bolukbasi et al., 2016), our data augmentation approach completely eliminates bias when evaluating on WinoBias, without significantly affecting overall coreference accuracy.

WinoBias

To better identify gender bias in coreference resolution systems, we build a new dataset centered on people entities referred to by their occupations, using a vocabulary of 40 occupations gathered from the US Department of Labor, shown in Table 1. We use the associated occupation statistics to determine what constitutes gender-stereotypical roles (e.g. 90% of nurses are women in this survey). Entities referred to by different occupations are paired and used to construct test-case scenarios. Sentences are duplicated using male and female pronouns, and contain equal numbers of correct coreference decisions for all occupations. In total, the dataset contains 3,160 sentences, split equally for development and test, created by researchers familiar with the project. Sentences were created to follow two prototypical templates, but annotators were encouraged to come up with scenarios where entities could be interacting in plausible ways. Templates were selected to be challenging and designed to cover cases requiring semantics and syntax separately.

Evaluation

To evaluate models, we split the data into two sections: one where correct coreference decisions require linking a gendered pronoun to an occupation stereotypically associated with the gender of the pronoun, and one that requires linking to the anti-stereotypical occupation. We say that a model passes the WinoBias test if, for both Type 1 and Type 2 examples, pro-stereotyped and anti-stereotyped coreference decisions are made with the same accuracy.

Gender Bias in Coreference

In this section, we highlight two sources of gender bias in coreference systems that can cause them to fail WinoBias, training data and auxiliary resources, and propose strategies to mitigate them.
Training Data Bias

Bias in OntoNotes 5.0. Resources supporting the training of coreference systems have severe gender imbalance. In general, entities that have a mention headed by a gendered pronoun (e.g. "he", "she") are over 80% male. Furthermore, the way in which such entities are referred to varies significantly: male gendered mentions are more than twice as likely to contain a job title as female mentions. Moreover, these trends hold across genres.

Gender Swapping. To remove such bias, we construct an additional training corpus where all male entities are swapped for female entities and vice versa. Methods can then be trained on both the original and swapped corpora. This approach maintains non-gender-revealing correlations while eliminating correlations between gender and coreference cues.

We adopt a simple rule-based approach for gender swapping. First, we anonymize named entities using an automatic named entity finder (Lample et al., 2016). Named entities are replaced consistently within a document (i.e. "Barack Obama ... Obama was re-elected." would be anonymized to "E1 E2 ... E2 was re-elected."). Then we build a dictionary of gendered terms and their realization as the opposite gender by asking workers on Amazon Mechanical Turk to annotate all unique spans in the OntoNotes development set. Rules were then mined by computing the word difference between initial and edited spans. Common rules included "she" → "he", "Mr." → "Mrs.", and "mother" → "father". Sometimes the same initial word was edited to multiple different phrases; these were resolved by taking the most frequent phrase, with the exception of "her" → "him" and "her" → "his", which were resolved using part of speech. Rules were applied to all matching tokens in OntoNotes. We maintain anonymization so that cases like "John went to his house" can be accurately swapped to "E1 went to her house."

Resource Bias

Word Embeddings. Word embeddings are widely used in NLP applications; however, recent work has shown that they are severely biased: "man" tends to be closer to "programmer" than "woman" (Bolukbasi et al., 2016; Caliskan et al., 2017). Current state-of-the-art coreference systems build on word embeddings and risk inheriting their bias. To reduce bias from this resource, we replace GloVe embeddings with debiased vectors (Bolukbasi et al., 2016).

Gender Lists. While current neural approaches rely heavily on pre-trained word embeddings, previous feature-rich and rule-based approaches rely on corpus-based gender statistics mined from external resources (Bergsma and Lin, 2006). Such lists were generated from large unlabeled corpora using heuristic data-mining methods. These resources provide counts for how often a noun phrase is observed in a male, female, neutral, and plural context. To reduce this bias, we balance male and female counts for all noun phrases.

Results

In this section we evaluate three representative systems: rule-based, Rule (Raghunathan et al., 2010), feature-rich, Feature (Durrett and Klein, 2013), and end-to-end neural (the current state-of-the-art), E2E (Lee et al., 2017). The following sections show that performance on WinoBias reveals gender bias in all systems, that our methods remove such bias, and that systems are less biased on OntoNotes data.

WinoBias Reveals Gender Bias

Table 2 summarizes development set evaluations using all three systems. Systems were evaluated on both types of sentences in WinoBias (T1 and T2), separately in pro-stereotyped and anti-stereotyped conditions (T1-p vs.
T1-a, T2-p vs. T2-a). We evaluate the effect of named-entity anonymization (Anon.), debiasing supporting resources (Resour.), and using data augmentation through gender swapping (Aug.). E2E and Feature were retrained in each condition using default hyperparameters, while Rule was not debiased because it is untrainable. We evaluate using the coreference scorer v8.01 (Pradhan et al., 2014) and compute the average (Avg) and absolute difference (Diff) between pro-stereotyped and anti-stereotyped conditions in WinoBias.

All initial systems demonstrate severe disparity between pro-stereotyped and anti-stereotyped conditions. Overall, the rule-based system is most biased, followed by the neural approach and the feature-rich approach. Across all conditions, anonymization impacts E2E the most, while all other debiasing methods result in insignificant loss in performance on the OntoNotes dataset. Removing biased resources and data augmentation reduce bias independently, and more so in combination, allowing both E2E and Feature to pass WinoBias without significantly impacting performance on either OntoNotes or WinoBias. Qualitatively, the neural system is easiest to debias, and our approaches could be applied to future end-to-end systems. Systems were evaluated once on the test sets (Table 3), supporting our conclusions.

Systems Demonstrate Less Bias on OntoNotes

While we have demonstrated that coreference systems have severe bias as measured in WinoBias, this is an out-of-domain test for systems trained on OntoNotes. Evaluating directly within OntoNotes is challenging because sub-sampling documents with more female entities would leave very few evaluation data points. Instead, we apply our gender swapping system (Section 3) to the OntoNotes development set and compare system performance between swapped and unswapped data. If a system shows a significant difference between original and gender-reversed conditions, then we would consider it gender biased on OntoNotes data.

Table 4 summarizes our results. The E2E system does not demonstrate significant degradation in performance, while Feature loses roughly 1.0 F1. This demonstrates that, given sufficient alternative signal, systems often do ignore gender-biased cues. On the other hand, WinoBias provides an analysis of system bias in an adversarial setup, showing that when examples are challenging, systems are likely to make gender-biased predictions.

Machine learning methods are designed to generalize from observation, but if algorithms inadvertently learn to make predictions based on stereotyped associations, they risk amplifying existing social problems. Several problematic instances have been demonstrated; for example, word embeddings can encode sexist stereotypes (Bolukbasi et al., 2016; Caliskan et al., 2017). Similar observations have been made in vision and language models (Zhao et al., 2017), online news (Ross and Carter, 2011), web search (Kay et al., 2015) and advertisements (Sweeney, 2013). In our work, we add a unique focus on coreference, and propose simple general-purpose methods for reducing bias.

Implicit human bias can come from imbalanced datasets. When making decisions on such datasets, it is usual that under-represented samples in the data are neglected since they do not influence the overall accuracy as much. For binary classification, Kamishima et al.
(2011, 2012) add a regularization term to their objective that penalizes biased predictions. Various other approaches have been proposed to produce "fair" classifiers (Calders et al., 2009; Feldman et al., 2015; Misra et al., 2016). For structured prediction, the work of Zhao et al. (2017) reduces bias by using corpus-level constraints, but is only practical for models with specialized structure. Kusner et al. (2017) propose a method based on causal inference to achieve model fairness, in which they perform data augmentation in specific cases; however, to the best of our knowledge, we are the first to propose data augmentation based on gender swapping in order to reduce gender bias.

Concurrent work (Rudinger et al., 2018) also studied gender bias in coreference resolution systems, and created a similar job-title-based, Winograd-style coreference dataset to demonstrate bias (their dataset also includes gender-neutral pronouns and examples containing one job title instead of two). Their work corroborates our findings of bias and expands the set of systems shown to be biased, while we add a focus on debiasing methods. Future work can evaluate on both datasets.

Conclusion

Bias in NLP systems has the potential to not only mimic but also amplify stereotypes in society. For a prototypical problem, coreference, we provide a method for detecting such bias and show that three systems are significantly gender biased. We also provide evidence that systems, given sufficient cues, can ignore their bias. Finally, we present general-purpose methods for making coreference models more robust to spurious, gender-biased cues while not incurring significant penalties on their performance on benchmark datasets.

Figure 1: Pairs of gender-balanced coreference tests in the WinoBias dataset. Male and female entities are marked in solid blue and dashed orange, respectively. For each example, the gender of the pronominal reference is irrelevant for the coreference decision. Systems must be able to make correct linking predictions in pro-stereotypical scenarios (solid purple lines) and anti-stereotypical scenarios (dashed purple lines) equally well to pass the test. Importantly, stereotypical occupations are determined based on US Department of Labor statistics.

Type 1: [entity1] [interacts with] [entity2] [conjunction] [pronoun] [circumstances]. Prototypical WinoCoRef-style sentences, where coreference decisions must be made using world knowledge about given circumstances (Figure 1; Type 1). Such examples are challenging because they contain no syntactic cues.

Type 2: [entity1] [interacts with] [entity2] and then [interacts with] [pronoun] for [circumstances]. These tests can be resolved using syntactic information and understanding of the pronoun (Figure 1; Type 2). We expect systems to do well on such cases because both semantic and syntactic cues help disambiguation.

Table 1: Occupation statistics used in the WinoBias dataset, organized by the percent of people in the occupation who are reported as female. When women dominate a profession, we call linking the noun phrase referring to the job with a female or male pronoun 'pro-stereotypical' and 'anti-stereotypical', respectively. Similarly, if the occupation is male-dominated, linking the noun phrase with the male or female pronoun is called 'pro-stereotypical' and 'anti-stereotypical', respectively.

Table 2: Results on the WinoBias development set. WinoBias results are split between Type-1 and Type-2 and into pro/anti-stereotypical conditions. * indicates the difference between pro/anti-stereotypical conditions is significant (p < .05) under an approximate randomized test (Graham et al., 2014). Our methods eliminate the difference between pro-stereotypical and anti-stereotypical conditions (Diff), with little loss in performance (OntoNotes and Avg).

Table 3: F1 on the OntoNotes and WinoBias test sets. Methods were run once, supporting development set conclusions.
Table 4: Performance on the original and the gender-reversed development datasets (anonymized).
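As an illustration of how the Type 1 and Type 2 templates described above can be instantiated, the short Python sketch below fills them for one occupation pair and flips only the pronoun to obtain pro- and anti-stereotypical twins. This is not the authors' data-creation code; the interaction verbs and circumstances are invented for illustration, and occupation gender statistics would come from the US Department of Labor data in Table 1.

```python
# Illustrative sketch (not the authors' generation code): filling the two
# WinoBias-style templates for a hypothetical occupation pair. The verbs and
# circumstances below are placeholders, not items from the released corpus.

OCCUPATION_PAIR = ("physician", "secretary")  # assumed (male-dominated, female-dominated)

def type1(entity1, entity2, pronoun, circumstances, conjunction="because"):
    # Type 1: [entity1] [interacts with] [entity2] [conjunction] [pronoun] [circumstances].
    return f"The {entity1} hired the {entity2} {conjunction} {pronoun} {circumstances}."

def type2(entity1, entity2, pronoun, circumstances):
    # Type 2: [entity1] [interacts with] [entity2] and then [interacts with] [pronoun] for [circumstances].
    return f"The {entity1} called the {entity2} and then asked {pronoun} for {circumstances}."

if __name__ == "__main__":
    e1, e2 = OCCUPATION_PAIR
    # Pro-stereotypical: the pronoun's gender matches the dominant gender of the
    # occupation it should link to; the anti-stereotypical twin flips only the pronoun.
    print(type1(e1, e2, "he", "was overwhelmed with clients"))
    print(type1(e1, e2, "she", "was overwhelmed with clients"))
    print(type2(e1, e2, "her", "the latest patient file"))
    print(type2(e1, e2, "him", "the latest patient file"))
```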
Remotely sensed reservoir water storage dynamics (1984-2015) and the influence of climate variability and management at global scale

Many thousands of large dam reservoirs have been constructed worldwide during the last seventy years to increase reliable water supplies and support economic growth. Because reservoir storage measurements are generally not publicly available, so far there has been no global assessment of long-term dynamic changes in reservoir water volume. We overcame this by using optical (Landsat) and altimetry remote sensing to reconstruct monthly water storage for 6,743 reservoirs worldwide between 1984 and 2015. We relate reservoir storage to resilience and vulnerability and analyse their response to precipitation, streamflow and evaporation. We find reservoir storage has diminished substantially for 23% of reservoirs over the three decades but increased for 21%. The greatest declines were for dry basins in southeastern Australia (-29%), the USA (-10%), and eastern Brazil (-9%). The greatest gains occurred in the Nile Basin (+67%), Mediterranean basins (+31%) and southern Africa (+22%). Many of the observed reservoir changes were explained well by changes in precipitation and river inflows, emphasising the importance of multi-decadal precipitation changes for reservoir water storage, rather than changes in net evaporation or (demand-driven) dam water releases.

[...] produced based on MODIS or Landsat imagery (Khandelwal et al. 2017; Ogilvie et al. 2018; Yao et al. 2019; Zhao and Gao 2018). Reservoir volume dynamics can be estimated at either regional or global scale using existing datasets and approaches to derive both height and extent from remote sensing, but this approach is only suitable for a limited subset of reservoirs worldwide due to the wide spacing of the satellite altimetry tracks (Busker et al. 2019; Crétaux et al. 2011; Duan and Bastiaanssen 2013; Gao et al. 2012; Medina et al. 2010; Tong et al. 2016; Zhang et al. 2014). Messager et al. (2016) estimated the volume of lakes and reservoirs with a surface area greater than 0.10 km2 at global scale using a geo-statistical model based on surrounding topography information. However, these estimates were not dynamic time series, and so do not enhance our understanding of the influence of climate change and human activity on global reservoir storage.

In this study, we reconstructed monthly reservoir storage for 1984-2015 worldwide using satellite observations, and examined long-term trends in global reservoir water storage and changes in reservoir resilience and vulnerability over the past three decades. We investigated interactions between precipitation, streamflow, evaporation, and reservoir water storage based on a comprehensive analysis of streamflow from a multi-model ensemble and as observed at ca. 8,000 gauging stations, precipitation from a combination of station, satellite and forecast data, and open water evaporation estimates. Part of our objective was to determine the extent to which climate variability and human activity each affected global reservoir water volume over the past three decades.

The root-mean-square error (RMSE) of the G-REALM altimetry data is expected to be better than 10 cm for the largest water bodies (e.g., Lake Victoria; 67,166 km2) and better than 20 cm for smaller ones (e.g., Lake Chad; 18,751 km2) (Birkett et al. 2010). The advantage of using a satellite radar altimeter to measure surface water height is that it is not affected by weather, time of day, or vegetation and canopy cover.
The G-REALM data are currently only available for lakes and reservoirs with an extent greater than 100 km2, although observations for water bodies between 50-100 km2 are expected in future (Table S1). In total, we archived 22,710 river gauging records. Global monthly surface runoff estimates for 1984-2014 were derived from the eartH2Observe water resources reanalysis version 2 (Schellekens et al. 2017), calculated as the mean of an ensemble of eight state-of-the-art global models, including HTESSEL, SURFEX-TRIP, ORCHIDEE, WaterGAP3, JULES, W3RA, and LISFLOOD (for model details refer to Schellekens et al. (2017)). Precipitation estimates were derived from a combination of station, satellite, and reanalysis data (MSWEP v1.1) (Beck et al. 2017). The representative maximum storage capacity reported in the GRanD v1.1 database (Lehner et al. 2011) was used as a reference value to calculate absolute storage changes. The HydroBASINS dataset (Lehner and Grill 2013) was used to define basin boundaries.

Global reservoir storage estimation

In total, 132 large reservoirs (Group A; Fig. 1) had records of both surface water extent and height for the overlapping period 1993-2015. We estimated the height and area at capacity as the maximum observed surface water height and extent, respectively, and calculated reservoir storage volume Vo (in GL or 10^6 m3) from these quantities, where Ao (km2) is the satellite-observed water extent, Amax the maximum value of Ao, ho (m) the satellite-observed water height, hmax the maximum value of ho, and Vc (GL) the storage volume at capacity. There were 78 reservoirs with a relationship between Ao and Vo for this overlapping period with a Pearson's R ≥ 0.4 (19% between 0.4-0.6, 32% between 0.6-0.8 and 49% between 0.8-1). For these reservoirs, Vo was estimated going back to 1984 using a cumulative distribution function (CDF) matching method based on Ao.

Figure 1: The total storage capacity in Group A (red) and B (brown) and left unaccounted (blue), and the combined capacity of reservoirs for which the data were suitable (teal) or unsuitable (pink) for long-term analysis.

For 6,611 reservoirs with water extent observations only (Group B; Fig. 1), we used the HydroLAKES method (Messager et al. 2016) to estimate storage. The mean lake or reservoir depth can be estimated using an empirical equation based on water surface area and the average slope within a 100 m buffer around the water body (Messager et al. 2016). Four empirical equations were developed by Messager et al. (2016) for different lake size classes (i.e., 0.1-1, 1-10, 10-100 and 100-500 km2) (Table S2).

Time series of in situ reservoir storage volume measurements are publicly available for a small subset of reservoirs. They can be used to evaluate the uncertainty in the satellite-based storage estimates. Furthermore, data records for some storages can be found in the published literature, derived from grey literature or proprietary data sources. Given that the emphasis in the trend analysis was on relative changes between the pre- and post-2000 periods, the evaluation of satellite-derived reservoir storage focuses on Pearson's correlation (R) values as a measure of correspondence. In this study, we regard R values ranging from 0.4-0.7 as robust, and 0.7-1 as strong.
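The storage equation for the Group A reservoirs did not survive text extraction. A common formulation consistent with the quantities defined above (Ao, Amax, ho, hmax, Vc) is the trapezoidal approximation below; this is offered as an assumption about the likely form, not a quotation of the paper's actual equation:

\[ V_o \;\approx\; V_c \;-\; \frac{(A_{\max} + A_o)\,(h_{\max} - h_o)}{2} \]

with areas in km2 and heights in m, so that the correction term is directly in GL (1 km2 × 1 m = 10^6 m3 = 1 GL).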
Trend analysis and attribution

We were able to estimate monthly storage dynamics for 6,743 out of the 6,862 reservoirs reported in the GRanD database (Lehner et al. 2011), accounting for 89.3% of the total 6,197 km3 reported cumulative capacity (Fig. 1). There were only 132 reservoirs for which both extent and height observations were available (Group A), but this relatively small number already accounted for almost half of global combined capacity (Fig. 1). To analyse long-term changes in reservoir storage between 1984-2015, we removed all reservoirs that were destroyed, modified, planned, replaced, removed, subsumed or constructed after 1984, or for which more than five years of water extent observations needed to be interpolated because of lacking data (Zhao and Gao 2018). This left 4,589 of the initial 6,743 reservoirs available for analysis, i.e., 68% of reservoirs, together accounting for 45.9% of combined global capacity (Fig. 1).

We calculated linear trends between 1984-2015 in annual reservoir storage, observed streamflow, modelled streamflow, and precipitation for each basin (HydroBASINS Level 3). Trend significance was tested using the Mann-Kendall trend test (p<0.05). The linear trends in modelled streamflow were validated against observed data. We also analysed the correlations between precipitation/streamflow and storage in terms of both time series and linear trend. Net evaporation was calculated for each reservoir, where En (mm) is the cumulative monthly net evaporation loss (or gain, if negative) and A is the reservoir surface area (km2) from Zhao and Gao (2018) [...] former could explain the latter. Trends in storage and observed streamflow for individual reservoirs and rivers were also analysed to provide additional information about the spatial distribution of trends. Unlike the analysis at basin scale above, we do not relate the trend of each individual reservoir to a corresponding river gauge. This is because there is typically only a limited number of gauging stations upstream of a reservoir, and as such these river flow gauging data cannot accurately represent overall reservoir inflows.

Changes in reservoir resilience and vulnerability between 1984-1999 and 2000-2015 were analysed at the scale of river basins. The reliability, resilience and vulnerability (RRV) criteria can be used to evaluate the performance of a water supply reservoir system (Hashimoto et al. 1982; Kjeldsen and Rosbjerg 2004). The calculation requires that an unsatisfactory state can be defined in which the reservoir cannot meet all water demands, leading to a failure event. Reliability indicates the probability that the system is in a satisfactory state, where d(j) is the time length of the j-th failure event, T is the total time length, and M is the number of failure events. Unfortunately, a single threshold for failure events is not readily determined: firstly, because we did not have access to water demand and release data for each reservoir, and, secondly, because reservoirs are typically operated in response to more than a single threshold. Instead, we assumed that the reliability of each reservoir is designed to be 90%, leaving it in an unsatisfactory state for the remaining 10% of the time. This assumption made it possible to calculate resilience and vulnerability for each reservoir for the assumed 90% threshold.
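The RRV equations themselves are not shown in this text. The standard formulations of Hashimoto et al. (1982) and Kjeldsen and Rosbjerg (2004), which this and the following paragraph cite and which use the quantities defined here (d(j), v(j), T, M), are reproduced below as an assumption about the exact forms applied:

\[ \text{Reliability} = 1 - \frac{\sum_{j=1}^{M} d(j)}{T}, \qquad \text{Resilience} = \left[\frac{1}{M}\sum_{j=1}^{M} d(j)\right]^{-1}, \qquad \text{Vulnerability} = \frac{1}{M}\sum_{j=1}^{M} v(j) \]

Some variants define vulnerability through the maximum rather than the mean deficit; either way it is a deficit-volume measure, consistent with the text's statement that changes in vulnerability were expressed relative to the maximum deficit volume observed.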
Resilience is a measure of how fast a system can return to a satisfactory state after entering a failure state. Vulnerability describes the likely damage of failure events, where v(j) is the deficit volume of the j-th failure event. The change in vulnerability was expressed relative to the maximum deficit volume observed.

Validation of global reservoir storage estimates

Monthly storage data with at least 20-year time series for 67 reservoirs were collected via the US Army Corps of Engineers and the Australian Bureau of Meteorology. The R between published and estimated volumes was above 0.9 for 67% of the 67 reservoirs (31 reservoirs with capacity between 10-100 MCM, 25 between 100-1,000 MCM, 7 between 1,000-10,000 MCM, and 4 with capacity above 10,000 MCM), and above 0.7 for 90% of them. Some validation examples, including robust, typical, and poor agreement, are shown in Fig. 2. Annual average water levels for Lake Aswan, the largest reservoir in the world, were published as a graph (El Gammal et al. 2010); a comparison shows good agreement between the satellite-derived storage and in situ measurements, with R=0.97 (Fig. S1). Assuming the estimation method for Group A is more accurate than that for Group B, the latter can be evaluated against the former. The results show that 25 of the 39 overlapping estimated reservoirs (3 reservoirs with capacity between 100-1,000 MCM, 27 between 1,000-10,000 MCM and 9 with capacity above 10,000 MCM) show strong agreement (R≥0.9) between the two methods.

Changes in global reservoir storage, resilience and vulnerability

The trends (p<0.05) in water volume dynamics for 4,589 reservoirs and in river discharge time series from around 8,000 gauging stations between 1984 and 2015 are analysed here (Fig. 4). We found no systematic global decline in reservoir water availability. Overall, there was a positive trend in combined global reservoir storage of +3.1 km3 yr-1, but this was almost entirely explained by positive trends for the two largest reservoirs in the world, Lake Kariba (+0.8 km3 yr-1) on the Zambezi River and Lake Aswan (+1.9 km3 yr-1) on the Nile River (Fig. S2). Reservoirs with increasing storage trends are nearly as common as those with declines: 1,034 reservoirs showed decreasing trends, mainly concentrated in southwest America, eastern South America, southeast Australia and parts of Eurasia, while 948 reservoirs showed increasing trends, distributed in northern North America and southern Africa (Fig. 4a). The global pattern of reservoir storage trends is similar to that of river discharge. In particular, a majority of rivers in southwest America, eastern South America, and southeast Australia have reduced river flows (Fig. 4b). There was no apparent relationship between primary reservoir purpose (i.e., irrigation, hydroelectric power generation, domestic water supply) and overall trend, arguably a first tentative indication that climatological influences dominate changes in release management. [...] and the vulnerability of these reservoirs has increased by more than 30% (Fig. 5). In contrast, reservoirs in western Mediterranean basins, the Nile Basin and southern Africa have stronger resilience and less vulnerability than before (Fig. 5).
All these changes are attributed to changes in reservoir storage, as we found a robust positive relationship (R = 0.64) between changes from the pre-2000 to the post-2000 period in storage and resilience, and a strong negative relationship (R = -0.79) between resilience and vulnerability (Fig. 6). This means that if a reservoir has decreasing storage, it risks falling to low capacity more often and enduring larger deficits than before. Increasing storage has the potential to create other issues, such as overtopping, dam collapse, and downstream flooding caused by untimely releases during the wet season.

Influences of precipitation and river flow on global reservoir storage

We summed storage for individual reservoirs to calculate combined storage in 134 river basins worldwide. Basins losing or gaining more than 5% of their combined storage over the three decades could be found on every continent (Fig. 7c). Among these, 26 (19%) showed a significant decreasing and 39 (29%) a significant increasing trend in reservoir storage (Fig. 7c). For the majority of these 65 basins, trends were of the same sign for storage, runoff and precipitation, suggesting that precipitation changes are ultimately the most likely explanation for the observed trends (Fig. 7a and b). Opposite trends in precipitation (or runoff) and storage were found for 12 out of 134 basins, with six decreasing and six increasing storage trends. Most of these could be explained by spatial variation within the respective basins (Fig. S3). The linear changes in modelled streamflow were validated against changes in observed streamflow, and the Pearson's correlation between them is 0.77, which indicates that modelled streamflow can reliably represent trends in river flow globally (Fig. 8b). There is a robust positive relationship (R = 0.77) between linear changes from 1984-2015 in precipitation and streamflow (basin characteristics are assumed largely unchanged in the models) (Fig. 8a). A correlation above 0.6 between them can be found in all of these 134 basins except the Niger Basin in Africa and the Parana Basin in South America (Fig. 9b). Linear changes in reservoir storage also have a meaningfully positive relationship (R = 0.38, p < 0.01, ρ = 0.51) with streamflow (Fig. 8c), given the heterogeneous nature of human activities. This means that a decreasing trend in streamflow (typically due to precipitation changes) generally leads to a decreasing trend in storage, and vice versa, but not necessarily proportionally. Figure 9a also shows that there are 59 basins with a robust relationship between annual storage and inflow, with R ranging from 0.4-0.8. They are mainly located in North America, southern South America, the Mediterranean, southeastern Australia, and parts of Eurasia. These regions coincide with a large number of measured reservoirs (Fig. 4a) and a large total number of Landsat images over the three decades (Pekel et al. 2016; Wulder et al. 2016), and vice versa. The overall relationship between reservoir storage and inflow might therefore be expected to be stronger if more reservoirs were measured and more usable Landsat imagery were available for those basins lacking them in our present analysis. We also found that changes in net evaporation accounted for well below 10% of the overall trends in storage for each of those 65 basins, reflecting that net evaporation rarely explains more than a few per cent of observed storage changes (Fig. 10).
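As a rough illustration of the basin-level trend comparison described above, the sketch below computes a linear trend and a Mann-Kendall-style significance test per variable and basin, then compares trend signs between storage and precipitation. It is not the authors' code: the input table and column names are hypothetical, and Kendall's tau on the annual series is used as a simple stand-in for the Mann-Kendall test (no autocorrelation correction).

```python
# Sketch of basin-level trend attribution: linear slope + monotonic-trend
# significance per variable, then a sign comparison between storage and
# precipitation trends (same sign suggests a climatic driver).
import numpy as np
import pandas as pd
from scipy.stats import kendalltau

annual = pd.read_csv("basin_annual_series.csv")  # columns: basin_id, year, storage_km3, runoff_mm, precip_mm (assumed)

def trend(years: np.ndarray, values: np.ndarray, alpha: float = 0.05):
    slope = np.polyfit(years, values, 1)[0]   # linear trend (units per year)
    tau, p = kendalltau(years, values)        # Mann-Kendall-style significance
    return slope, p < alpha

rows = []
for basin, grp in annual.groupby("basin_id"):
    out = {"basin_id": basin}
    for var in ["storage_km3", "runoff_mm", "precip_mm"]:
        slope, significant = trend(grp["year"].to_numpy(), grp[var].to_numpy())
        out[f"{var}_slope"] = slope
        out[f"{var}_significant"] = significant
    out["same_sign"] = np.sign(out["storage_km3_slope"]) == np.sign(out["precip_mm_slope"])
    rows.append(out)

print(pd.DataFrame(rows).head())
```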
In summary, we did not find evidence for widespread reductions in reservoir water storage due to increased releases. Sharp decreases in river flow after 2011 in eastern Brazil led to the lowest reservoir storage levels, with combined losses of almost 18% in 2015 (Fig. 11c). Reservoirs in these basins with reduced storage also predominantly showed reduced resilience and increased vulnerability (Fig. 5).

Discussion

This study reconstructed monthly reservoir water storage dynamics for 1984-2015 at global scale based on satellite-derived water extent (Zhao and Gao 2018) and altimetry measurements (Birkett et al. 2010). Where no altimetry data were available, geo-statistical models (Messager et al. 2016) were applied to satellite-derived water extent for reservoir water volume estimation. About half (48.2%, including most large reservoirs) of the total reported cumulative reservoir capacity (Lehner et al. 2011) around the world was measured by combining satellite-derived extent and height, while 41.1% was estimated based on geo-statistical models using remotely sensed surface area. There does not appear to be any systematic decline in global reservoir water availability, but we found significantly decreasing trends in reservoir water volumes in southeastern Australia, the southwestern USA and eastern Brazil, creating the risk that storages fall to low capacity more often (i.e., weakened resilience) and endure larger deficits (i.e., higher vulnerability).

Trends in reservoir storage and river flow showed spatial consistency at both individual and basin scales globally. There was reasonably strong temporal correlation between precipitation, streamflow and storage. Changes in net evaporation only accounted for a small fraction of reservoir volume changes. Reservoir storage dynamics (ΔV) are the net result of river inflows (Qin), net evaporation (En) and dam (demand-related) water releases (Qout):

ΔV = Qin − En − Qout (8)

We found that ΔV responds primarily to Qin and that En does not seem to have affected ΔV. This indicates that dam (demand-related) water releases (Qout) are less likely to be the main driver of storage changes (ΔV).

Accurate temporal pattern estimates were the main purpose of this study, because relative water storage and long-term change are the more relevant information for water resources management. Our validation results show that 90% of the reservoirs evaluated show strong correlation (R≥0.7) with water volume measured in situ. In terms of absolute value, water volume estimates were bias-corrected using the representative maximum storage capacity from GRanD (Lehner et al. 2011) by assuming that the maximum observed surface water extent coincides with the area at full capacity. Biases remain in some reservoirs due to uncertainties in this maximum storage capacity. Representative maximum storage capacity values reported in GRanD were collected from different sources in the following order of priority: reported maximum or gross capacity, reported normal capacity, and reported live or minimum capacity. These uncertainties in reported maximum capacity may have influenced our results for individual reservoirs. This could be resolved easily if more accurate reservoir storage or capacity data were available.
The uncertainties and limitations of the reservoir storage estimates mainly stem from errors in the satellite altimetry data, the satellite-derived water extent data, and the method used to estimate bathymetry. The quality and accuracy of the altimetry measurements depend on the size and shape of the water body, surrounding topography, surface waves, major wind events, heavy precipitation, tidal effects, the presence of ice and the position of the altimeter track (Birkett et al. 2010; Busker et al. 2019). The RMSE of water level estimates for a narrow reservoir in steep terrain can be many tens of centimetres (Birkett et al. 2010; Schwatke et al. 2015). DAHITI altimetry data, with RMSE between 4-36 cm for lakes (Schwatke et al. 2015), should have similar accuracy to G-REALM, although its water level observations have so far received less evaluation.

The classifier used to produce the GSWD surface water data performed quite well, with less than 1% commission error and less than 5% omission error (Pekel et al. 2016). However, no-data classifications in the GSWD data caused by cloud, ice, snow, and sensor-related issues could lead to large gaps in the time series and underestimation of actual reservoir extents (Busker et al. 2019). In general, a no-data threshold is applied to the monthly GSWD data to remove imagery with a large percentage of contamination before deriving lake and reservoir water extent. This helps reduce the issue to some extent, but contaminated imagery still remains in the rest of the GSWD data. Zhao and Gao (2018) developed an automatic algorithm to repair contaminated Landsat imagery, from which a continuous reservoir surface area dataset was produced. This increased the number of effective images by 81% on average, and improved the coefficient of determination between satellite-derived extents and observed elevations or volumes from 0.735 to 0.998 for all reservoirs, and from 0.598 to 0.997 for large reservoirs with extent above 10 km2.

There are typically two ways to estimate bathymetry based on a digital elevation model (DEM) for reservoirs that have no satellite altimetry measurements. The first approach is to develop an area-elevation curve based on a DEM (Avisse et al. 2017; Bonnema and Hossain 2017). The second method is to extrapolate the surrounding topography from the DEM into the reservoir to estimate bathymetry (Messager et al. 2016). Although the accuracy of these methods depends on errors inherent in the DEM data, the latter has been proven to be a reliable and effective way to estimate the bathymetry of global lakes and reservoirs. A coefficient of determination between predicted and reference depths of R=0.5 (N=7049) has been reported for global lakes and reservoirs (Messager et al. 2016). Therefore, this geostatistical approach was considered appropriate for estimating volumes of reservoirs that had only satellite-derived water extent observations.

The total number of Landsat images over North America, southern South America, southern Africa, central Eurasia, and Australia over the past three decades is much larger than over the rest of the world, and particularly than over tropical regions (Pekel et al. 2016; Wulder et al. 2016). Regions with sparse Landsat observations can have additional uncertainties in their long-term trend analyses, although this issue has been mitigated to some extent by the approach of Zhao and Gao (2018). In principle, the inflow of sediments into reservoirs could contribute to decreasing storage. However, Wisser et al.
(2013) showed that sedimentation caused a total decrease in global reservoir water storage of only 5% over a century (1901 to 2010), and hence we expect the effect of sedimentation on our 32-year analysis to be small.

Regional storage trends in the dam reservoirs found here are consistent with trends reported in a previous study of 200 lakes (including a few reservoirs) across North America, Europe, Asia and Africa during 1992-2019 (Kraemer et al. 2020). Both lakes and reservoirs are influenced by changing inflow and net evaporation in response to climate variability. Although human regulation has more influence on reservoirs than on natural lakes, our results suggest that overall human impacts on storage are smaller than natural influences. In line with the study carried out by Kraemer et al. (2020), we also found that the distribution of long-term trends in global lake and reservoir storage or level does not fully reflect the "wet gets wetter and dry gets drier" paradigm that some have predicted to occur due to anthropogenic climate change (Wang et al. 2012). Reservoirs in dry regions, such as southwest America, southeastern Australia and central Eurasia, have indeed seen decreasing combined storage, while those in wet regions, such as northern North America, have increasing storage. However, at the same time we found increasing storage in dry southern Africa and decreasing storage in wet southeastern South America. Additionally, total terrestrial water storage (i.e., the sum of groundwater, soil water and surface water) derived from GRACE satellite gravimetry for the shorter period 2002-2016 showed decreases in endorheic basins in central Eurasia and the southwestern USA and increases in southern Africa, consistent with our storage changes (Wang et al. 2018).

Given that reservoir storage dynamics are the net result of river inflows, net evaporation and dam (demand-related) water releases, we found a reasonable relationship between changes in river flow and reservoir storage, while changes in net evaporation do not seem to have affected storage trends significantly. We also infer that human activity (i.e. increased dam water releases) does not generally need to be invoked to explain changes in reservoir storage. However, there are no water demand and supply or dam operation data available globally that could serve as direct evidence, although there have been local studies. For example, reservoir operating rules (i.e. reservoir outflow) were inferred from a combination of hydrologic modeling and satellite measurements for the Nile Basin, the Mekong Basin, northwest America, and a forested region of Bangladesh (Bonnema and Hossain 2017; Bonnema et al. 2016; Eldardiry and Hossain 2019). It was not possible to apply the techniques used in these studies at global scale because of the resulting uncertainties in inferred reservoir inflows. To distinguish the respective influences of human activity and climate variability on reservoir dynamics, greater collaboration and public sharing of in situ data on reservoir storage, water release and downstream water use would be required. In some basins, satellite-derived upstream and downstream river discharge dynamics (Hou et al. 2020; Hou et al. 2018) and changes in irrigation area or evaporation may be able to provide additional information to better understand the drivers of reservoir water security.
The algorithm from Zhao and Gao (2018) could in principle be used to calculate reservoir surface water extent time series beyond 2015, but it is reliant on the availability of the Landsat-derived GSWD (Pekel et al. 2016). Such data could also be derived from MODIS or Sentinel-2, and help understand how reservoir water storage has changed from [...]

Conclusions

We reconstructed monthly storage dynamics between 1984-2015 for 6,743 reservoirs using satellite-derived water height and extent. For reservoirs with water extent data only, storage was estimated from surrounding topography. Over 90% of the estimated reservoir storage dynamics show robust correlations of R ≥ 0.7 (67% ≥ 0.9) against publicly available observed storage volumes for several reservoirs in the US, Australia and Egypt. Based on the developed global dataset, we found that reservoir storage changed significantly in nearly half of all basins worldwide between 1984-2015, with increases and decreases similarly common and mostly explained by corresponding precipitation and runoff changes. Increases appeared slightly more common in cooler regions and decreases more common in drier regions. We did not find evidence that changes in water releases or net evaporation contributed meaningfully to global trends. Changes in reservoir water storage appear to be predominantly determined by periods of low inflow in response to low precipitation. Future changes in precipitation variability are among the most uncertain predictions of climate models (Trenberth et al. 2014). Therefore, a prudent approach to reservoir water management appears to be the only available means to avoid water supply failure in individual river systems.
A Training-free and Reference-free Summarization Evaluation Metric via Centrality-weighted Relevance and Self-referenced Redundancy

In recent years, reference-based and supervised summarization evaluation metrics have been widely explored. However, collecting human-annotated references and ratings is costly and time-consuming. To avoid these limitations, we propose a training-free and reference-free summarization evaluation metric. Our metric consists of a centrality-weighted relevance score and a self-referenced redundancy score. The relevance score is computed between a pseudo reference built from the source document and the given summary, where the pseudo reference content is weighted by sentence centrality to provide importance guidance. Besides an F1-based relevance score, we also design an Fβ-based variant that pays more attention to the recall score. As for the redundancy score of the summary, we compute a self-masked similarity score of the summary with itself to evaluate the redundant information in the summary. Finally, we combine the relevance and redundancy scores to produce the final evaluation score of the given summary. Extensive experiments show that our methods can significantly outperform existing methods on both multi-document and single-document summarization evaluation. The source code is released at https://github.com/Chen-Wang-CUHK/Training-Free-and-Ref-Free-Summ-Evaluation.

Introduction

Text summarization systems have developed rapidly due to the appearance of sequence-to-sequence frameworks (Sutskever et al., 2014; Bahdanau et al., 2015; See et al., 2017), transformer architectures (Vaswani et al., 2017) and large-scale pre-training models (Devlin et al., 2019). How to accurately evaluate the summaries generated by these systems also attracts more and more attention in this research area. (* This work was mainly done when Wang Chen was an intern at Tencent AI Lab.) One of the most accurate evaluation methods is human evaluation. However, human evaluation is expensive, time-consuming, and non-reproducible. Thus, it is necessary to develop automatic evaluation metrics for text summarization systems. Existing automatic summarization evaluation metrics can be roughly categorized into two groups: reference-based metrics and reference-free metrics. In this work, we focus on reference-free metrics.

Reference-free summarization evaluation metrics have been developed in parallel for multi-document summarization and single-document summarization. The SOTA reference-free method for multi-document summarization evaluation, SUPERT, predicts a relevance score for each (document, summary) pair to estimate the informativeness of the summary and then averages all the scores from multiple documents as the final evaluation score. For each pair, SUPERT employs the top-ranked sentences, ranked by position or centrality, as a pseudo reference of the document and then applies BERTScore to produce a relevance score between the pseudo reference and the given summary. The SOTA single-document reference-free evaluation metric, LS Score, combines a learned linguistic scorer for the summary and a cosine similarity scorer for the (document, summary) pair to produce the final score. Although SUPERT and LS Score achieve the SOTA performance in their respective areas, they still have several drawbacks.
For example, SUPERT only considers the relevance score between the document and the summary while ignoring other aspects, such as how much redundant information is contained in the summary. Besides, SUPERT assumes that all pseudo reference sentences are equally important. However, in the real world, the key information of a document is unevenly distributed over sentences. Therefore, such an assumption may introduce extra noise into the evaluation. Note that although SUPERT may employ sentence centrality to select document sentences as a pseudo reference, it ignores the sentence centrality after the selection and still treats the selected sentences as equally important. As for LS Score, although it does not require a reference during the evaluation of a summary, it requires a large-scale training dataset with reference summaries to train the linguistic scorer. Besides the intrinsic drawbacks of these SOTA methods, to the best of our knowledge, there is no reference-free evaluation metric shown to achieve SOTA performance on both multi-document and single-document summarization.

To address the above limitations, we propose, based on SUPERT, a novel training-free and reference-free metric for both multi-document and single-document summarization evaluation. Our metric is composed of a centrality-weighted relevance score and a self-referenced redundancy score. For the relevance score, which is employed to estimate the informativeness of the summary, we incorporate the following new features. First, unlike previous work that only utilizes token-level representations, we adopt a hybrid approach that contains both token-level and sentence-level representations to encode the document and the summary. The purpose of the hybrid representation is to enable our method to consider richer mapping styles (i.e., token-to-token, sentence-to-token, and sentence-to-sentence) and to help produce a more comprehensive evaluation score. Second, we utilize the sentence centrality computed from the sentence-level representations of the source document to produce importance weights for the pseudo reference sentences and tokens. Based on these weights, we compute a weighted relevance score that is more precise because it considers relative importance. Third, besides the F1 version of our relevance score, we also propose an adaptive Fβ version where recall is considered β times as important as precision. β is computed based on the length ratio between the pseudo reference and the given summary. The motivation is to punish short summaries that can easily obtain high precision while covering very limited important information in the pseudo reference (i.e., low recall).

To measure the redundancy of a summary, we design a simple but effective self-referenced similarity score. If a summary contains much redundant information, there must exist plenty of semantically similar tokens or sentences. Based on this assumption, we use the summary itself as the reference and input a (summary, summary) pair into a self-masked BERTScore to produce a redundancy score that evaluates the average degree of semantic similarity of each token or sentence with the other tokens or sentences. After obtaining the centrality-weighted relevance score and the self-referenced redundancy score, we combine them to predict the final evaluation score. Depending on whether F1 or Fβ is applied in our relevance score, we propose two variants of our method: the F1-based version and the Fβ-based version.
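To make the self-referenced redundancy idea concrete, here is a small sketch. It is an illustration under assumptions rather than the authors' implementation: it operates on precomputed embedding vectors with plain cosine similarity instead of the full hybrid token/sentence BERTScore machinery, and the random vectors in the usage example merely stand in for SBERT outputs.

```python
# Sketch of a self-masked redundancy score: for each embedded unit (token or
# sentence vector) of the summary, take its highest cosine similarity to any
# *other* unit of the same summary, then average. High values indicate repetition.
import numpy as np

def self_referenced_redundancy(embeddings: np.ndarray) -> float:
    """embeddings: (n_units, dim) array of summary token/sentence vectors (assumed precomputed)."""
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T                           # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)          # self-mask: ignore each unit's similarity with itself
    return float(np.mean(sim.max(axis=1)))  # average of best matches, in [-1, 1]; lower is better

# Hypothetical usage with random vectors standing in for SBERT representations:
rng = np.random.default_rng(0)
print(self_referenced_redundancy(rng.normal(size=(12, 768))))
```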
Extensive experiments are conducted on both multi-document and single-document summarization datasets. The results show that our F1-based method already outperforms all the SOTA baselines on all datasets. Moreover, our Fβ-based method can further improve the performance on multi-document summarization datasets. Our contributions are summarized as follows: (1) a novel training-free and reference-free summarization evaluation metric that considers both relevance and redundancy; (2) a centrality-weighted relevance score that effectively utilizes the sentence centrality of the documents to provide importance guidance for the pseudo reference tokens and sentences; besides the F1 version, we also develop an Fβ-based relevance score that pays more attention to recall; (3) a self-referenced redundancy score that utilizes a self-masked BERTScore to detect duplicated information in the given summary; (4) to the best of our knowledge, ours is the first evaluation metric to achieve SOTA performance on both multi-document and single-document summarization under the reference-free setting.

Preliminary

Notations. We denote vectors as bold lowercase characters and matrices as bold uppercase characters. Characters that are not bold denote scalars. Calligraphic uppercase characters are used to represent sets.

Problem Definition. We formally define the reference-free summarization evaluation problem as follows. Given a set of documents D = {d_1, d_2, ..., d_K} and a generated summary x, the goal is to predict a score representing the overall quality of the summary. K = 1 and K > 1 indicate single-document and multi-document summarization, respectively.

Our Methodology

The overall framework is illustrated in Figure 1. Our final evaluation score of a summary consists of an averaged centrality-weighted relevance score and a self-referenced redundancy score. Both scores are calculated at the semantic level instead of using n-gram overlap. The averaged relevance score is computed from the relevance scores between the summary and each document in the document set. The redundancy score is calculated from the summary itself.

Centrality-weighted Relevance Score

Our relevance score aims to estimate the informativeness of the given summary. We first encode each document in the document set and the summary into hidden representations. Then, for each document, we select essential sentences by centrality to build a pseudo reference. Next, we compute a centrality-weighted relevance score between the summary and each pseudo reference. Finally, we average all the relevance scores as the final relevance score of the summary. We use the k-th document d_k and a summary x as an example to show the workflow.

Encoding. Following SUPERT, we first split the document d_k and the summary x into sentences. Then, the pre-trained SBERT is employed to encode the tokens of each sentence into token-level contextual hidden representations. We also apply max-pooling over all the tokens of a sentence to obtain the sentence-level hidden representation. Following previous work, when utilizing the token-level representations to compute the relevance and redundancy scores, we filter out non-informative tokens such as stop words to improve efficiency.

Building Pseudo Reference. We do not choose all the document sentences of d_k to evaluate the relevance of the summary.
This is because the whole document usually contains plenty of unimportant sentences, which may introduce extra noise into the relevance evaluation. Thus, we select important document sentences to build a pseudo reference r for the evaluation. The sentence selection is based on the centrality of each sentence, which is computed by the unsupervised algorithm PacSum (Zheng and Lapata, 2019) using the sentence-level representations. After obtaining the centrality scores of all sentences of the document, we choose the top-M sentences as the pseudo reference. Besides, we normalize the centrality scores to [0, 1] and denote the normalized centrality scores of the selected sentences as ā^s = [ā^s_1, ā^s_2, ..., ā^s_M], where ā^s_i ∈ [0, 1] and the superscript s means sentence-level. We denote this pseudo reference building process as PacSumTopM.

Computing Relevance Score with One Pseudo Reference. Instead of only using token-level representations, we also leverage the sentence-level representations to provide multi-level information. This yields hybrid representations X of the summary x and R_k of the pseudo reference r, where n and N (m and M) are the token number and sentence number of the summary (pseudo reference), and w and s represent the token and sentence hidden representations, respectively.

Besides the hybrid representations, we also introduce a centrality weighting scheme to weight the tokens and sentences of the pseudo reference, which is different from previous work that either treats them equally or uses surface statistics such as IDF as the weights. Based on the centrality scores of the selected pseudo reference sentences, i.e., ā^s = [ā^s_1, ā^s_2, ..., ā^s_M], we assign each pseudo reference token w_j the weight ā^w_j = ā^s_{i: w_j ∈ s_i}, i.e., the token w_j inherits the centrality score of the sentence s_i that contains it. Since we have already removed the non-informative tokens from the token-level representations of each sentence, the remaining tokens capture the key information of the sentence, and consequently it is reasonable to perform such a weight inheritance.

Next, we combine the token weights ā^w and the sentence weights ā^s, written [ā^w; ā^s] where "[·; ·]" represents concatenation, to get the final normalized centrality-based weights a of the hybrid representations. Based on the hybrid representations (i.e., X and R_k) and the centrality-based weights of the pseudo reference tokens and sentences (i.e., a), we compute the relevance score between the summary and the pseudo reference by a weighted BERTScore. For brevity, we denote the j-th element of X as x_j, the i-th element of R_k as r_i, and the i-th element of a as a_i; the weighted Recall, Precision, and F1 are computed from the cosine similarities Sim(r_i, x_j), with the weights a_i applied on the pseudo reference side, where |X| equals n + N. Recall, Precision, and F1 are in the range of [-1, 1].

Besides the F1 version, we also propose an adaptive Fβ version of the relevance score, where |R_k| = m + M, |X| = n + N, and γ is a positive integer hyper-parameter. In our experiments, γ is set to 2 after fine-tuning on the validation dataset and is fixed for all the testing datasets. The physical meaning of β is that the Recall score is considered β times as important as the Precision score. In summarization evaluation, the coverage of the key information is always the most important quality indicator of a summary; thus, we set the lower bound of β as 1. On the other hand, the metric should not only evaluate key information coverage; containing less unimportant content in the summary should also be considered. Therefore, we set the upper bound of β as √2. Within the range of [1, √2], β adaptively changes according to the ratio between |R_k| and |X|. The intuition is that a longer pseudo reference implies more key information needs to be covered by the summary. Besides, a shorter summary can easily obtain high precision while covering very limited important information in the pseudo reference. Thus, we give Recall a higher weight to punish such short summaries when the pseudo reference is long.
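A minimal sketch of this centrality-weighted relevance computation and the adaptive β is given below. Because the corresponding equations are not reproduced in this text, the exact placement of the weights in the precision term and the mapping from the length ratio to β (through the hyper-parameter γ) are assumptions, and the function names are illustrative.

```python
import numpy as np

def weighted_relevance(R, X, a, beta=1.0):
    """Greedy-matching relevance between pseudo reference R and summary X.

    R: (|R|, h) hybrid (token + sentence) embeddings of the pseudo reference
    X: (|X|, h) hybrid embeddings of the summary
    a: (|R|,)  normalized centrality weights of the pseudo reference elements
    beta: recall is treated as beta times as important as precision
    """
    # Pairwise cosine similarities between reference and summary elements.
    Rn = R / np.linalg.norm(R, axis=1, keepdims=True)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = Rn @ Xn.T                             # shape (|R|, |X|)

    # Recall: each reference element matched to its best summary element,
    # weighted by its centrality score (assumption: weights on the reference side).
    recall = np.sum(a * sim.max(axis=1)) / np.sum(a)

    # Precision: each summary element matched to its best reference element.
    precision = sim.max(axis=0).mean()

    # F-beta combination; beta = 1 recovers the F1 variant.
    denom = beta**2 * precision + recall + 1e-8   # guard against a zero denominator
    return (1 + beta**2) * precision * recall / denom

def adaptive_beta(len_ref, len_summary, gamma=2):
    """Assumed form of the adaptive beta: grows with |R|/|X|, clipped to [1, sqrt(2)]."""
    ratio = len_ref / max(len_summary, 1)
    return float(np.clip(ratio ** (1.0 / gamma), 1.0, np.sqrt(2)))
```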
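The self-referenced redundancy score described above admits a compact sketch. Since the exact equation is not reproduced in this text, the averaging scheme below is an assumption and the names are illustrative.

```python
import numpy as np

def self_referenced_redundancy(X):
    """Self-masked, BERTScore-style redundancy of a summary.

    X: (|X|, h) hybrid (token + sentence) embeddings of the summary itself.
    Returns the average of each element's best similarity to a *different* element;
    higher values indicate more semantically repeated content.
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = Xn @ Xn.T
    np.fill_diagonal(sim, -np.inf)        # self-masking: ignore Sim(x_i, x_i)
    return float(sim.max(axis=1).mean())  # symmetric, so precision == recall == F1
```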
Table 1: Statistics of datasets. "Valid." and "Test." indicate whether the dataset is used for validation or testing. "|Topic|" is the number of topics; under each topic, a set of documents is given and summaries come from different systems with human-annotated quality scores. "|Set|" is the number of documents in the document set. "Ave.S" and "Ave.T" represent the averaged sentence number and token number per document or summary (token numbers are counted after tokenization). "|Systems|" denotes the number of summarization systems in the dataset.

For TAC datasets, we compute correlation coefficients between the predicted scores of an evaluation method and the annotated Pyramid scores of summaries to measure the effectiveness of the method. Following Gao et al. (2020), a correlation is computed for each topic, and the averaged correlation over all topics is used as the final correlation of the method with human ratings. For the CNNDM dataset, correlations are calculated with the human scores in three dimensions: Overall, Grammar, and Redundancy. Following prior work, the correlation is computed between the predicted scores of the 499 × 4 = 1996 (document, summary) pairs and the corresponding human ratings.

Baselines

In this section, we briefly introduce our baselines. We choose TF-IDF, JS (Louis and Nenkova, 2013), and REAPER as traditional reference-free baselines. All these traditional baselines do not build pseudo references and directly utilize the full content of the documents. For fairness, we also show the performance of our methods without building a pseudo reference; we denote them as Ours(F1)-All and Ours(Fβ)-All since they use the whole document as the reference. We also extend several popular reference-based methods as baselines. We adapt ROUGE-1/2/L (Lin, 2004), MoverScore, and S+WMS to the reference-free scenario by building the pseudo reference with the PacSumTopM method, and we add the suffix "-PacSumTopM" to these baseline names to indicate the pseudo reference building process. Besides, the SOTA reference-free summary evaluation metrics are also selected as strong baselines, including C-ELMO/C-SBERT (Sun and Nenkova, 2019), SUPERT/SUPERT-IDF, and LS Score. C-ELMO (C-SBERT) encodes the document and the summary using the pre-trained ELMO (SBERT) and then computes their cosine similarity. SUPERT-IDF is an extension of SUPERT which utilizes the inverse document frequency (IDF) as the importance weight of each token. For fair comparison, we also apply the same pseudo reference building process, i.e., PacSumTopM, to C-ELMO/C-SBERT/SUPERT/SUPERT-IDF and add the suffix "-PacSumTopM" to their names.

Main Results

The main experimental results on multi-document summarization datasets are shown in Table 2. We find that our F1 version (i.e., Ours(F1)-PacSumTopM) already consistently outperforms all the baselines, which indicates the effectiveness of our centrality-weighted relevance score and our self-referenced redundancy score. The results also demonstrate that our Fβ version can further improve the performance of multi-document summarization evaluation. By comparing Ours(Fβ)-PacSumTopM and Ours(Fβ)-All, we see that the pseudo reference building process can significantly improve the performance. This is also the reason why we apply the same pseudo reference building process to the SOTA baselines for fair comparison. In the remaining part of this paper, we omit the suffix "-PacSumTopM" for simplicity when we mention a method.

We also test our methods on the single-document summarization dataset without further fine-tuning the hyper-parameters. The main results are displayed in Table 3. We note that our F1 version still outperforms all the baselines, which manifests the high generalization ability of our F1-based method. One interesting finding is that the performance significantly drops after incorporating the Fβ score. To study the reason for the performance degradation on CNNDM after incorporating Fβ, we first compare the CNNDM and TAC datasets. From Table 1, we note that the main differences between them are the size of the document set for each topic (i.e., |Set|) and the number of summarization systems (i.e., |Systems|). CNNDM has much smaller |Set| and |Systems|.
We use the TAC-2011 dataset as an example to investigate whether our Fβ is unsuitable for smaller |Set| and |Systems|. We change |Set| and |Systems| respectively and report the gap of Spearman's ρ between Ours(Fβ) and Ours(F1) in Figure 2.

Figure 2: Gap of Spearman's ρ between Ours(Fβ) and Ours(F1) on TAC-2011 for different |Set| and |Systems|. Positive gaps mean our Fβ improves the performance, while negative gaps indicate our Fβ degrades the performance. When changing one of them, the other is fixed. "all" means the full size is applied, i.e., 10 for |Set| and 50 for |Systems|.

From the results, we observe that our Fβ can consistently improve the performance for different |Set|. For the single-document summarization setting, i.e., |Set| = 1, it still obtains a positive gap. Nevertheless, when |Systems| is small, such as 4, applying our Fβ leads to a dramatic performance drop. From Table 1, we also see that CNNDM and TAC-2011 have different summary lengths (73.2 for CNNDM and 120.9 for TAC-2011). However, when we limit the |Systems| of TAC-2011 to smaller numbers, the average length of generated summaries is still around 120, which indicates that the performance degeneration indeed comes from the change in system numbers. Therefore, we suggest using Ours(Fβ) when |Systems| is large (e.g., 12) and employing Ours(F1) when |Systems| is small (e.g., 4).

Ablation Study

To better understand the contributions of our proposed components, we conduct ablation studies on the best-performing method on each dataset, i.e., Ours(Fβ) for the multi-document summarization datasets and Ours(F1) for the single-document summarization dataset. We display the results of the rank-based Spearman's ρ in Figure 3.

Figure 3: Ablation studies for Ours(Fβ) on TAC datasets and Ours(F1) on CNNDM. "-CentralityW." means that we remove the centrality weighting when computing relevance scores. "-HybridR." means we only utilize the token-level representations when calculating relevance and redundancy scores. "-Redundancy" indicates we omit the redundancy score.

As shown in the figure, after removing one of the three components (i.e., the centrality weighting, the hybrid representation, and the redundancy score), the performance of our methods becomes worse in most cases. This finding demonstrates the effectiveness of our proposed components. Besides, we also note that removing the redundancy score significantly degrades the performance on the redundancy evaluation on CNNDM, which indicates that our redundancy score effectively captures the redundancy degree of the summaries.

Apply Centrality Weighting and Redundancy Score into MoverScore

Besides building on BERTScore, we also study whether our key features, i.e., the centrality weighting and the redundancy score, can work well in a MoverScore-based framework (i.e., the relevance and redundancy scores are computed using MoverScore). Note that our Fβ is not applicable to MoverScore since it is not an F-measure. The results are listed in Table 4.

Table 4: Spearman's ρ of incorporating the centrality weighting and redundancy score into the MoverScore-based framework. "+Both" means these two features are applied simultaneously.

We find that these two features significantly improve the performance of the original MoverScore on single-document summarization evaluation while degrading the performance dramatically on multi-document summarization evaluation. On CNNDM, the enhanced MoverScore even outperforms Ours(F1) on the "Overall" and "Redundancy" aspects, which indicates that MoverScore is a promising basis for our proposed new features. We leave solving the performance drop of the enhanced MoverScore in the multi-document setting to future work.
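The evaluation protocol used in these experiments, computing a correlation per topic and then averaging across topics, can be sketched as follows; the function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

def corpus_correlation(predicted, human, topics, method="spearman"):
    """Average per-topic correlation between metric scores and human ratings.

    predicted: list of metric scores, one per (topic, summary) pair
    human:     list of human ratings (e.g., Pyramid scores), aligned with `predicted`
    topics:    list of topic ids, aligned with `predicted`
    """
    corr_fn = spearmanr if method == "spearman" else kendalltau
    per_topic = []
    for t in sorted(set(topics)):
        idx = [i for i, x in enumerate(topics) if x == t]
        p = [predicted[i] for i in idx]
        h = [human[i] for i in idx]
        per_topic.append(corr_fn(p, h)[0])   # keep the correlation coefficient only
    return float(np.mean(per_topic))
```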
Robustness Analysis

We investigate the robustness of our method with respect to the following factors and report the experimental results on the validation dataset (i.e., TAC-2010) in Figure 4: (1) the hyper-parameter λ for scaling the redundancy score; (2) the hyper-parameter γ in Fβ; (3) the number of selected sentences for the pseudo reference, i.e., M; and (4) different pre-trained contextual encoding models, including BERT-base, BERT-large, RoBERTa-base, and RoBERTa-large. Since both Spearman's ρ and Kendall's τ are rank-based correlation coefficients, we omit Kendall's τ for simplicity. From this figure, we observe that the performance of our method is relatively stable for different λ and γ. We also find that a small M leads to lower correlations because much important information may be abandoned when building the pseudo references, but a large M also degrades the correlations since more noise is introduced; thus, a moderate M is better. As for encoding models, we note that large encoding models obtain better performance than base encoding models. However, large models need more computation resources and time to encode the input text. Note that for our final method, we only fine-tune λ and γ on TAC-2010 and set them to 0.6 and 2. As for M and the encoding model, following the configuration of SUPERT, we directly set M to 12 and employ BERT-large as the encoding model. All these factors are fixed for all testing datasets.

Performance on Bad/Good Summaries

In this section, we evaluate the ability of our method to distinguish bad and good summaries, where the bad and good summaries are selected by human ratings. We use TAC-2011 as an example and choose SUPERT as a strong baseline. The corresponding distributions of the reversed rank for bad and good summaries are illustrated in Figure 5. A smaller (larger) reversed rank means the summary is assigned a lower (higher) score. From the figure, we find that, compared with SUPERT, Ours(Fβ) has a better ability to assign bad summaries lower scores and good summaries higher scores, which demonstrates the effectiveness of our method again. Moreover, we also note that both SUPERT and Ours(Fβ) are good at giving bad summaries lower scores while having difficulty in assigning good summaries higher scores. We leave solving this problem as another direction for future work under the reference-free setting.

Related Work

Conclusion

In this paper, we propose a novel training-free and reference-free summarization evaluation metric consisting of a relevance score and a redundancy score. Experiments in multi-document and single-document summarization settings show the effectiveness of our methods. One promising future direction is to solve the performance drop that occurs after applying our key features to MoverScore; another is to tackle the problem that current metrics struggle to assign higher scores to good summaries.
Factors Impacting Retention of Aged Care Workers: A Systematic Review Retention of care support workers in residential aged care facilities and home-based, domiciliary aged care is a global challenge, with rapid turnover, low job satisfaction, and poorly defined career pathways. A mixed-methods systematic review of the workforce literature was conducted to understand the factors that attract and retain care staff across the aged care workforce. The search yielded 49 studies. Three studies tested education and training interventions with the aim of boosting workforce retention and the remaining 46 studies explored opinions and experiences of care workers in 20 quantitative, four mixed-methods and 22 qualitative studies. A range of factors impacted retention of aged care staff. Two broad themes emerged from the analysis: individual and organisational factors facilitating retention. Individual factors related to personal satisfaction with the role, positive relationships with other staff, families, and residents, and a cooperative workplace culture. Organisational factors included opportunities for on-the-job training and career development, appropriate wages, policies to prevent workplace injuries, and job stability. Understaffing was often cited as a factor associated with turnover, together with heavy workloads, stress, and low job satisfaction. With global concerns about the safety and quality of aged care services, this study presents the data associated with best practice for retaining aged care workers. Introduction Globally, one of the most rapidly growing sectors of the care economy is aged care [1].People in developed countries can expect to live at least a decade longer than the global average of 73 years [2].Of concern, the proportion of the population requiring health and care services is increasing due to an ageing demographic [3].At the same time, the share of the population that is able to deliver services is decreasing [4].The size of the frontline aged care workforce, such as personal care attendants, nursing assistants, and allied health assistants, will need to quadruple to meet the demand of ageing societies [5].The aged care workforce is reliant on a supply of workers that predominantly consists of women with few qualifications, in labour-intensive roles [6,7].These staff provide essential management of mobility, cognitive impairment, toileting, bathing, and feeding [8].It is estimated that more than half of aged care residents live in facilities with insufficient staffing for person-centred care [9]. Upwards of 25% of aged care workers in countries such as the United Kingdom and Australia spend less than one year in care support roles [10][11][12].The median annual turnover for nursing assistants in the United States is nearly 99% [13].Staff turnover in aged care settings is negatively associated with the quality of care [13,14].Turnover is also associated with falls [15], infection rates [16], and low resident and staff satisfaction [17,18].Supervisors of the care workforce are usually registered and enrolled nurses who also have critical staff shortages [19].To improve care experiences and outcomes for older people, care workers need to be trained, safe, and adequately supported to be retained in the workforce. 
Evidence-informed, tailored strategies for workforce redesign are arguably needed to recruit and retain aged care workers [20] and to prevent burnout [21] and safety incidents [22].A variety of rewards are recognised and valued by staff [23].Incentives include adequate pay rates [24], safe and positive working conditions [25,26], and suitable geographical locations for employment [27].The extent to which other determinants of workforce retention, such as comprehensive orientation and ongoing training at the workplace, apply to aged care workers is currently not known [28]. There is a need to better align the motivations and needs of aged care workers to the requirements of aged care support roles, to improve the length of service tenure [23].The main aim of this systematic review is to identify, summarise, and aggregate the quantitative and qualitative evidence related to the retention of care workers in aged care homes, respite care, retirement villages, and home settings.A second aim is to identify barriers and facilitators to care staff retention in the aged care sector. Materials and Methods A mixed-methods design following the JBI convergent integrated approach [29] was used to synthesise quantitative and qualitative evidence.This approach reflected the complexity of research questions in health and social care and afforded deep understanding of the issues [30].The results were reported in accordance with the preferred reporting for systematic reviews and meta-analyses (PRISMA) guidelines [31] (Table S1) and the enhancing transparency in reporting the synthesis of qualitative research (ENTREQ) checklist [32] (Table S2).The review was prospectively registered with the International Prospective Register of Systematic Reviews (PROSPERO) (CRD:42023440055). 
Search Strategy A systematic literature search identified articles from Medline, Embase, PsycINFO, CINAHL, and AgeLine.Articles were limited to peer-reviewed, primary studies published in the English language from January 2012 to July 2023 to provide a contemporary overview of the evidence.No geographical limitations were placed on the search.The search was developed and conducted with a university academic librarian, with keywords informed by MeSH headings and focused on the concepts of population (i.e., nursing assistant and other relevant terms), setting (i.e., residential aged care), and outcomes of interest.These outcomes included, but were not restricted to (1) workplace satisfaction: job status, turnover or intention to leave, stress (burnout, occupation, job), attitudes, perceptions, workplace violence or danger, absenteeism, job fulfilment, conflict resolution; (2) barriers and facilitators: motivation, supports, challenges; (3) workload: rostering, supervision, understaffing, recruitment, retention, empathy or compassion fatigue; (4) professional development: education, ongoing training, job or career flexibility, professional identity, judgement, career opportunities and pathways; (5) clinical environment: working conditions, leadership, manager support, mentorship, occupational health and safety; and (6) interventions to optimise recruitment and retention.Additional articles were searched for via reference lists of included studies, relevant systematic reviews, trial registries (i.e., WHO International Clinical Trials Registry Platform, ClinicalTrials.gov,and Australian New Zealand Clinical Trials Registry), and grey literature (i.e., Google Scholar, social and aged welfare organisational websites).The search strategy for Medline is detailed in Table S3.Identified articles were imported into Covidence [33] and duplicates removed. Eligibility and Screening An aged care worker was defined as a person employed to independently deliver direct care under the overall supervision of a registered or enrolled nurse [34] or under the supervision of an allied health professional or medical practitioner, within an aged care setting (residential aged care or home-based aged care).A widely accepted definition of older person is over 60-65 years [35]; however, we did not apply age limits to inclusion criteria.Residential aged care facilities (also known as care homes, long-term care facilities or nursing homes) were defined as facilities providing 24 h nursing and care to older adults [36].Home-based care (domiciliary care) was defined as the provision of scheduled, as-needed, healthcare within an older person's home.In the case of a study including both aged care workers and other employees within aged care, the study was only included if the data related to aged care workers could be separated or the number of aged care support workers represented a clear majority (>70%) of participants.Analysis of the professional care workforce such as doctors, nurses, and physiotherapists was not performed as this has been covered in a separate manuscript [37].Datasets were limited to the past 10 years to focus on contemporary evidence concerning the aged care sector, noting changes including the removal of distinction between high-and low-level care, establishment of "home care" and means testing for contributions to care [38].The inclusion and exclusion criteria are listed in Table 1.The title, abstract and full-text articles were screened independently by two authors (C.T., J.M.) 
who applied the criteria from Table 1 to determine eligibility.Disagreements were discussed and resolved, involving a third reviewer (M.M.) when necessary. Methodological Quality The mixed-methods appraisal tool (MMAT) [39] was used by two authors independently (C.T., J.M.) to assess methodological quality of the selected studies.Consensus was achieved through discussion among the two reviewers, with incorporation of a third author (M.M.) as required.The design of the MMAT allows for analysis of studies across a range of methods, with different criteria to describe and evaluate mixed methods, qualitative and quantitative studies.These criteria assess factors that impact risk of bias, completeness, and transparency of studies.Following updated recommendations [40], MMAT scores were not allocated a numerical value; instead, studies were ranked low, moderate, or high quality.Due to the limited literature, all studies were included in data synthesis; however, study quality was considered in interpreting the synthesised evidence. Data Extraction and Analysis A data extraction sheet was purposefully developed, based on the JBI mixed methods data extraction form [30], and used to collect data related to the study aims, setting, design and methodological framework, sampling methods, data collection methods, demographics of the care worker populations, and key findings related to review outcomes.For interventional studies, an abbreviated template for intervention description and replication TIDieR checklist [41] was used to capture details of the intervention, who provided it, and how it was delivered.A single author (C.T.) completed this process with data checked for accuracy, completeness, and quality by another author (M.M.).Where studies included aged care support workers and other employees in aged care, only the data relating to care workers were extracted where possible. Quantitative data were tabulated, with descriptive statistics, p-values, and effect sizes for mean score differences presented where available.The results were then narratively synthesised.Originally, the research team had planned to calculate change scores where relevant data, such as workforce attrition or retention rates, were sufficiently reported; however, limited data availability prevented this.Qualitative data pertaining to the research question were extracted and managed using NVivo software (release 1.7) [42].An interpretive description approach was followed to allow for representation of subjective experiences at both individual and wider population levels [43].The data were analysed thematically, with initial reading and rereading for deep understanding [44].Key words, phrases, and sentences relevant to the research question were synthesised and codes developed.These were refined and grouped into themes and subthemes, then reviewed and discussed (C.T., M.M.) until consensus reached. Following the JBI mixed methods review methodology [30], the results of quantitative and qualitative components were aggregated.The narratively synthesised quantitative results were categorised and summarised to create subcategories to fit together with the qualitative synthesis.This generated a new set of organised findings regarding factors influencing aged care worker retention.A subgroup analysis pertaining to the care setting was not performed because there were minimal home-based care studies meeting the inclusion criteria. 
Identified Studies

As seen in the PRISMA flow diagram (Figure 1) [31], a total of 2506 studies were identified through database searches, with an additional 29 studies found through citation searching, websites, or organisation releases. Following removal of duplicates, 2460 studies were independently screened for eligibility, with 143 assessed as potentially eligible. Some studies (n = 26) were excluded because data related to aged care workers could not be separated from professionals or administrators working in aged care. Some were excluded because data were derived from assessments that occurred prior to 2012 (n = 21), or outcomes were not related to factors influencing retention of aged care workers (n = 16). In total, 49 peer-reviewed, English language studies reporting primary research results were included (23 quantitative studies, 4 mixed-methods investigations, and 22 qualitative analyses). Grey literature was reviewed by two assessors. This led to identifying gaps in research needing to be addressed, including exploration of the specific needs of migrant care workers, the workforce challenges in respite aged care, and strategies to improve recruitment of the aged care workforce, particularly in regional and remote communities.

Figure 1: PRISMA flow diagram [31] outlining the study selection process.

Study Characteristics

The characteristics of studies are presented in Tables 2-4. Three studies implemented strategies to improve retention. Of the studies applying an intervention, two were randomised controlled trials [45,46] and one used participatory action research with pre-post assessments [47]. The use of descriptive surveys and questionnaires with a cross-sectional (n = 20) or longitudinal (n = 2) design was reported in all quantitative descriptive and mixed-methods studies. The exception was the trial by Sharma et al. (2022) [48], who used human resource records to assess the relationship between wages and aged care staff turnover. Qualitative study designs included interpretative (n = 18), long ethnographic (n = 2), grounded theory (n = 1), and phenomenological approaches (n = 1). For these studies, individual interviews were most common (n = 14), followed by focus groups (n = 5) or a combination of data collection methods (n = 3). Supplementary Tables S4-S6 show that purposive sampling was most favoured (n = 33). Supplementary Tables S4 and S5 highlight a wide range of outcomes relevant to care worker retention. These were captured using surveys on workplace culture, work-related injuries, job satisfaction, leadership styles, mental health of staff, intention to leave, wages, work-related stress, and the impact of the COVID-19 pandemic on care workers.

Major themes included overcoming prejudice from residents and locally born peers; the importance of peers with a similar cultural background; the benefits of working within an in-demand sector; the cost-effectiveness of working and living in a regional community; and the challenges of communication being an essential part of the role while coming from an English as a second language background.

Abbreviations: ACA: aged care assistant; AHA: allied health assistant; CA: care aide; CaLD: culturally and linguistically diverse; CG: control group; CNA: certified nursing assistant; DCW: direct care worker; DON: director of nursing; LN: licensed nurse; HCW: home care worker; HHA: home health aide; IG: intervention group; LTCF: long-term care facility; MMAT: mixed methods appraisal tool; NA: nurse assistant; NH: nursing home; NHA: nursing home administrator; PCW: personal care worker; PSW: personal support worker; RACF: residential aged care facility; RCT: randomised controlled trial; RN: registered nurse; SNF: skilled nursing facility.

Studies were conducted across the globe, with most in the United States (n = 18), followed by Australia (n = 10), Canada (n = 5), Sweden (n = 5), and Taiwan (n = 4). Others were from Hong Kong, China, Korea, Austria, France, Denmark, the Netherlands, and Slovenia, providing global insight into factors influencing aged care worker retention. Sample sizes varied; for example, they ranged from the Charlesworth et al. (2020) [53] subgroup analysis of a nationwide aged care workforce survey, investigating the effect of migrant status on casual and underemployment amongst 7114 personal care attendants from residential and home aged care services, to seven migrant aged care workers providing insight on their experiences working in regional areas in the study by Winarnita et al. (2022) [93]. Supplementary Tables S4-S6 show that 77-100% of participants were female. Factors related to migrant aged care workers were explored in nine studies [53,56,59,64,66,72,81,88,93]. Overall, the studies included people employed as personal care attendants, certified and non-certified nurse assistants, direct care workers, patient support workers, home health assistants, and allied health assistants, in addition to professional and administrative staff. Most studies (n = 47) were conducted in residential aged care facilities. Only four studies included staff working in home-based care [53,80,81,86].
Convergent Qualitative Synthesis Data synthesis was aligned to two main themes of individual and organisational factors pertaining to staff retention.Individual or personal factors were defined as variables important to the care workers because of personal beliefs and life experience, in or out of the workplace.Organisational factors were defined as variables that could be changed or influenced by the organisation with impact on the care workers.These were derived from thematic analysis of the included studies, with the ecological framework of McLeroy (1988) [94] used as a stimulus during the initial formation of themes to conceptualise and understand aged care worker retention.According to the McLeroy framework, levels of influence are factors that examine relationships of individuals within their workplaces, communities, and broader society.Detailed synthesised findings are found in Supplementary Table S7. Three intervention studies sought to improve individual factors for aged care workers (Supplementary Table S4).The randomised trial by O'Brien et al. (2019) [47] reported that group-based cognitive behavioural training promoting flexibility in response to workplace stress in care home staff significantly reduced staff absences and mental health symptoms.The participatory action research by Ericson-Lindman et al. (2017) [46] involved collaborative, group-based training with registered nurses and nursing assistants together, whereby scenarios involving troubled conscience in the workplace were explored.By learning how to constructively deal with feelings of being unable to deliver a quality of care expected of themselves, participants noted an improvement in work-related performance and social support between workers.The trial by Jeon et al. (2015) [45] applied a randomised design to assess the effect of providing care home middle managers with a 12-month supported leadership program and found that both supervisor support and management behaviour towards care workers significantly improved; however, turnover and intention to leave did not differ between groups. Organisational Factors At a macro-organisational level, local rules, regulations, opportunities, and constraints affected staff retention.There were some reports of a lack of continuous, competent staff [74,91] and perceived understaffing [77][78][79]81,84,[89][90][91][92].This was thought to limit opportunities for person-centred care [76,85,86,92] and increased turnover [85].Whilst casual employment offered flexibility [56,72,93] and higher payrates per hour, care workers expressed a desire for schedule control [77,89].Teams where care workers were empowered to manage rostering found absences less impactful [50].Having job stability was found to be important in reducing workplace stress [91] and increased retention [69].However, many care workers reported lower job stability when providing home care, compared to those employed by a residential care facility, due to clients' ability to enact personal preferences for care workers [86]. 
Local rules and role structures adversely affected care workers, with burnout [71] and low job satisfaction associated with long shift lengths, split shifts, or insufficient break allowances [71, 73,86].Heavy workloads [69], uncertainty of scope of practice [86], and limited capacity for older workers to perform nonphysical tasks [91] negatively impacted intention to stay [86].Additionally, work-related injuries were reported at higher rates in older, female care workers [54], in migrant workers, and where assistive devices were not used for manual handling of residents [66,70], with an increased level of intention to leave [54]. The education and training of aged care staff were key determinants of retention.There were reports of insufficient onsite training before starting work [75,77,81,85], with some care workers feeling underprepared for both the physical demands of the role, and the emotional aspects of caring for older adults, especially at end of life [78][79][80].Some migrant workers with overseas nursing qualifications reported the benefits of taking indemand, lower-skilled roles in aged care whilst waiting to finalise nursing registration in their new country [56,72,88,93].Others reported stress associated with a lack of career pathways, training, or formal education [51,73,82,86,88].When career opportunities were available, they were often difficult to access financially or geographically, or not relevant to the scope of work [86].Low local unemployment levels also impacted retention, with jobs in other industries, such as retail, offering similar salaries to aged care work, but with lower perceived work demands [58,77].Facilitators to retention included promotion opportunities [69], schedule or roster management by the care workers, and safety training [90].When a workforce was unionised with greater collective power [49] or a facility had lower proportions of residents with psychiatric illnesses [58], retention of care workers was higher. The policy setting influenced worker behaviours.Policies regarding minimum ratios of professional to care worker staff or staff to resident ratios differed globally [60,62,[68][69][70].Supportive supervision levels were reported to be greater in the presence of higher professional-to-care worker ratios [60,70].Whilst some studies reported the positive aspects of the requirement to have dedicated qualified nurses supervising care workers [60,68,87], there were significant challenges faced by nurses in managing a casualised, high-turnover workforce with need for continual training of new staff [89].Despite efforts by governments worldwide to reform the working conditions of this sector, some care workers in our review expressed frustration with working in an unregulated wage system [86].They felt that retention would improve if wage rises were associated with excellent performance [85] or length of tenure [89].Increased rates of workplace injuries [70] and intention to leave [69] were reported when care worker staff-to-resident ratios were low.This was exacerbated by the COVID-19 pandemic, where restrictions on other health professionals placed an additional burden on care workers to perform duties beyond their scope of practice [83]. 
Discussion Care staff are highly sought after worldwide yet retaining them in aged care roles is challenging.This review shows that the intention of personal care assistants, nurse assistants, and allied health assistants to stay working in the aged care sector was related to individual and organisational factors.Policies on wages, staff ratios and safety standards also impacted recruitment and retention.Our review identified studies from countries across European, Oceanian, Asian, and North American regions where life expectancy is typically longer [35].We acknowledge that the nature of work for older adults' care personnel in these regions will differ, as people spend more time in need of care, so knowledge of advance care planning and palliative care is required [95][96][97].It is further noted that aside from population size and life expectancy, there are differences between countries regarding social security systems, impact of religion, and local economic situations [98,99].These are some of the "key elements" of the older adult care workforce that affect turnover intention. Our review highlighted that several individual factors were related to workforce retention.More positive lived experiences working in aged care were related to better retention, as was high job satisfaction.Retention was better when care workers felt supported by peers and supervisors and when there was capacity in their workloads to provide person-centred care.In agreement with a recent review on caring self-efficacy in direct care workers [100], we found that being able to establish compassionate relationships and meet the needs of residents was a key driver to remaining in the workforce [73,82,84].Reducing sources of care worker stress also helped [73,75,[84][85][86]89].Evaluation of effectiveness of retention strategies was limited due to only three studies implementing an intervention. Organisational factors also played a role.Retention was stronger when managers had positive leadership styles.Local procedures regarding staff rosters, shift lengths, split shifts [71, 73,86], and enabling staff to contribute to roster management were facilitators [58,64,91].The study by Brown et al. (2016) [50] described a staff responsibility approach where aged care workers were responsible for rostering.There was an enhanced ability to manage workloads, and this was associated with lower turnover rates compared to other care homes.As with recent government reports [36,101], we found that workplace health and safety was a predictor of retention, with workplace culture impacting whether occupational recommendations were adhered to [66,70,73,76].For employers, organisational factors such as creating a positive workplace culture, ensuring good communication between leadership and staff, and ensuring a safe workplace through appropriate equipment-and facility-specific training aided care worker retention. 
Across the world, there is a shift in aged care service provision to be more personcentred and value-based [18,102,103].The quality of care remains dependent on recruiting and retaining very large numbers of the care workers providing the bulk of direct care to residents [104][105][106].In recent years, care quality has come under increased scrutiny, and practice and policy concerns have been raised about staffing levels, recruitment, and retention [107].Some have proposed that a need exists for better regulation of this workforce, with safety and quality of care at risk [36].The recent Australian Royal Commission into Aged Care Quality and Safety (2021) [36] claimed that over 30% of people accessing residential or home-based aged care services experience substandard levels of routine personal or comprehensive care.Within our review, some care workers also shared the sentiment that care quality is not meeting basic expectations [84,85,89].Some care staff expressed frustration with services not covering staff absences [77][78][79]84,[89][90][91], insufficient pre-employment training [75,[77][78][79]85], or poor cooperation across teams [69,75,77,90].Some care workers felt underprepared for the emotional engagement required to deliver care in an empathetic and meaningful way.In some countries, the aged care workforce is multicultural and includes first-or second-generation migrants in low-paid roles [108].Similar to other research in this area [109,110], migration of people suited for aged care work was hindered by visa pathways that channelled them into "low-skilled" nonprofessional care roles [72,81,93].Migrant aged care workers are more likely to be on casual contracts [53,56].Often, they seek more work hours, hold multiple jobs, and work at a lower skill level than afforded by their overseas qualifications [10,53,111].A recurring theme was that the cultural diversity and cultural competence in the aged care sector needs to be optimised to accommodate care worker needs and to give staff opportunities for education and training [112]. There were several limitations of our review.Due to the low yield and heterogeneity of quantitative data, we were unable to complete a meta-analysis.Also, the review yielded few articles on retention of staff in home-based care, despite the preference of many older people to live in their own home as they age [113,114] and the known issues of staff shortages in this sector [110].It was anticipated that findings would include examination of the impact of vaccine requirements during the COVID-19 pandemic, yet this did not emerge in the results.Also, we were not able to perform a detailed policy analysis due to variations across countries and exclusion of grey literature; this is recommended for future investigations. Conclusions Retention of aged care workers is a growing challenge worldwide.This systematic review summarised and aggregated contemporary evidence regarding retention of aged care workers, with analysis of retention strategy effectiveness limited by a low yield of interventional studies.This review highlighted the need for better support of care workers to keep them in employment.As well as optimising pay, workloads, and conditions, there is a need for reform of education and training, better career pathways, and more optimal support of worker wellbeing. Table 1 . Inclusion and exclusion criteria. Table 2 . Overview of quantitative studies. 
Associations between high (vs low) retention rates of NAs and greater leadership/staff empowerment scores; low NH administrator turnover; high NH occupancy rates; presence of a union; greater hours per day allocated to residents. Musculoskeletal injuries found to be experienced at high rates by NAs working in NHs; in mainly older, female workers who perceive work to be more stressful; associated with an increased intention to leave and a perceived health status of "not good". Table 3 . Overview of mixed methods studies. Table 4 . Overview of qualitative studies.
Evaluation of Multi Layers Web-based GIS Approach in Retrieving Tourist Related Information Geo-based information is getting greater importance among tourists. However, retrieving this information on the web depends heavily on the methods of dissemination. Therefore, this study intends to evaluate methods used in disseminating tourist related geo-based information on the web using partial match query, firstly, in default system which is a single layer approach and secondly, using multi layer web-based Geographic Information System (GIS) approaches. Shah Alam tourist related data are used as a test collection and are stored in a map server. Query keyword is tested using both default and multi layer systems and results are evaluated using experiments on sample data. Precision and recall are the performance measurement technique used. Findings show that multi layer webbased GIS provide enhanced capability in retrieving tourist related information as compared to default system. Therefore, in the future, web-based GIS development should utilize multi layers approach instead of the single layer method in disseminating geo-based information to users. INTRODUCTION Geo-based information is gaining greater importance among tourists as it allows meaningful experience with places (Achatschitz, 2006;Tussyadiah and Zach, 2012). Furthermore, the use of geo-based information is not limited to tourists but it is also used in everyday life. Acquisition of this information through the use Information and Communication Technology (ICT) has effect on everyone especially tourists. Publishing web-based Geographic Information System (GIS) maps is one of the methods used to encourage the public to acquire geo-based information using ICT. Development of geoportals that is more receptive towards tourists' requests has become abundant today (Dickinger et al., 2008;Sigala, 2009). However, the current online maps which run on a webbased GIS do not reflect users' needs (Khan and Adnan, 2010;Kyem and Saku, 2009;Plosker, 2006;Richmond, 2002). These maps need to be improved not only in terms of usability as discussed in Khan and Adnan (2010) but also in terms of their efficiency and effectiveness (Dickinger et al., 2008;Kyem and Saku, 2009;Pan and Fesenmaier, 2006a;Plosker, 2006). Currently, researches are either focus on usability of web-based GIS applications (Khan and Adnan, 2010;Radwan, 2005;Voldán, 2010;You et al., 2007), visualization (Pontikakis and Twaroch, 2006), geoportal collaborations (Hao et al., 2010;Sigala, 2009), shortest path (Hochmair, 2009), map related information search (Chen et al., 1998) or Geographic Information Retrieval (GIR) technique and performance (Pu et al., 2009). Little concern is given in evaluating the performance of the current web-based GIS applications (Markowetz et al., 2004;Matic, 2006;Simão et al., 2009). In addition, the current off the shelf web-based GIS products have further encouraged users to use the applications without giving thoughts to the current system's performance (Plewe, 1997;Tsou, 2004;Tsou and Michael, 2003). The integration of GIR component in web-based GIS has given initial thoughts in evaluating the performance of the latter system (Larson, 1996;Martins et al., 2005;Purves and Jones, 2006;Voldán, 2010;Zhang et al., 2012). As a result, GIR evaluation technique can be used to evaluate the performance of web-based GIS (Clough et al., 2006;Martins et al., 2005). 
Therefore, this study intends to evaluate the performance of web-based GIS in disseminating tourist related geo-based information on the web. Two systems are evaluated. First is the single layer system which is provided by the current off the shelf web-based GIS developer. The other is a customized multi layers system. Both are evaluated and results are presented. GEO-BASED ONLINE INFORMATION SEARCH Maps are tourist's best friend (Richmond et al., 2003). They are an essential tool for tourist in any phase of vacation planning process: planning, during and post vacation. However, it is often overlooked by tourism providers. Tourist often finds that destination websites are equipped with textual and graphic composition but lack of spatial information such as location map. His geographic needs are often neglected. There are four concepts of query which are clearly dominant (Andreas and Volker, 2007). These concepts are habitation, accommodation, spare time and information. General object such as 'hotel' is more frequent in the query rather than specific brand of car dealers. Users who intent to buy or rent are the dominant users who make geographic related query and this leads to the concepts of goods and services. These geographic queries have a high value of commercial impact. Andreas and Volker (2007) further defined four features of geographic information needs. First, is the intention. It describes the relation between a user and a specific place. A user might want to know something about the place. Second, is coverage. It describes the specific coverage of an area. Third, is shape. It deals with looking for a specific document which can be in a form of point, polygon or polyline. Finally, fourth is distance. It describes the interpretations of nearness. Useful distance measures are determined by the means of transportation, existing traffic routes and user's intention. Therefore, tourist spatial needs involve two basic principles. First, is Tobler's first law of geography which says, "Everything is related to everything else, but near things are more related than distant things". (Tobler, 1970) Second, is Egenhofer and Mark (1995) principle that says "topology matters, metric refines". These two principles apply to the spatial needs of tourists and are discussed by a number of authors (Egenhofer and Mark, 1995;Fodness and Murray, 1999;Nicolau, 2008;Fesenmaier, 2002, 2006b;Richmond et al., 2003;Smith et al., 2007;Tobler, 1970). Map represents things in space (Brown, 2001;Richmond et al., 2003;Robinson and Petchenik, 1976). History has proven that travellers described places they have visited or new world in a form of map. This will help others imagine what can be found in the places they visited (Richmond et al., 2003). Maps are tourist's best friends. It performs valuable task in any of tourist's journey. It is needed in various stages of a journey regardless of whether it is planning, during or post journey stage (Richmond et al., 2003). It also provides sense of space which is related to the concept of naive geography. With the existence of current technology map making has become much easier and this concept should be used as the basis to design intelligent Geographic Information System (GIS). Naive geography is a study of formal models of commonsense geographic world (Egenhofer and Mark, 1995). Each stage in vacation planning requires the tourist to use map differently. During the planning stage, map is used to determine the location of accommodation available at destination. 
Tourist will then look for places of interest around the selected accommodation. They will also look for other services such as food and transport services (Pan and Fesenmaier, 2006a;Richmond et al., 2003). On the other hand during vacation, maps are used as reference. With a map in hand, tourist will feel confident and has some orientation of his location at every moment. Maps provide direction and validate tourist's expectation at actual location (Robinson and Petchenik, 1976). Finally, after a vacation, maps can provide memorable journey virtually in the mind of a tourist (Richmond et al., 2003). Thus, no doubt that maps and tourist are inseparable (Richmond et al., 2003). However, new invention and advances in communications have provided a new dimension in tourism sector. The Internet and the World Wide Web have given a new path of communication in tourism through online maps. The following sections describe the Internet and the World Wide Web in tourism context and the current online maps. Online information search by tourist is discussed thoroughly by Pan and Fesenmaier (2006b) and Luo et al. (2004). According to a study carried out in the United States by Pan and Fesenmaier (2002) on online vacation planning, findings have shown that tourists follow a hierarchical structure of events in searching for information. They often have different semantic mental models as compared to the content which are offered online. However, the items that they searched for are similar even though they may take different steps. These items include information on accommodation, attraction, food and transportation services. Studies by other scholars also confirm that these items are among the types of information required by tourist starting from planning until the end of a vacation (Ekmekcioglu et al., 1992;Pan and Fesenmaier, 2006a;Pan et al., 2006;Richmond et al., 2003). Internet is the cheapest and fastest method of getting information on any destinations. Tourists are adopting the internet as a major source of information for vacation planning (Lake, 2001;Pan and Fesenmaier, 2006a). In a qualitative research it is found that Malaysian tourists prefer to refer to the internet, friends and relatives since they consider these as reliable sources of information prior to a vacation overseas (Zaridah et al., 2005). Demographic characteristics are significantly related to choices of information source (Luo et al., 2004). Besides textual and graphic information, tourist also looks for spatial information such as where is the nearest shopping mall from the hotel that they want to stay, what type of eateries are located within walking distance from where they are going to stay, among others (Pan and Fesenmaier, 2006b). This is consistent with one of the elements of naive geography which states that "people use multiple conceptualizations of geographic space" (Egenhofer and Mark, 1995). Naive geography is concerned with the formal world of geographic common-sense. It is used as the basis in Geographic Information System (GIS) model design developed for people who are unfamiliar with the system. Furthermore the statement that says "maps are more real than experience" (Egenhofer and Mark, 1995) shows that map which has interactive and effective query capabilities can become resourceful. Evaluations of online maps have been carried out by scholars to assess their usability in tourism sector. Focus on the evaluations varies. While Dickinger et al. 
(2008) evaluate online maps based on taggers' performance, Zhang et al. (2012) focus on online map service websites, and Pontikakis and Twaroch (2006) propose schematic maps as an alternative to maps without topographic elements. In addition, the performance of Google Maps as a tourist map has also been evaluated. The results of these studies show that there is a need to offer an online GIS map that is able to produce accurate results for spatial information. With the increasing number of people travelling around the world and the increasing demand for online geo-based information, reliance on web-based information has also increased. Therefore, providing an excellent web-based Geographic Information System (GIS) on the Internet will be a challenging task for those who want to provide geo-based information to tourists.

WEB-BASED GEOGRAPHIC INFORMATION SYSTEM (GIS)

Web-based GIS, an extension of the Geographic Information System (GIS), consists of four major system components: the client, the web server with an application server, the map server, and the data server (Peng and Tsou, 2003). These components are integrated to process spatial data on the web. The client is where users interact with spatial features and carry out spatial analysis. Users use this platform to send requests to the web server; when a request is posted, the output is displayed on the client side. Client-side scripting is used to produce dynamic HTML and, as a result, interactivity between the client and the server is increased. Popular client-side scripting languages include JavaScript and VBScript. Browser plug-ins are also used together with the scripts to enable users to view spatial data; these are only required when certain map server software is used. A Java applet is another type of client software for displaying spatial data: it resides on the web server, is downloaded from the server, and executes on the client side (Peng and Tsou, 2003; Plewe, 1997; Tsou, 2004). The web server and application server, on the other hand, perform the vital function of responding to and processing client requests. Peng and Tsou (2003) identified several ways for a web server to respond to a client's request: first, by sending an existing HTML document; second, by sending Java applets; and finally, by passing the request to another program that invokes other sub-programs. The application server, in turn, acts as a medium between the web server and the map server. Its functions include establishing, maintaining, and terminating communication between these two servers. In addition, it interprets clients' requests and passes them to the map server, manages concurrent requests, and balances loads among map servers and data servers. Furthermore, it also manages the state, transactions, and security of spatial data. The map server fulfils spatial queries, conducts spatial analysis, and generates and delivers the requested maps. According to Peng and Tsou (2003), the output of a map server can take two forms: first, filtered feature data sent to the client for further analysis; second, a simple image file in JPEG or PNG format sent to the client. The data server is the component that serves spatial and aspatial data; one example is an SQL server. A data server acts as a medium between the map server and the databases, which consist of relational databases and geodatabases. It is normally located separately from the map and web servers to ensure the safety of the data (Peng and Tsou, 2003; Tsou, 2004).
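To make the division of labour among these four components more concrete, the following minimal sketch models the request flow of a thin client, thick server web-based GIS in plain Python. All class names, layer names, and feature data are hypothetical illustrations of the architecture described above and are not part of either system evaluated in this study.

# Minimal, hypothetical sketch of the web-based GIS request flow described above.
# Each class stands in for one architectural component; names and data are illustrative only.

class DataServer:
    """Serves spatial and aspatial data (stands in for a relational/geo database)."""
    def __init__(self):
        self.layers = {
            "accommodation": [{"name": "Hotel Seri", "geom": (3.07, 101.52)}],
            "eateries": [{"name": "Nasi Kandar Corner", "geom": (3.08, 101.51)}],
        }

    def query(self, layer, keyword):
        # Return features whose attribute text contains the keyword.
        return [f for f in self.layers.get(layer, [])
                if keyword.lower() in f["name"].lower()]

class MapServer:
    """Fulfils spatial queries and returns either feature data or a rendered image."""
    def __init__(self, data_server):
        self.data_server = data_server

    def handle(self, layer, keyword, as_image=False):
        features = self.data_server.query(layer, keyword)
        if as_image:
            return {"type": "image/png", "note": "rendered map placeholder"}
        return {"type": "features", "features": features}

class ApplicationServer:
    """Mediates between the web server and the map server."""
    def __init__(self, map_server):
        self.map_server = map_server

    def dispatch(self, request):
        return self.map_server.handle(request["layer"], request["keyword"])

class WebServer:
    """Receives client requests and returns responses for display on the client side."""
    def __init__(self, app_server):
        self.app_server = app_server

    def get(self, layer, keyword):
        return self.app_server.dispatch({"layer": layer, "keyword": keyword})

if __name__ == "__main__":
    web = WebServer(ApplicationServer(MapServer(DataServer())))
    # Thin client, thick server: all querying happens on the server side.
    print(web.get("eateries", "nasi"))

In a real deployment the WebServer role would be played by an HTTP server and the DataServer by an SQL or geodatabase, but the division of responsibilities follows the description above.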
Web-based GIS relies on a client-server architecture, which can be of two types (Peng and Tsou, 2003; Plewe, 1997; Takino, 2003; Tsou, 2004): thick client, thin server, and thin client, thick server. The former allows users to manipulate and process data easily and quickly: GIS data are sent to the client and stored on the client side using an applet, but once the network connection is disconnected, the data are lost. With thin client, thick server, on the other hand, data processing is carried out entirely by the server, and the effectiveness of the rendering process depends on the efficiency of the server. Multiple requests for the same location in a map, or simultaneous requests for different locations in a map, might hamper the performance of the server. Thus, the developer of a web-based GIS needs to put effort into designing the scripts that handle requests (Nurul Hawani et al., 2005; Takino, 2003) to ensure not only fast and efficient but also accurate responses (Mata, 2007; Tsou, 2004). Numerous studies have been conducted on enhancing server capabilities. Since its inception in the early 1990s with the introduction of the Xerox Map Viewer, web-based GIS has been gaining popularity, and scholars have seen the benefits of using web-based GIS in their disciplines (Dragicevic, 2004). In one study, a web-based GIS was implemented using a server/client system in which images are created one by one on GIS server machines when requested by a web client. In this system, data conversion and exchange occur frequently on the server, which makes it difficult to respond to web clients' requests within a short waiting time. A distributed data processing model for web-based GIS that utilizes the web client's hardware and software resources was then proposed by Takino (2003). This system transfers spatial index files, which contain the structure and actual address of the spatial data on the data site, to clients' PCs when they access a web server. This model speeds up the processing capabilities of the server when multiple clients request different sections of spatial data. Tsou (2004) integrated image processing tools into a web-based GIS model to assist in environmental monitoring and natural resource management. The model has enabled regional park rangers and local natural resource managers to utilize the capabilities of web-based GIS in monitoring and managing resources. GIS software and remote sensing data are quite expensive for a non-expert user to own, and installing the software poses another hurdle for this user; a web-based GIS application can overcome these barriers. Tsou (2004) managed to display satellite images in his web-based GIS model. To achieve this, three levels of GIS services, consisting of data archiving, information display, and spatial analysis, are combined in the system architecture. However, Tsou (2004) did not proceed further in evaluating the system model in terms of its effectiveness and efficiency in retrieving the required spatial data. Kyem and Saku (2009), on the other hand, look at the use of web-based GIS for public participation within local and indigenous communities. The potential benefits of a web-based GIS application in public participation are discussed in depth in this research. Even though this research only provides a theoretical background, it is an eye-opener for other scholars to develop models to suit these needs.
As a starting point, Kyem and Saku (2009) suggested the use of Google Earth to facilitate detailed observations and the creation of maps to address community concerns about developments in their areas. Web-based GIS promotes online communication and interaction between community members beyond boundaries. Forums and group discussions are conducted online, which cuts down the cost of space and travelling time; furthermore, communication and interaction can be carried out 24/7. However, Kyem and Saku (2009) did not discuss methods of evaluating the web-based GIS model they suggested in terms of its effectiveness and efficiency. Sidlar and Rinner (2007) analyze the use of an argumentation map as a decision support tool. Based on a quasi-naturalistic case study, their work looks into aspects of the general usability of an argumentation map. By focusing on learnability, memorability, and user satisfaction with the tool's functionality, it is found that users are generally satisfied. In this research, additional components consisting of map navigation, display of discussion contributions, and the online status of participants are also included. Even though this research contributes to the knowledge of participatory spatial decision support systems, it does not look into the efficiency and effectiveness of the system. The studies discussed above focus more on their models' usability than on the effectiveness and efficiency of the systems. Therefore, the main intention of this research is to integrate geographic information retrieval techniques and evaluation methods into the web-based GIS research area in order to facilitate the performance evaluation of web-based GIS systems.

TEST COLLECTIONS AND METHOD

The test collection in this study covers only tourist-related information features in Shah Alam. A query list of 92 keywords was obtained from 31 students of the Bachelor of Urban Studies and Planning Programme at the University of Malaya. Relevance judgments were obtained from five experts who have known Shah Alam for more than 10 years. Shah Alam is the capital city of Selangor, one of the states in Malaysia. Being one of the earliest planned cities in Malaysia, Shah Alam has its own way of being distinct from Kuala Lumpur, the capital of Malaysia. Besides housing the state's departmental offices and buildings, Shah Alam offers shopping for arts and crafts, traditional clothes, and materials. There are not many traffic jams in Shah Alam even though there are many cars on the road; this has encouraged people who are intolerant of traffic congestion to shop here. Spatial data in this collection consist of point and polyline features and are distributed across eight different layers. Point features represent accommodation, attractions, eateries, Automated Teller Machines (ATMs), facilities, and landmarks, while polyline features represent railroads and streets. Thus, the test collection of Shah Alam City Centre is divided into eight layers. Each layer has attributes related to it, and the attributes of each document constitute the aspatial data. These categories are in accordance with the studies conducted by Plosker (2006) and Pan and Fesenmaier (2006a). Details of the documents are described in Table 1. The two systems evaluated are the single-layer and multi-layer web-based GIS systems; both are of the thin client, thick server type. The single-layer system is a web-based GIS system that requires a user to choose a specific layer before a search can be carried out.
This system can only perform one request at a time. In addition, it is case sensitive and uses only exact matching for returned results. As mentioned earlier, this system reflects the current off-the-shelf web-based GIS products provided by several well-known GIS developers. The multi-layer system, on the other hand, is produced through customization of one of the current off-the-shelf web-based GIS products. Slightly modified in terms of case sensitivity and layer integration, this system can perform a search across multiple layers in a single step. In addition, it is also able to carry out partial match queries. Finally, system testing is used as the main evaluation method for the experiments. Precision and recall are the best-known formal performance measures for systems returning a set of results in information retrieval (Clough et al., 2006; Martins et al., 2005). Information needs may vary from user to user: some users may require high recall and low precision, and vice versa (Salton and McGill, 1983). Usually, at recall levels of 0, 10, 20, and 30%, the interpolated precision score is equal to 33%; as the recall level increases, the precision level decreases (Baeza-Yates and Ribeiro-Neto, 1999). However, a good system is one that exhibits both high recall and high precision (Salton and McGill, 1983). Precision refers to the ratio of the total number of relevant documents retrieved to the total number of documents retrieved (Bucher et al., 2005; Martins et al., 2005; van Rijsbergen, 1979). It is represented by:

Precision = rd / nr

where rd = total number of relevant documents retrieved and nr = total number of documents retrieved. Likewise, recall is defined as the ratio of the total number of relevant documents retrieved to the total number of relevant documents, both retrieved and not retrieved (Bucher et al., 2005; van Rijsbergen, 1979). It is represented by:

Recall = rd / R

where R = total number of relevant documents in the collection. In addition, the F measure, also known as the harmonic mean, combines recall with precision and is usually used in problems where the negative results outnumber the positive ones. The F measure weighs precision and recall equally and is given by:

F = 2 × Precision × Recall / (Precision + Recall)

In this study, the evaluation of both web-based GIS systems is measured by precision, recall, and F measure. Experiments are conducted using the Shah Alam City Centre test collection. The scores collected from each experiment are then calculated and compared between the two systems, the single-layer and the multi-layer system. The results are presented in the next section.

EXPERIMENTAL RESULTS

The results of the evaluations carried out on the Shah Alam test collection show that the multi-layer system (38%) returns more results than the single-layer system (25%). In addition, recall for the multi-layer system (0.9 < r < 1.00) has a higher and narrower range than that of the single-layer system (0.10 < r ≤ 1.00), which has a lower and wider range of recall scores. Furthermore, the former system produces a perfect recall score for 98% of queries, roughly double that of the latter system (48%). Nevertheless, both systems are able to produce a perfect precision score. This is important, since tourist information needs to be precise and accurate in terms of providing geographical locations, as discussed by Egenhofer and Mark (1995). The F measures of the two systems, however, show a big gap between them.
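Before examining this gap in detail, the following small Python sketch illustrates how the three measures are computed and why the matching behaviour of the two systems affects them. The keywords, documents, and relevance judgments below are hypothetical and are not taken from the Shah Alam test collection.

# Hypothetical illustration of precision, recall and F measure for two matching strategies.

def precision_recall_f(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    rd = len(retrieved & relevant)                      # relevant documents retrieved
    precision = rd / len(retrieved) if retrieved else 0.0
    recall = rd / len(relevant) if relevant else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# Toy collection spread over layers (invented names, loosely mimicking the layer structure).
documents = {
    "d1": ("attraction", "Lake Gardens Attraction"),
    "d2": ("eateries", "Laman Seni Food Court"),
    "d3": ("attraction", "Galeri attraction park"),
    "d4": ("facilities", "GoKarts arena"),
}
relevant_docs = ["d1", "d3"]                            # hypothetical expert judgments

def single_layer_search(keyword, layer):
    # Case-sensitive, exact-word match within one chosen layer only.
    return [d for d, (lyr, text) in documents.items()
            if lyr == layer and keyword in text.split()]

def multi_layer_search(keyword):
    # Case-insensitive partial (substring) match across all layers in one step.
    return [d for d, (_, text) in documents.items()
            if keyword.lower() in text.lower()]

print(precision_recall_f(single_layer_search("attractions", "attraction"), relevant_docs))
# -> (0.0, 0.0, 0.0): 'attractions' matches nothing because only 'attraction'/'Attraction' is stored
print(precision_recall_f(multi_layer_search("attraction"), relevant_docs))
# -> (1.0, 1.0, 1.0): both relevant documents are retrieved and nothing irrelevant
print(multi_layer_search("art"))
# -> ['d4']: 'art' matches 'GoKarts', illustrating the irrelevant-hit limitation discussed below

Note also that the single-layer search requires the user to have pre-selected the correct layer, which is the multi-step behaviour criticized later in this section.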
Similar to the recall scores, the F measure for the multi-layer system (0.9 < F < 1.00) has a higher and narrower range than that of the single-layer system (0.18 < F ≤ 1.00), which has a lower and wider range of F measure scores. In addition, the single-layer system produces a perfect F measure score for only 48% of queries, compared to 93% for the multi-layer system. Table 2 shows the evaluation results on the Shah Alam City Centre test collection for the single-layer system as compared to the multi-layer system. The results show that the multi-layer web-based GIS performs better than the single-layer system, with 100% precision and 98% recall. Thus, it is considered an efficient system in terms of producing the required results, as discussed by Salton and McGill (1983), Mata (2007), and Tsou (2004). However, the efficiency of this system is not based on time efficiency as discussed by Takino (2003). Three distinct limitations of the single-layer system can be drawn from this evaluation. The first is its case sensitivity: any query keyword must exactly match the one in its database. Second, it uses an exact match query function and is not able to handle partial match queries; a query for 'attractions' will not return any results if its database stores only the word 'attraction'. Finally, it is tedious to take many steps to perform a single query. Users are likely to be unhappy with this system because of these limitations (Joachims et al., 2007). Therefore, enhancements of the single-layer system using the multi-layer approach are required to enable the system to perform better in the future.

CONCLUSION

The results have shown that a multi-layer system, which offers case insensitivity, multi-layer integration, and a partial match query function, is required to yield better results. It shortens the steps taken by tourists to search for geographic-related information, as mentioned by Andreas and Volker (2007). These findings have answered the main objectives of this research. The above evaluations show that there is a significant difference in precision, recall, and F measure between the two systems. The results have proven that the multi-layer approach produces the highest scores in precision, recall, and F measure. These findings also contribute to the body of knowledge in the area of web-based GIS, online maps, and online information search by tourists, building on work previously conducted by scholars such as Pan and Fesenmaier (2006b), Dragicevic (2004), Dickinger et al. (2008), Zhang et al. (2012), Pontikakis and Twaroch (2006), Tussyadiah and Zach (2012), and Luo et al. (2004). Although it performs better than the single-layer system, the multi-layer web-based GIS approach has two limitations. The first is the inclusion of irrelevant documents in its returned results. Due to its ability to perform partial match queries, this system searches for and presents results in which the query term appears as part of a longer word; for example, the word 'art' is part of 'GoKarts' and 'department'. In such cases, the returned result is irrelevant even though it contains the query keyword. The second is that it is unable to capture words with similar meanings; for example, the word 'eating' cannot be matched to 'food' or 'eateries' in the system. Adding such a capability would enhance the system's query handling. Furthermore, the proportion of results returned according to the query keyword list is still below 40%.
Therefore, further enhancements need to be carried out by scholars in the web-based GIS community.

ACKNOWLEDGMENT

This study was supported in part by the University of Malaya Research Grant (UMRG) under Grant No. FR212-2007A.
v3-fos-license
2023-01-25T16:06:08.118Z
2023-01-22T00:00:00.000
256227513
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1422-0067/24/3/2191/pdf?version=1674378287", "pdf_hash": "fc6b0cdf20ec0b575e7b3820b121cd1c5cdfb48f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:171", "s2fieldsofstudy": [ "Biology", "Materials Science" ], "sha1": "442899e9e7d80b0d0f6c4bbc2d0a5177f8336d92", "year": 2023 }
pes2o/s2orc
Antimicrobial Natural Hydrogels in Biomedicine: Properties, Applications, and Challenges—A Concise Review Natural hydrogels are widely used as biomedical materials in many areas, including drug delivery, tissue scaffolds, and particularly wound dressings, where they can act as an antimicrobial factor lowering the risk of microbial infections, which are serious health problems, especially with respect to wound healing. In this review article, a number of promising strategies in the development of hydrogels with biocidal properties, particularly those originating from natural polymers, are briefly summarized and concisely discussed. Common strategies to design and fabricate hydrogels with intrinsic or stimuli-triggered antibacterial activity are exemplified, and the mechanisms lying behind these properties are also discussed. Finally, practical antibacterial applications are also considered while discussing the current challenges and perspectives.

Introduction

Hydrogels are three-dimensional, hydrophilic networks formed by flexible polymer chains swollen by water or biological fluids and able to store a large amount of it while maintaining a 3D structure that can be cast into practically any shape or form [1]. Hydrogels can be divided into two groups: natural hydrogels (e.g., alginate, collagen, fibrin, hyaluronic acid, chitosan, agarose, or starch) and synthetic hydrogels (e.g., poly(ethylene glycol), poly(vinyl alcohol), or polymethyl methacrylate). Although they have some drawbacks related to insufficient processing reproducibility and variable composition, natural hydrogels exhibit high bioaffinity and biocompatibility. Biomedical applications of hydrogels have been constantly evolving in recent years, driven by discoveries in the fields of biology and chemistry. Compared with other types of biomaterials, hydrogels offer increased biocompatibility, tunable biodegradability, and a tunable porous structure and thus permeability; their disadvantages are mainly associated with low mechanical strength and a fragile nature [2]. Novel strategies to construct these materials have led to increasingly complex and functional hydrogels, including composite hydrogels containing a plethora of various nano/microstructures. Therefore, these advantageous materials are finding increasing application in many biomedical fields, including drug delivery [3][4][5][6], tissue engineering [7][8][9], 3D printing [10][11][12][13][14], and, more recently, biosensing and actuating applications [15][16][17]. The most promising applications of hydrogels are summarized in Figure 1. Still, one of the most prominent applications of hydrogels is related to wound healing [18][19][20][21][22], where they serve as advanced "moisture donor" dressings, increasing collagenase production and facilitating autolytic debridement [23]. Moreover, the use of hydrogels facilitates oxygen transmission to the wounds and allows for the absorption and retention of exudate within the gel mass [24].
In addition to these extremely useful advantages in the treatment of wounds, hydrogels can offer one more property that is crucial from the point of view of this literature review: they can provide antimicrobial action, preventing or slowing down the development of microbial infections. Such infections are among the most prominent factors preventing wounds, especially chronic ones, from healing and require challenging treatment [25]. Natural hydrogels are considered extremely useful biomaterials due to their effective inhibition of bacterial infections. Bacterial wound infection develops through three consecutive phases [26]: (i) contamination: bacteria are transferred into the wound from the surrounding environment; (ii) colonization: bacteria replicate and start to adhere to the wound, forming a biofilm, however without interfering with the ongoing wound healing process; (iii) infection: the replication rate of the bacteria begins to overload the immune system, leading to various consequences. Some common Gram-negative as well as Gram-positive bacterial pathogens that have been isolated from infected wounds include Staphylococcus aureus, Pseudomonas aeruginosa, Proteus mirabilis, Escherichia coli, Corynebacterium spp., Acinetobacter baumannii, and anaerobic cocci [27,28]. It is worth noting that wound infections are often found to be polymicrobial. The porous structure of hydrogels, which can be controlled by tuning the crosslinking density, can provide a matrix for the loading of antimicrobial factors (antibiotics, extracts, nanoparticles, etc.) and, at the same time, allows for tuning the release rate of the antimicrobial agent (which depends on the diffusion coefficient of the factor through the gel network [2,29]). Concurrently, biocompatibility and biodegradability can also be designed, proving the high potential of natural hydrogels to be used as antimicrobial biomaterials, particularly wound dressings. This review is an up-to-date compilation of the latest information and reports that have appeared in the literature on the biocidal properties of various types of natural hydrogels.
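As an illustrative aside, the dependence of the release rate on diffusion through the gel network is often summarized with standard empirical drug-release relations; the two below are common textbook models and are not reproduced from the works cited in this review. For early-time Fickian release from a thin hydrogel film of thickness $\delta$, and in the more general semi-empirical Korsmeyer-Peppas form,

$$\frac{M_t}{M_\infty} \approx 4\sqrt{\frac{D t}{\pi \delta^2}}, \qquad \frac{M_t}{M_\infty} = k\,t^{n},$$

where $M_t/M_\infty$ is the fraction of the antimicrobial agent released at time $t$, $D$ is its diffusion coefficient in the swollen network (itself governed by the mesh size and hence the crosslinking density), $k$ is a constant incorporating the properties of the gel and of the loaded agent, and $n$ is an exponent indicating the release mechanism ($n \approx 0.5$ corresponds to purely Fickian diffusion from a thin film).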
What makes it handy and useful is a collection of over 250 carefully selected articles on the biocidal properties of hydrogels, so that the review can serve as a source of knowledge for both specialized scientists and students starting their exploration of these issues.

Hydrogels with Intrinsic Antimicrobial Activity

The role of each of the discussed hydrogels in the wound healing process is a complex function of many interactions, including the stimulation of cell proliferation and angiogenesis, the activation of macrophages and neutrophils to initiate the healing process, the inhibition of metalloproteinases, the regulation of the oxidation-reduction environment, and an increase in microbiological purity. Unfortunately, to this day, the exact mechanism behind the bactericidal properties of the hydrogels themselves has not been fully explained. Since hydrogels can either be intrinsically antibacterial or be doped with antibacterial components, it is worth taking a moment to look at the intrinsic antimicrobial properties of non-doped hydrogels.

Polysaccharide Hydrogels

Among the group of polysaccharide hydrogels, there is one prominent and most frequently studied example, chitosan, whose inherent bactericidal effect is widely exploited. Chitosan is a natural polymer obtained from chitin (the building block of crustaceans and insects) through enzymatic or chemical deacetylation. Both chitin and chitosan, owing to their excellent biochemical properties such as biocompatibility, biodegradability, non-toxicity, and processability, have found many promising biomedical applications [30]. Due to the presence of amine groups, chitosan polymers are positively charged at pH below 6 and thus can interact electrostatically with negatively charged regions on the microbial membrane [31]. In this way, chitosan can bind to and disrupt the normal functions of the bacterial membrane by provoking the leakage of intracellular components as well as inhibiting the transport of nutrients into the cells [32][33][34][35]. Some studies have reported the dependence of selected antibacterial and antifungal activities on both the intrinsic physicochemical properties of chitosan (e.g., molecular weight, degree of deacetylation, hydrophobicity) and external factors (pH of the medium, concentration, type of bacteria) [36][37][38]. These factors are shown in Figure 2. However, it should be noted that apart from the above-mentioned mode of interaction (involving the electrostatic attraction of cationic groups of chitosan with negatively charged zones on the bacterial surface), other possible mechanisms of antibacterial action have also been proposed [31].
In the case of Gram-negative bacteria, it was suggested that chitosan could affect the permeability of the outer envelope by forming ionic bonds, which prevent the transport of nutrients into the cell and increase the internal osmotic pressure [39]. Another proposed mechanism assumes that chitosan is able to penetrate the microbial cell and interact with DNA intracellularly, halting its transcription and RNA synthesis. In this scenario, chitosan can penetrate various types of membranes, including murein cross-linked walls as well as cytoplasmic membranes, and thus its cell-destroying efficiency would depend on the molecular weight [40]. This mechanism, although interesting, should for now be considered speculative, as it has not been unequivocally confirmed. Another mechanism is based on the chelating capabilities of chitosan, which explains its activity at pH above 6, where free (non-protonated) amino groups can form bonds with metals such as Ca2+ or Mg2+ present in the cell walls. The bonds formed block the production of toxins and thus inhibit the overall growth of microorganisms. In conclusion, it should be clearly noted that among the above-mentioned mechanisms of chitosan action, the most frequently cited and suggested one is the interaction of chitosan with the outer surface of bacteria or fungi as a result of electrostatic attraction, followed by cracking of the surface and the release of intracellular components [41,42]. An injectable hydrogel based on two natural polymers, chitosan and konjac glucomannan, linked together via Schiff base linkages showed good antibacterial activity against Gram-positive Staphylococcus aureus and Gram-negative Escherichia coli bacteria, with 96% and 98% killing efficiency, respectively [43]. Chitosan/bacterial cellulose semi-interpenetrating hydrogels were successfully prepared by mixing both components and subsequently cross-linking them with glutaraldehyde. The resulting hydrogels showed antibacterial properties against the same bacteria as above, and these properties were dependent on the chitosan-to-cellulose ratio [44]. Due to its bactericidal properties, low toxicity, and high biocompatibility, chitosan was approved as a food additive many decades ago; its application in this area has been thoroughly reviewed by M. Kong et al. [41]. Biomedical uses of chitosan (and particularly chitosan-based hydrogels) as antibacterial materials have also emerged over time and are mostly related to wound dressing, tissue engineering, and drug delivery. However, in most currently tested applications, chitosan hydrogels are always doped with various types of agents, which will be discussed in the next parts of the article. The main disadvantages of chitosan are its insolubility in a neutral (and thus often close to physiological) aqueous medium and its insufficient antibacterial activity, which hinder its use as an effective antimicrobial agent [45]. Nevertheless, the intention of the authors is to note the fact that chitosan itself is also a bactericidal material, regardless of whether its effect is enhanced (sometimes tremendously) by the addition of various types of active substances. One strategy is to introduce quaternary ammonium groups into various natural hydrogels, as these groups have intrinsic bacterial-membrane-disrupting activity and can therefore be used to enhance antibacterial performance. An interesting review describing this approach in a comprehensive manner was published recently [45].
In addition to chitosan, there are other natural polymer-based hydrogels with antibacterial properties, such as gelatin and hyaluronic acid. Gelatin, a product derived from the hydrolysis of collagen, is widely used in biomedical applications (e.g., drug release, wound dressings, cell culture, and scaffolds for tissue engineering) because of its good biocompatibility, biodegradability, and non-immunogenicity [46]. In addition, gelatin contains arginine-glycine-asparagine (RGD) units, which are known to promote cell adhesion, migration, and proliferation [47]. Hyaluronic acid is a natural linear polysaccharide found in numerous tissues (including the extracellular matrix), playing an important role in cellular signaling and wound healing [48], as it prevents the proliferation of bacteria and acts as an anti-inflammatory agent.

Antimicrobial Peptide Hydrogels

Antimicrobial peptides are a group of small natural (produced by multicellular organisms) or synthetic polypeptide molecules. They form part of the innate immune system of various organisms [49][50][51] and are therefore an excellent platform for creating natural hydrogels with antibacterial properties. Currently, at least several hundred peptides of this type have been discovered [52]; they are excellent drug candidates for clinical exploitation due to their numerous advantages, including biocompatibility, biodegradability, and ease of synthesis and modification. The antibacterial mechanism of antimicrobial peptides is different from that of antibiotics; therefore, it is much more difficult for bacteria to develop drug resistance [53]. Some antimicrobial peptides can self-assemble into supramolecular hydrogels, which usually enhances their antimicrobial ability [50]. For example, the high inherent antibacterial activity of a β-hairpin-based peptide hydrogel has been demonstrated by Sallick et al. [51]. In these experiments, the gels proved to be effective against a wide range of pathogens, including Gram-positive (Staphylococcus epidermidis, Staphylococcus aureus, and Streptococcus pyogenes) and Gram-negative (Klebsiella pneumoniae and Escherichia coli) bacteria. A mechanism of antibacterial action based on membrane disruption, leading to cell death upon cellular contact with the gel surface, was proposed [51]. Usually, antimicrobial peptides are covalently connected to various hydrogelator fragments/molecules and, after gelation, form an integral part of the resulting hydrogel. The fluorenylmethyloxycarbonyl (Fmoc) group is widely used as a capping group for the generation of short peptide-based hydrogelators, some of which are already commercially available. Long-range aromatic stabilization via π-π stacking of the Fmoc groups is a major driving force promoting the gelation of these peptides [54]. For example, Schnaider et al. obtained a diphenylalanine peptide that, upon self-assembly, showed high antibacterial activity against Escherichia coli. It was shown that the nano-assemblies completely inhibited bacterial growth, triggered the upregulation of stress-response regulons, induced substantial disruption of bacterial morphology, and caused membrane permeation and depolarization [55]. Later, McCloskey et al. designed various ultrashort Fmoc-peptide hydrogelators able to form soft hydrogels.
The majority of the fabricated supramolecular hydrogels demonstrated selective action against biofilms of Gram-positive (Staphylococcus aureus, Staphylococcus epidermidis) and Gram-negative (Escherichia coli, Pseudomonas aeruginosa) pathogens, proving their high potential for use in biomaterial applications, including as antibacterial agents [56]. Porter et al. showed that modification of the terminal functional groups to an amino or carboxylic acid group, or both, could affect the antibacterial selectivity against Staphylococcus aureus [57]. Interestingly, functionalization of Fmoc-protected dipeptides with a pyridinium nitrogen at the C-terminus may provide a class of positively charged hydrogelators forming supramolecular hydrogels at low concentrations (∼0.6%) and showing remarkable bactericidal efficacy against both Gram-positive (Bacillus subtilis and Staphylococcus aureus) and Gram-negative bacteria (Pseudomonas aeruginosa and Escherichia coli) [58]. Finally, it should also be taken into account that in some cases antimicrobial resistance has already been observed, including intrinsic resistance to antimicrobial peptides through the exposure of positively charged lipids on the bacterial membrane [59].

Hydrogels Loaded with Antimicrobial Agents

When loaded with antimicrobial agents, hydrogels can tremendously increase their antimicrobial efficacy. Among the antimicrobial agents/factors that can be loaded into a hydrogel matrix, four main groups can be considered: antibiotics, biological extracts, nanoparticles, and antimicrobial peptides, which are discussed separately in this chapter.

Hydrogels Loaded with Antibiotics

The biocompatible nature of most natural hydrogels makes them a convenient starting point for engineering a wide range of antimicrobial systems based on various active molecules, among which the most commonly used group are, of course, antibiotics [60]. Antibiotics can fight bacterial infections by killing bacteria (bactericidal effect) or slowing or stopping their growth (bacteriostatic effect) [61,62], acting selectively in various concentrations and affecting the cellular structures and the metabolic processes of various microorganisms [61,63]. The controlled hydrophilic/hydrophobic nature of many natural hydrogels provides a good environment for the incorporation of various small molecules, including antibiotics. In particular, local antibiotic therapy is increasingly being recognized for its role in preventing and treating bacterial infections. Medical dressings for topical use (mainly wound treatment) based on antibiotic-loaded hydrogels have become more and more popular and relevant, as they can provide multiple properties simultaneously [64,65].
Bacterial infections impede the wound-healing process, leading to complications such as chronic wounds and ulcerations [66]. The incorporated antibiotics can act via four main mechanisms, either affecting the structure of the bacteria or their metabolic pathways (for example, sulphonamides such as sulfadiazine interfere with the synthesis of key metabolites); these mechanisms are presented in Figure 3 [68]. Regardless of the type of antibiotic, the healing efficiency of an antibacterial wound dressing that delivers antibiotics depends on the drug release profile, the physicochemical properties of the drug, and the type and properties of the hydrogel matrix. However, for an antibiotic to be effective, it must reach the concentration necessary to inhibit the growth of, or kill, the pathogen at the site of infection while remaining safe for human cells [68]. The hydrophilic nature of the hydrogel provides a suitable substrate for the storage and controlled release of small-molecule antibiotic formulations, such as gentamicin [58], ciprofloxacin [59], vancomycin [60], and amoxicillin [61], which are the most popular representatives of antibiotics loaded into various hydrogel matrixes and are discussed separately below.

Gentamicin

Gentamicin is one of many aminoglycoside antibiotics commonly recommended by doctors and medical specialists worldwide [69,70]. It is a traditional broad-spectrum aminoglycoside antibiotic used to treat skin, soft tissue, and wound infections [71][72][73].
It was concluded that the obtained hydrogel-based dressings exhibited promising results and did not require frequent bandage changes [78]. Hydrogel wound dressing consisting of hyaluronic acid cross-linked with gentamicin via EDC/NHS bioconjugation strategy. This hydrogel was able to treat bacterial infection locally. Tuning crosslinking density was important to control a sustained release of gentamicin for up to nine days with a range of adhesive and cohesive properties [48]. The antibacterial efficiency of hydrogels based on pullulan (extracellular microbial polysaccharide produced by different strains of Aureobasidium) and cysteamine with incorporated gentamycin have been evaluated by Li et al. [79]. Based on the release of gentamycin and the inhibition of bacterial proliferation against Staphylococcus aureus and Escherichia coli, it was hypothesized that antibiotics containing hydrogels could protect the wound surface from bacterial invasion. Bakhsheshi-Rad et al. prepared electrospinning-derived chitosan-alginate nanofibers as a platform for releasing gentamicin. The nanofibers loaded with gentamicin displayed superior antibacterial activity compared with the nanofibers having a lower amount of gentamicin. The results proved that gentamicin-loaded chitosan-alginate nanofibers have the potential to be used in the future as antibacterial wound dressing, including their ability of gentamicin delivery through the wound area [80]. Ciprofloxacin Ciprofloxacin is a chemotherapeutic agent from the fluoroquinolone group [81,82]. It works against Gram-positive and Gram-negative bacteria [83][84][85]. Its mechanism of action is based on the inhibition of two bacterial enzymes which are involved in the replication, transcription, and recombination of bacterial DNA: topoisomerase type II (DNA gyrase) and topoisomerase IV [86,87]. Since disruption of these processes leads to bacterial death, ciprofloxacin is a bactericidal drug [88]. Hydrogels fabricated by cross-linking of chitosan with bifunctional PEG glyoxylic aldehyde were loaded with ciprofloxacin and provided its sustained release for up to 24 h (≥80%) and displayed efficient activities (≥80%) against Escherichia coli for up to 12 h. The hydrogels were demonstrated to be nontoxic and cytocompatible toward mammalian cells (NIH-3T3) [66]. Biocompatible and degradable dual drug-delivery systems based on hyperbranched copolymers obtained via thiol-ene click chemistry were proposed. It was demonstrated that the simultaneous delivery of two antibiotics might be achieved, despite the fact that hydrophilic novobiocin is entrapped within the hydrophilic hydrogel, while the hydrophobic ciprofloxacin is encapsulated within the dendritic nanogels. Such hybrid hydrogels enable the quick release of novobiocin and prolonged release of ciprofloxacin. In vitro cell infection assays demonstrated that the antibiotic-loaded hybrid hydrogels can be used to treat bacterial infections, exhibiting better antibacterial activity against Staphylococcus aureus when compared with commercial antimicrobial band aids [89]. Cacicedo et al. prepared bacterial cellulose-chitosan films for ciprofloxacin delivery. The incorporation of ciprofloxacin into hybrid cellulose-chitosan films enabled its sustained release for more than 6 h. The presence of chitosan allowed for a slower controlled release of the antibiotic when compared with plain bacterial cellulose. 
The ciprofloxacin-loaded films demonstrated good antibacterial performance against Pseudomonas aeruginosa and Staphylococcus aureus. In this case, a synergistic effect of the antimicrobial activities of chitosan and ciprofloxacin was observed; in addition, in vitro studies revealed the lack of cytotoxicity of the obtained hydrogel films toward human fibroblasts [90]. The development of an antibacterial bioadhesive hydrogel containing ciprofloxacin-loaded micelles has also been proposed. Analysis of the release profile provided information on the effectiveness of the action during the first 24 h of use, and hydrogels doped with ciprofloxacin showed excellent antibacterial properties against Pseudomonas aeruginosa and Staphylococcus aureus [64]. The release of ciprofloxacin from a trimethyl chitosan (TMC)/sodium carboxymethyl xanthan gum (CMXG) hydrogel slowed the multiplication of Gram-positive and Gram-negative bacterial strains, characterized by a larger inhibition zone compared to ampicillin or gentamicin [91]. An antibacterial bioadhesive hydrogel loaded with ciprofloxacin-containing micelles for the management of corneal injuries was reported by Khalil et al. [92]. The corneal cells treated with the hydrogel were characterized by a decrease in colony-forming units (CFU) and a higher corneal epithelial viability after 24 h as compared to non-treated corneas and corneas treated with the hydrogel without ciprofloxacin. It was concluded that the fabricated hydrogel presents a promising suture-free solution to seal corneal wounds while preventing infection. It was also demonstrated that ciprofloxacin can be self-assembled with a hydrophobic tripeptide (Leu-Phe-Phe) into antibacterial nanostructured hydrogels with high drug loading efficiency (2 mg mL−1 and 30% w/w) and prolonged release, while maintaining antimicrobial activity against Staphylococcus aureus, Escherichia coli, and Klebsiella pneumoniae [93].

Vancomycin

Vancomycin is a glycopeptide antibiotic with bactericidal action [94], mainly used in cases of serious staphylococcal and streptococcal infections in patients who are resistant or allergic to penicillins and cephalosporins [95]. Its mechanism of action is based on blocking cell wall biosynthesis: it inhibits peptidoglycan polymerization by binding directly to D-alanyl-D-alanine terminal peptides and inhibits cross-linking by transpeptidase [96]. Additionally, it influences the permeability of cell membranes and RNA synthesis. This antibiotic is mainly aimed at inhibiting the multiplication of Gram-positive bacteria, while strains of Gram-negative bacteria are resistant to this drug, as it is not able to cross their barrier, the cell wall [96]. A hyaluronic acid hydrogel loaded with vancomycin and gentamicin was synthesized and used for the eradication of chronic methicillin-resistant Staphylococcus aureus orthopedic infections in a mammalian model (sheep). The increased efficacy was attributed to two important features: the drug release pharmacokinetics and the hydrogel's degradability [97]. Similarly, hyaluronic acid hydrogels loaded with various antibacterial agents (cefuroxime, tetracycline, amoxicillin, and acetylsalicylic acid) were tested as antibacterial and anti-inflammatory agents against Staphylococcus aureus.
Drug-loaded hydrogels were demonstrated to have remarkable antibacterial and anti-inflammatory activities through their significant (p < 0.05) reduction of both Staphylococcus aureus bacterial infection and the levels of the pain-related pro-inflammatory cytokines TNF-α, IL-6, and IL-8 [98]. Posadowska et al. fabricated an injectable, nanoparticle-loaded gellan gum-based system for the local delivery of vancomycin in osteomyelitis treatment [99]. The obtained material was shown to be a biocompatible system displaying ease of injection at low extrusion force, self-healing ability after disruption, adjustable vancomycin release, and antimicrobial properties against the Gram-positive bacteria Staphylococcus aureus and Staphylococcus epidermidis.

Amoxicillin

Amoxicillin is a beta-lactam antibiotic derived from penicillin. It is active against Gram-positive bacteria, including non-penicillin-resistant streptococcal, staphylococcal, and enterococcal species. It also has activity against some Gram-negative bacteria, including Helicobacter pylori, as well as anaerobic organisms [100]. Chang et al. reported chitosan/poly-γ-glutamic acid nanoparticles incorporated into pH-sensitive hydrogels as an efficient carrier for amoxicillin delivery. It was shown that the nanoparticles could penetrate cell-cell junctions and interact with Helicobacter pylori-infected sites in the intercellular spaces. Additionally, the composite hydrogel protected amoxicillin from the action of gastric juice [101]. In a more recent study, a starch-based composite hydrogel loaded with amoxicillin was tested as a potential platform for the controlled release of amoxicillin and the inhibition of bacterial growth. In vitro bacterial growth inhibition was verified via a disc diffusion assay against bacteria of clinical interest, such as Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. Significant bacterial growth inhibition by the amoxicillin-loaded hydrogels was evidenced, revealing future potential for oral administration and the local treatment of bacterial infections [102]. Chitosan-alginate hydrogels prepared by a simple, solvent-free approach for the simultaneous and sustained release of three antibiotics (ciprofloxacin, amoxicillin, and vancomycin) were most recently reported by Khan et al. The antibiotics were loaded during the gelation process to maximize loading efficiency. The results revealed that the physicochemical properties of the loaded drug combination play an essential role not only in the morphology, structural changes, and release profiles but also in the antibacterial activity [103]. Chitosan-PEG hydrogels exhibiting biocompatible, antibacterial, anti-inflammatory, and self-healing properties were obtained by del Olmo et al. Antibacterial activity was evaluated against Staphylococcus aureus and Escherichia coli, which are common microorganisms identified in infected wounds. The hydrogels were loaded with various antibiotics, including cefuroxime, tetracycline, and amoxicillin, as well as acetylsalicylic acid, for subsequent sustained release. All the drugs were confirmed to enhance the antibacterial and anti-inflammatory activity [104]. The same research group demonstrated similar outcomes using an even simpler hydrogel formulation, i.e., one based on chitosan crosslinked with genipin [105].
Other Antibiotics

Various cefotaxime sodium-loaded hydrogel formulations (including pectin, xanthan gum, and guar gum) were prepared, characterized, and tested against wound pathogens such as Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa, using either the pure drug or Fucidin® cream as a control. Further in vivo studies were performed to demonstrate the efficacy of eradicating Gram-negative (Pseudomonas aeruginosa) or Gram-positive (methicillin-resistant Staphylococcus aureus) bacteria from an infected rat wound model. It was shown that some cefotaxime gel formulations could be a very promising and innovative topical alternative for the treatment of skin infections caused by cefotaxime-susceptible bacteria [106]. Gelatin methacryloyl hydrogels have been fabricated and tested for the localized delivery of cefazolin. It was demonstrated that cefazolin delivered from the hydrogel induced a dose-dependent antibacterial efficacy against Staphylococcus aureus, chosen as a model bacterium [107]. A photo-cross-linked hydrogel based on gelatin served as a platform designed for ceftriaxone release. Antibacterial testing carried out on the hydrogel system by the disc diffusion method resulted in the inhibition of Staphylococcus aureus strains [108]. Alginate hydrogels (in the form of loaded alginate microparticles or films) incorporating neomycin or propolis were tested as potential dressings for diabetic ulcers. Microbial penetration tests revealed that all the hydrogels were able to act as barriers to the penetration of microorganisms, which is an important characteristic of wound dressings [109]. In another study, neomycin was incorporated into composite hydrogels composed of poly(vinyl alcohol) (PVA), carboxymethyl cellulose, and cellulose nanofibers. The resulting hydrogel exhibited improved biodegradability, biocompatibility, pH-responsiveness, and effectiveness against Escherichia coli and Staphylococcus aureus, which was attributed to the controllable release of neomycin [110]. The sustained release of levofloxacin from a thermosensitive chitosan-based hydrogel was reported by Cheng et al. [111]. The antibacterial activity of the levofloxacin-containing hydrogel was evaluated against Staphylococcus aureus and Staphylococcus epidermidis, chosen as two common aerobic bacteria found in wound infections of postoperative endophthalmitis. The results of the antibacterial studies demonstrated the long-term antibacterial properties of the developed levofloxacin-doped hydrogel. The same group developed another chitosan-based hydrogel loaded with levofloxacin for the topical treatment of keratitis (i.e., inflammation of the eye's cornea). This time, Staphylococcus aureus was selected as a model bacterial threat to check whether the obtained hydrogels could fight it effectively. It was demonstrated that the obtained hydrogel did indeed significantly inhibit bacterial growth, with no histological evidence of the bacteria in the studied ex vivo rabbit model of keratitis. Moreover, the hydrogel exhibited remarkable anti-inflammatory effects and sustained release of levofloxacin [112]. In another study, the delivery of levofloxacin from hyaluronic acid nanohydrogels for the treatment of intracellular bacterial infections was studied by Montanari et al. The minimal inhibitory concentration values for Staphylococcus aureus and Pseudomonas aeruginosa were compared for pure levofloxacin and the levofloxacin-loaded nanohydrogels, indicating the increased antibacterial efficacy of the latter.
An additional advantage from the point of view of in vivo applications is the possibility of easily sterilizing the obtained materials without losing the desired properties [113]. Multicomponent chitosan-based hydrogels were fabricated by free radical graft copolymerization and loaded with clarithromycin. In vitro drug release assessments demonstrated that clarithromycin can be sustainably released in a simulated gastric medium (pH = 1.2) for a prolonged period of time. It was concluded that the obtained multi-composite could be used as an effective drug delivery system to cure Helicobacter pylori-related infections; however, no antibacterial tests were performed [114]. A doxycycline drug delivery platform based on carboxymethyl chitosan conjugated with caffeic acid, and its composite with polyacrylamide, was reported by Moghaddam et al. The cytotoxicity results indicated that the synthesized hydrogels are relatively nontoxic, and the bacterial inhibition zones around the doxycycline-loaded hydrogels against Staphylococcus aureus and Escherichia coli demonstrated their antibacterial action [115]. Composite gelatin-based injectable antimicrobial conductive hydrogels for wound disinfection and infectious wound healing were thoroughly studied by Liang et al. against methicillin-resistant Staphylococcus aureus. A very detailed biomedical evaluation was run using a plethora of approaches in an infected mouse model: the wound closure rate, the length of the dermal tissue gap, the number of blood vessels and hair follicles in hematoxylin-eosin staining, and the related cytokines and regeneration of blood vessels in immunofluorescence were all studied. All the results demonstrated the superior wound healing effect of the synthesized doxycycline-loaded hydrogel (but also of its drug-free analog) in infectious skin tissue defect repair, indicating their great potential for infected wound healing [116]. The wound dressing application of chitosan-gelatin hydrogels embedded with ampicillin-loaded hyaluronic acid nanoparticles was investigated by Özkahraman et al. In vitro ampicillin release studies showed that more than half of the antibiotic load was released from the hydrogels after five days, with most of that amount released within the first five hours. The antibacterial performance of the hydrogels against Staphylococcus aureus and Escherichia coli was demonstrated using an agar disc diffusion test [117]. A composite hydrogel composed of poly(vinyl alcohol), egg white (matrix), and montmorillonite nanoclay (reinforcement) was fabricated by Delir et al. using the freeze-thaw technique. In vitro clindamycin release studies showed that the cumulative fractional release of clindamycin decreased either with an increasing weight percentage of the incorporated montmorillonite in the wound dressings or with a decreasing pH of the release medium. It was confirmed that more than 90% of Staphylococcus aureus bacteria could be eliminated using the clindamycin-loaded hydrogel [118]. It is worth mentioning here that freezing-thawing is one of the frequently used approaches (also commercially) to fabricate PVA-based hydrogels in particular, as PVA gelation through repeated freezing-thawing cycles occurs without an externally added crosslinking agent, resulting in ultrapure hydrogels [119]. The schematic idea of this method is shown in Figure 4.
Another PVA-based hydrogel prepared by the freeze-thaw technique was obtained by Qing et al. [120]. Apart from PVA, it was composed of N-succinyl chitosan and loaded with lincomycin (which, similarly to clindamycin, belongs to the lincosamides [121]). The incorporation of this antibiotic resulted in remarkable antibacterial activity against Escherichia coli and Staphylococcus aureus (78% of the latter was inhibited with 75 µg/mL lincomycin), with more than 74% of the drug released within 24 h. The nontoxic nature of the composite hydrogels was also shown using an MTT assay, suggesting the promising potential of the obtained hydrogels for wound dressing [120].
A gelatin-polyacrylamide hydrogel loaded with tetracycline hydrochloride was demonstrated to have high antibacterial activity against Staphylococcus aureus and Escherichia coli, reducing the wound area (96%) better than the antibiotic-free hydrogel component (86%) after 14 days [122]. Silk fibroin/sodium alginate hydrogel scaffolds loaded with teicoplanin (a semisynthetic glycopeptide antibiotic with similar activity to vancomycin) and, additionally, phenamil (a blocker of Na+ translocation) were evaluated in a rat model of chronic bone infection. The released antibiotic maintained its high antibacterial activity (>75%) against methicillin-resistant Staphylococcus aureus for 35 days and in various physiologically relevant pH environments (5.5, 7.4, and 8.5). The obtained results indicated that the fabricated scaffolds could eradicate the infection and, at the same time, improve bone regeneration [123]. A chitosan-based hydrogel loaded with whey protein-based microsized particles containing chloramphenicol has been demonstrated to have remarkable ex vivo antibacterial activity against Staphylococcus aureus, with the chloramphenicol-loaded samples showing double the antibacterial activity of the chloramphenicol-free samples after 24 h [124]. Alginate/hyaluronic acid-based hydrogels were modified by grafting phenylboronic acid onto the alginate side chain and showed antibacterial and anti-inflammatory properties as a result of the effective loading of amikacin (an aminoglycoside antibiotic often used for treating severe infections caused by various Gram-negative multidrug-resistant bacteria) and naproxen. Both drugs were pre-loaded into micelles, preserving the structural integrity and good rheology of the final hydrogels and controlling the rate of drug release at the site of inflammation; the in vitro antibacterial activity caused by amikacin release reached 90% for Staphylococcus aureus and 98% for Pseudomonas aeruginosa [125]. An innovative antibiotic-laden hydrogel (DAC®, "Defensive Antibacterial Coating") containing, among others, ceftazidime was tested in patients with periprosthetic infection of the hip joint. It was shown that the use of antibacterial hydrogel coatings loaded with the antibiotic might shorten the hospitalization time of patients and bring new solutions in the treatment of infections associated with implants in the field of orthopedics [126,127]. The Defensive Antibacterial Coating (DAC) is also a promising way of fighting post-operative site infection after internal osteosynthesis for closed fractures [128]. A hydrogel with high antibacterial activity against four various bacteria was obtained by Hu et al. by crosslinking oxidized dextran with simultaneous loading with two antibiotics: tobramycin and ornidazole. The injectable hydrogel exhibited thixotropic and self-healing properties and was acid-responsive due to the contribution of an acid-labile Schiff base linkage during gel formation. The tobramycin-loaded hydrogel showed high antibacterial activity against aerobic pathogens (Staphylococcus aureus and Pseudomonas aeruginosa). However, it was less effective against anaerobic ones (Clostridium sporogenes and Bacteroides fragilis). Interestingly, the ornidazole-loaded hydrogel showed the opposite outcome [129]. As can be seen from the concisely discussed bibliography presented above (including especially the recent literature reports), natural hydrogels are commonly tested as carriers of a wide range of antibiotics.
Strategies for introducing antibiotics into hydrogel formulations include, among others, their physical loading, pre-adsorption on carriers (e.g., nanoparticles), or encapsulation in micelles, which are later dispersed in the hydrogel. What can be noticed is that the majority of research focuses only on in vitro demonstration of antibacterial activity; in vivo studies (including, for example, animal models) are rare.
Hydrogels Loaded with Biological Extracts or Natural Compounds
Hydrogels, as soft and biocompatible carriers, can also be used to deliver a whole range of plant-derived ingredients and their mixtures. These biological mixtures/extracts are characterized, on the one hand, by usually low general toxicity, and on the other, by unusual and often unexplored biological activity [130]. Biological extracts often have excellent biocidal properties; therefore, their incorporation in the hydrogel matrix is a very good opportunity to control and tune these properties. However, their therapeutic efficiency is often limited by various factors, including the lack of targeting capacity and poor bioavailability [131]. Biological extracts may come from plants and animals; some of these extracts have a long history of well-documented applications, while others were discovered in recent years [130,132,133]. Herbal extracts loaded within a hydrogel matrix may affect its structural, biological, and functional properties [133]. A whole range of compounds of natural origin are biologically attractive and have antibacterial properties, making them potentially useful in various pharmaceutical and biomedical preparations [134,135]. Moreover, they are considered safe and thus can compete with antibiotic preparations [136]. The most studied families of chemical compounds derived from biological extracts include, inter alia, alkaloids, flavonoids, terpenoids, tannins, and polyphenols, and their most frequently chosen sources are shown in Figure 5.
Due to its antimicrobial activity, topical nutrition, debriding action, minimalization of inflammation, stimulation of angiogenesis, granulation, wound contraction, and epithelialization, honey has been incorporated into wound dressings, many of them available commercially. The antimicrobial activity of honey is related to its acidity, low water content, and presence of a wide range of antimicrobial components, including hydrogen peroxide, the antibacterial peptide defensin-1, flavonoids, and phenolic acids [137]. Gelam honey was incorporated into an agar-based hydrogel formulation, which was then cross-linked and sterilized using electron radiation at a dose of 25 kGy. Its antibacterial activity against Staphylococcus aureus was studied, with the conclusion that the topical application of the obtained formulations might have a favorable influence on the various phases of burn wound healing [138]. Injectable self-healing chitosan-based hydrogels with inherent antibacterial activity were fabricated by Zhong et al. via dynamic boronate ester cross-linkages. At the same time, in-situ encapsulation of epigallocatechin-3-gallate, a green tea derivative, was successfully achieved. The resulting hydrogels showed dual bioactivity, i.e., antibacterial (against Escherichia coli and Staphylococcus aureus) and antioxidant, as well as good biocompatibility. Additionally, the in vivo wound healing studies using a skin model revealed the good performance of the hydrogels, showing their potential as wound dressings [139]. Feng et al. obtained a composite hydrogel based on PVA, chitosan, and tannic acid by a cryogenic treatment and freeze-drying method (cf. Figure 4), which yielded a highly porous hydrogel with favorable swelling due to its higher absorption capacity. It was shown that the addition of tannic acid improved the antibacterial activity of the hydrogel, which was tested against Escherichia coli and Staphylococcus aureus [140]. In another study, the pH-controlled release of tannic acid and its antibacterial activity against Escherichia coli were investigated by Ninan et al. by its incorporation into agarose scaffolds cross-linked with zinc ions [141]. Interestingly, the obtained biomaterials displayed antibacterial activity comparable to one of the classic antibiotics, gentamicin. Essential oils are distilled or extracted from various plants and are used in the food, medicine, and cosmetics industries due to their well-known antibacterial properties. The most abundant and, at the same time, the key components of essential oils are terpenes, which provide antibacterial properties as a result of well-known bacteria-killing mechanisms [67]. Immobilization of essential oils increases their stability, bioactivity, and antibacterial potential and also reduces their volatility. Hydrogels based on carboxymethyl chitosan were synthesized and loaded with selected essential oils (eucalyptus essential oil, ginger essential oil, and cumin essential oil) to prepare antibacterial materials. Among the developed hydrogels, the eucalyptus-loaded hydrogel exhibited optimal antibacterial activities of 46% against Staphylococcus aureus and 63% against Escherichia coli, along with high cell viability (>92%) and accelerated wound healing in mouse burn models by promoting the recovery of the dermis and epidermis [142,143]. Mahmood et al. synthesized gellan gum hydrogels loaded with lavender oil and ofloxacin for wound healing applications [144].
In vitro drug release studies showed sustained drug release of lavender oil (71%) and ofloxacin (85%) over a period of 48 h, while the in vitro antimicrobial analysis confirmed good antibacterial activity against Gram-negative (Escherichia coli) and Gram-positive (Staphylococcus aureus) bacteria, suggesting that these hydrogels can potentially be used as effective scaffolds for the treatment of infected wounds. Moreover, in vivo wound healing experiments using a full-thickness wound model on rats showed almost complete (98%) wound closure with the loaded hydrogels when compared to the blank hydrogels [144]. Ionically crosslinked hydrogels based on chitosan and poly(vinyl alcohol) loaded with tea tree oil and silver ions were synthesized by Low et al. to demonstrate their feasibility for delivering the above-mentioned antimicrobial agents to treat common wound-infecting pathogens (Candida albicans, Staphylococcus aureus, Pseudomonas aeruginosa) [145]. Tea tree oil contains over a hundred different components, including terpenes and associated alcohols, which heavily contribute to its broad-spectrum antibacterial, antifungal, antiviral, and anti-inflammatory activities [146]. Therefore, combining tea tree oil and silver ions in the hydrogel matrix improved antimicrobial activity by lowering the required effective concentrations [145]. Altaf et al. tested poly(vinyl alcohol)/starch hydrogels loaded with tea tree oil, clove oil, and oregano oil for wound dressing applications. In vitro antibacterial analysis of the loaded hydrogels showed good antibacterial activity against Escherichia coli and multiresistant Staphylococcus aureus. The antibacterial effect was highest for the hydrogel loaded with clove oil, but unfortunately, the origin of this superior antibacterial activity was not elucidated [147]. Recently, thyme oil-enriched cellulose-based hydrogels investigated by Lu et al. exhibited remarkable antibacterial activity against Escherichia coli and Staphylococcus aureus, suggesting their effectiveness as wound dressing materials for the treatment of bacteria-infected wounds. An in vitro release test showed a burst release of thyme oil for the first 24 h, followed by slow and sustained release [148]. A thymol-enriched bacterial cellulose hydrogel was tested as an antibacterial dressing material, showing its effectiveness against a range of bacteria, including Pseudomonas aeruginosa, Escherichia coli, Klebsiella pneumoniae and Staphylococcus aureus. The incorporation of thymol into the cellulose matrix showed remarkable in vivo wound healing efficiency in rats with third-degree burn wounds [149]. Research on various materials with incorporated thymol for applications as antibacterial wound dressings has been reviewed recently [150]. Curcumin is a biologically active substance extracted from turmeric, with high antioxidant activity as well as analgesic, anti-inflammatory, and anti-cancer properties. Curcumin-loaded wound dressings have been tested in vitro and in vivo, and the outcomes demonstrated therapeutic effects [151][152][153]. Curcumin, commonly known as an anti-inflammatory and antioxidant agent, also improves the wound healing process in diabetics when added to hydrogel materials [154,155]. Fathollahipour et al. prepared thermally crosslinked PVA-based hydrogels containing honey or sucrose for the purpose of erythromycin delivery. It was observed that the addition of honey to the hydrogel significantly slows down the release of the drug.
Antibacterial tests showed the inhibitory action of erythromycin-loaded PVA hydrogels against Pseudomonas aeruginosa and Staphylococcus aureus [156]. The possible use of hydrogels loaded with herbal medicines has already been exploited in diverse areas, from product development to disease treatment, including confirmation of their translative potential in clinical trials [133]. For example, a randomized, double-blind, placebo-controlled clinical trial was conducted in Mexico with ambulatory patients [157]. The extract of Mimosa tenuiflora (a popular remedy used in Mexico for the treatment of skin lesions) was incorporated into a composite hydrogel for the treatment of venous leg ulceration disease. It was shown that the size of the ulcer in patients treated with the extract-loaded hydrogel had been reduced, while those treated with the non-loaded hydrogel exhibited no significant improvement. Hydrogels based on carboxymethylcellulose loaded with grapefruit seed extract have shown remarkable antimicrobial activity; the results of this research show that such nanocomposite films increase the biocidal activity against Escherichia coli and Staphylococcus aureus [158]. The addition of carrageenan (natural linear sulfated polysaccharides usually extracted from red algae) to various hydrogel matrixes, including agar, alginate, chitosan, and gelatin, has proven to have a positive outcome in the treatment of difficult-to-heal wounds and non-healing diabetic skin ulcers [159][160][161]. Apart from the pharmaceutical and industrial applications of carrageenan (e.g., as an emulsifier, stabilizer, or thickening agent), its inflammatory and immunomodulatory properties make carrageenan a promising antibacterial agent. Chrysin is a bioflavonoid naturally found in martyr flowers, passionflowers, some kinds of geranium, silver linden, bee propolis, or honey [162]. Nanofibers of chrysin included in hydrogels show anti-inflammatory, anticancer, and antioxidant properties [163,164]. Another flavonoid, hesperidin (present in vegetables and fruits), was successfully loaded into alginate/chitosan- or dendritic-based carriers; the resulting biomaterials showed angiogenic, anti-inflammatory, and antibacterial properties [165,166]. As seen from the presented survey on natural extract/compound-loaded hydrogel formulations, integrating herbal medicines with hydrogel scaffolds may become a promising and nature-based approach in antimicrobial treatment. Moreover, many herbal medicines and synthetic drugs (but not all) may be loaded together to provide a synergistic boost. Nevertheless, delivering herbal medicines using hydrogel vehicles may be more complicated than expected because, due to the complex matrix, more optimization is required before success can be achieved [133]. Histologic evaluation, however, has shown that the therapeutic effect of the extract-loaded hydrogel is not significantly different from that of the blank hydrogel. This reveals that the successful incorporation of herbal medicines into hydrogels for clinical use may necessitate careful optimization.
Hydrogels Loaded with Inorganic Particles
Inorganic antibacterial materials such as silver (Ag), gold (Au), and copper (Cu) have been used for ages. All these elements have been added to enhance the antibacterial activity of particular formulations. Zinc oxide (ZnO), titanium dioxide (TiO2), or nickel oxide (NiO) has also been used for this purpose. The incorporation of such nanoparticles showed a low degree of toxicity.
In nature, silver occurs in the free state and in minerals, but most of the silver mined is mixed with copper, gold, zinc, and lead ores. In the early 18th century, silver nitrate was used for the treatment of infected ulcers or burn wounds [167]. Interestingly, the silver concentrations required to interact with skin cells and alter cellular respiration are 25 times greater than those needed to inhibit the growth of Pseudomonas aeruginosa [168]. There are many medications available on the pharmaceutical market for treating wounds, especially silver-coated dressings or disinfectants [169]. Silver nanoparticles (Ag NPs) are a substitute active ingredient for traditional methods of antibiotic therapy. They have a large surface-area-to-volume ratio and thus are active at negligible concentrations, showing a high antibacterial potential [170]. With the rise of resistance to commonly used antibiotics and disinfectants, scientists are responding to the threat by developing new antimicrobial materials to prevent or control infections caused by these pathogens. Inorganic antimicrobial agents are based on the antimicrobial nature of metals or metal oxides such as silver, gold, zinc, and copper, which are incorporated into hydrogels by physical adsorption and mixing. It is assumed that the antibacterial properties may result from the formation of highly reactive oxygen species (ROS) on the surface of the metals or their oxides. When ROS are formed, bacteria are eliminated. This is related to the formation of oxidative stress, oxidative lesions, and membrane lipid peroxidation [171][172][173]. Another hypothesis says that the irregular edges of the nanoparticles damage the structure of the bacterial cell, as a result of which it degrades [174]. Nanoparticles can also inhibit ATPase activity (reducing the amount of ATP), which results in damage to bacteria as a result of the body's reactions. Their occurrence is dictated by the combination of ionized forms of metal elements with proteins present in the body. The main problem turns out to be the low efficiency of using antibacterial metals and metal oxides in aqueous solutions. An attempt has been made to combine antimicrobial agents with natural hydrogel materials (collagen, gelatin, fibroin, and keratin) that are compact and stay in place at the wound site [175,176]. Thanks to their porous structure, they are highly permeable and can store and release medicinal substances at the desired rate. In addition, hydrogels are highly biocompatible and can be applied directly to the wound [4]. The main metal ions that have antibacterial properties are silver, gold, zinc, copper, mercury, and cadmium ions. Recently, metal oxides in the form of nanoparticles, such as titanium dioxide (TiO2), zinc oxide (ZnO), nickel oxide (NiO), and iron oxide (Fe3O4), have also gained a lot of interest because metal oxides in inorganic nanoparticles can interact with the surface of the material. At the forefront of antibacterial agents are silver ions, which are 1000 times stronger than zinc and copper ions [177,178]. The main bacterial targets of silver nanoparticles (along with gold and magnetite ones) that have been used in the treatment of skin infections are schematically shown in Figure 6 [68].
Silver in this field is used in the form of nanoparticles, silver salts, and metallic silver. The antibacterial effect of AgNPs has been tested and confirmed by many scientists [179][180][181][182][183][184][185][186][187][188][189][190][191][192]. As a result of these studies, it turned out that silver nanoparticles effectively destroyed bacteria such as Staphylococcus aureus, Pseudomonas aeruginosa, Escherichia coli, Bacillus subtilis, Vibrio cholerae, Salmonella typhi, Enterococcus faecalis and others [193][194][195][196][197]. Shivastava et al. tested multi-drug-resistant bacteria of the S. typhi group by growing them on LB agar plates that had been enriched with silver nanoparticles [198]. The presence of AgNPs resulted in the inhibition of bacterial growth by 60% at a concentration of 5 µg/mL and by 90% at a concentration of 10 µg/mL. Complete inhibition of bacterial growth was observed at a concentration of 25 µg/mL. Anisha et al. developed an antimicrobial sponge composed of chitosan, hyaluronic acid, and AgNPs to treat diabetic foot disease [199]. The obtained results indicate that such a hydrogel dressing can be used in the treatment of diabetic foot disease infected with antibiotic-resistant bacteria. Similar studies were conducted by Ruffo et al., who synthesized a biocompatible hydrogel called HyDrO-DiAb for the treatment of diabetic foot ulcers [200].
Other studies have shown that silver nanoparticles are a good alternative to silver sulfadiazine in the treatment of burns and do not cause toxic effects [201][202][203]. The effect of hydrogel dressings containing silver nanoparticles in their structure on the speed of healing of burn wounds was studied by Boonkaew et al. [204], who checked the effectiveness of two dressings (Acticoat and PolyMemSilverW) in comparison with self-made hydrogel dressings containing AgNPs. Their results showed high effectiveness of the hydrogel dressing (comparable to Acticoat) against most pathogens studied (Pseudomonas aeruginosa, Acinetobacter baumannii, Candida albicans, Staphylococcus aureus, and E. faecalis), in contrast to the tested PolyMemSilverW, which demonstrated lower antibacterial activity. In the research work of Mi et al., the effect of a two-layer dressing made of chitosan and enriched with AgNPs was investigated [205]. In vitro and in vivo tests confirmed the antibacterial effect of the dressing. Moreover, the hydrogel dressing had a high air permeability coefficient, which additionally accelerated the wound healing process. Ag NPs in combination with guar gum-based hydrogels exhibited strong antimicrobial activity and cytocompatibility [206,207]. Unfortunately, a frequent problem with AgNPs is that large amounts are released too quickly, which makes silver, despite its antibacterial properties, dangerous to health. To avoid side effects, Ag NPs were only used topically, so the antibacterial effect was negligible [208]. One study developed a hydrogel made of very short peptides that have the ability to combine into larger structures and form hydrogels [197]. Silver nanoparticles were synthesized in situ using UV radiation. Such a process made it possible to control the amount of AgNPs released: prolonged release over 14 days and effective inhibition of bacterial growth were observed. The problem of too-high toxicity can be solved by the use of modified silver nanoparticles and by the synthesis of silver nanoparticles as a result of in-situ reduction or supramolecular complexation. Rui Liu et al. designed a complex hydrogel consisting of gelatin and cellulose modified with -COOH groups; after the synthesis, silver nanoparticles were incorporated within the hydrogel [209]. Higher effectiveness of the antibacterial ions was observed as a result of prolonged, controlled silver release [210]. The goal of Haidari et al. was to introduce small silver nanoparticles into the interior of a temperature-sensitive hydrogel based on the block copolymer F127. A high level of dispersion of ultra-small silver particles in the cross-linked structure of the hydrogel was obtained by depositing Ag NPs on two-dimensional structures (e.g., on graphene). The high dispersion and lower reactivity of the nanoparticles resulted in a much higher level of interaction with bacterial membranes, leading to damage to the biofilm produced by the bacteria and their death [211]. Jiang et al. tested the highly compatible polysaccharide konjac glucomannan in combination with chitosan and silver nanoparticles for use as a wound dressing [212]. Studies have proven the ability of this hydrogel structure to drain the fluid secreted by the wound, which visibly reduces inflammation. In addition, the hydrogel material itself reduced the toxicity of the silver nanoparticles through their controlled release. The smart hydrogel proposed by Haidari et al.
also had the ability to control the release of an antibacterial substance [213]. The AgNPs present in this structure were able to respond to pH changes: when the pH changed from acidic to basic, the system released the antibacterial substance (AgNPs) contained in the hydrogel. Studies have shown the effective elimination of Gram-positive and Gram-negative bacteria. Gold nanoparticles, although they are less frequently used as antibacterial agents, still have a number of antimicrobial properties. Au NPs have the ability to connect to the bacterial cell membrane and get inside the bacteria, leading to their death. Brown et al. conducted a study that showed that gold nanoparticles alone do not have antibacterial properties, but in combination with ampicillin, they have the ability to eliminate bacteria that are resistant to drugs [214], for example, methicillin-resistant bacteria of the group S. aureus, P. aeruginosa, Enterobacter aerogenes, and E. coli. Daniel-da-Silva et al. [215] developed a hydrogel made of gelatin charged with gold particles and then cross-linked with genipin. The formed structure, under the influence of external stimuli (in this case, temperature), encapsulated gold nanoparticles. In contrast to Ag NPs, gold nanoparticles turn out to be more beneficial for bone regeneration because they are not toxic to osteoblastic cells. This was proved in Ribeiro's study by comparing the effect of the silk fibroin/nanohydroxyapatite hydrogel modified with Au NPs and Ag NPs [216]. It turns out that in the treatment of bones, antibacterial hydrogels can be used for all concentrations of Au NPs, while the safe concentration of Ag NPs is only 0.5 wt.%. Reddy et al. came up with the idea of combining Au and Ag nanoparticles in bimetallic (Ag, Au) hydrogel nanocomposites [217]. Their goal was to increase the antimicrobial activity of silver hydrogel nanocomposites. Varaprasad et al. even managed to create dual-metallic (Ag0-Au0) nanoparticles with mint leaf extract to obtain an antibacterial substance that was active against Bacillus and E. coli bacteria [218]. In another study, a dressing based on silver nanoparticles (Ag NPs) was used to culture 3D fibroblast cells in vitro, where the silver nanoparticles are released as aggregates and localized in the cytoplasm of fibroblasts [219]. Ag NPs have been proven to significantly reduce mitochondrial activity without damaging the cell. Zinc and zinc oxide nanoparticles, similar to silver nanoparticles, are well known for their antibacterial properties and are widely used in cosmetics for problematic skin care [220]. One of the mechanisms of action of zinc oxide is the formation of free radicals on the surface of ZnO NPs [221]. As a result of the reaction of free radicals with microorganisms, organic matter is oxidized to carbon dioxide, and bacteria are eliminated. A sprayable thermosensitive hydrogel containing a complex of zinc and metformin, which inhibits ROS production by activating autophagy, was used by Zhengwei et al. as a drug delivery system for the treatment of skin injuries [222]. On this basis, it can be concluded that combining zinc oxide nanoparticles, with their beneficial antibacterial effect, with biocompatible hydrogels will facilitate direct application to the skin [223][224][225]. Majumder dealt with the preparation of a biomimetic hydrogel dressing on silk fabric in his research [226]. To give it biocidal properties, he decided to sonochemically cover it with ZnO nanoparticles.
Studies suggest that the dressing has adequate mechanical properties and significant antibacterial properties. Moreover, phase-contrast microscopic studies showed that the adherence, growth, and proliferation of L929 fibroblast cells seeded on the oxide nanoparticle-functionalized, hydrogel-grafted silk fibroin fabric dressing were much higher than on pure silk fibroin [226]. In addition, tests were carried out on dual ionically cross-linked hydrogels [227,228]. On their basis, it was found that the complex hydrogel containing zinc and calcium ions facilitated the proliferation and migration of human fibroblasts and umbilical vein endothelial cells and thus supported the process of renewal of damaged epithelial cells and blood vessels. The antibacterial properties of inorganic nanoparticles are of increasing interest. Other nanoparticles that find wide antibacterial use in combination with hydrogels include TiO2, CeO2, CdSe, FeO, ZnS, and Cu [229][230][231][232][233][234][235][236][237][238][239][240]. In addition to silver, gold, and zinc nanoparticles, copper, for example, also has biocidal properties. Giavaresi et al. conducted research on a soft hydrogel based on hyaluronic acid enriched with copper ions in the process of creating blood vessels [241]. Studies have shown an improvement in the healing of bones implanted during bone grafting as a result of the use of a new technique consisting of stimulating tissue vascularity with the use of the biocompatible Hyal-50% material and copper ions. In a study by Villanueva et al., a starch hydrogel enriched with various concentrations of CuNPs and coated with a silica layer was shown to have antimicrobial activity against Gram-negative and Gram-positive bacterial species [242]. The antibacterial properties of said hydrogel have been proven for at least four cycles of use. Other studies have also confirmed the antibacterial effectiveness of hydrogels enriched with copper nanoparticles [242][243][244][245][246][247]. Iron oxides also have antibacterial properties. Sathiyaseelan et al. investigated the antimicrobial properties and wound healing rate using a chitosan/PVA nanocomposite sponge loaded with FeO nanoparticles. A significant increase in cell proliferation was demonstrated, and the tested CS/PVA-PD-FeO NPs sponge was found to be effective in the treatment of diabetic foot infections [248]. Moreover, many other scientists are studying the effect of iron oxide nanoparticles in hydrogel dressings with a controlled drug delivery system [249,250] and their effect on the stabilization of these hydrogels [251].
Conclusions and Future Perspectives
This review paper summarized the current state of the art of various types of natural hydrogels used as antimicrobial materials. The growing number of publications on hydrogels with biocidal properties clearly shows that the advantages associated with their use in biomedicine are so important that they arouse the interest of many research groups around the world. Some of the scientific achievements have already been patented or successfully commercialized. Currently, commercial products are obtained mainly on the basis of hydrogels without antibacterial loadings; if there are any, they are usually silver nanoparticles. The latest innovations related to the fabrication of antimicrobial hydrogels address many urgent biomedical challenges, showing their remarkable potential to prevent or even combat various microbial infections.
Various strategies, comprising functionalization with different groups or the incorporation of antimicrobial agents (including antibiotics, natural products, and nanoparticles, all concisely discussed in this review article), have been used to impart bactericidal properties to a plethora of hydrogel-based formulations. The vast majority of scientific work focuses on antibacterial hydrogels, while applications related to antiviral activity are very rare. This is probably due to the fact that research on viruses requires a stricter security regime, including access to higher biosafety level laboratories (BSL-2 and BSL-3), while in the case of bacteria, lower biosafety levels are acceptable (BSL-1 and BSL-2). However, the ongoing COVID-19 pandemic will undoubtedly lead to more research into the antiviral properties of hydrogels. From a financial perspective, the biomaterial market is estimated to grow from 109 billion USD in 2020 to 216 billion USD by 2025 [252], which will definitely be a great incentive to turn basic research into ready-made solutions. One of the reported limitations hindering wider applications is the lack of sufficient strength to withstand the repetitive motion of the skin [253]; therefore, efforts in designing new hydrogels should also be focused on simultaneously obtaining durable and self-healing hydrogels. Other important problems include [51]: (i) the lack of clinical animal models, as most studied experimental animal models are healthy young animals, with little discussion of old animals or animals with diseases; (ii) high susceptibility to damage during transport and storage, with possible drug leakage as well as deterioration of structure and function; and (iii) a frequent lack of matching between the hydrogel degradation rate, the active ingredient release rate, and the wound regeneration rate. Despite these currently unresolved challenges, further development of simple, low-cost hydrogel-based antimicrobial biomaterials will undoubtedly be one of the key future directions of pharmaceutics and medicine.
Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2019-04-15T13:11:40.590Z
2017-04-18T00:00:00.000
39745467
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.4172/2471-8726.1000135", "pdf_hash": "29fa9575ef9f45f5c75aa1df4c2f2ba17999349d", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:175", "s2fieldsofstudy": [ "Medicine" ], "sha1": "ee1620db9ded09c3037868747f32715b73d7c57c", "year": 2017 }
pes2o/s2orc
No Shaping Endodontic Treatment of a Mandibular Canine Utilizing the Gentle Wave Procedure: A Case Report Due to limitations in standard endodontic therapy there have been great endeavors to effectively clean, shape and disinfect the root canal system while preserving the natural anatomy of the tooth [1]. Maintaining the natural canal anatomy and preservation of tooth structure has been correlated to higher clinical success rates [2-5]. A major issue in tooth preservation is the need to maximize cleaning and disinfection. One of the main goals of root canal therapy is to remove pulp tissue, layers of infected dentin and biofilms attached to the root canal surface. Yet this requires enlargement of root canals for mechanical instrumentation and access of irrigants [6,7]. Instrumentation to a minimum of size #35 with larger apical preparations and tapers has been reported as having greater irrigation effectiveness, cleaning and disinfection [3-5,8]. While this level of instrumentation may improve cleaning, it is reported to increase the chance of complications including apical transportation, ledge formation, instrument separation and root fractures [9]. Even still, after mechanical instrumentation, 50% to 65% of the root canal system may remain untouched, regardless of the system used for cleaning and shaping [5,7,10]. Introduction Due to limitations in standard endodontic therapy there have been great endeavors to effectively clean, shape and disinfect the root canal system while preserving the natural anatomy of the tooth [1]. Maintaining the natural canal anatomy and preservation of tooth structure has been correlated to higher clinical success rates [2][3][4][5]. A major issue in tooth preservation is the need to maximize cleaning and disinfection. One of the main goals of root canal therapy is to remove pulp tissue, layers of infected dentin and biofilms attached to the root canal surface. Yet this requires enlargement of root canals for mechanical instrumentation and access of irrigants [6,7]. Instrumentation to a minimum of size #35 with larger apical preparations and tapers has been reported as having greater irrigation effectiveness, cleaning and disinfection [3][4][5]8]. While this level of instrumentation may improve cleaning, it is reported to increase the chance of complications including apical transportation, ledge formation, instrument separation and root fractures [9]. Even still, after mechanical instrumentation, 50% to 65% of the root canal system may remain untouched, regardless of the system used for cleaning and shaping [5,7,10]. In a review of current research, it was shown that utilizing a novel endodontic system, the GentleWave® Procedure (Sonendo®, Laguna Hills, CA), may provide endodontics with a means for maximizing cleaning and disinfecting while preserving the natural anatomy of the tooth. The GentleWave Procedure utilizes Multisonic Ultracleaning™ technology, in which advanced fluid dynamics, acoustics, and tissue dissolution chemistry are applied to clean and disinfect the entire root canal system, even in areas that conventional endodontic treatment cannot reach [11][12][13][14]. Haapasalo et al. reported seven times faster tissue dissolution with the GentleWave System than standard root canal techniques, including the use of sonic and ultrasonic devices [14]. 
Sodium hypochlorite penetration in the apical third was four times more effective with the GentleWave® System than with active ultrasonic activation, yet the system has been shown to cause minimal dentin erosion [12,15]. In addition, the GentleWave System has been reported to be effective in removing separated hand files from the apical and middle thirds of molar root canal systems without the need for increased dentin removal [16]. Clinical studies evaluating the GentleWave Procedure have demonstrated high success rates of 97% at 6 and 12 months post GentleWave Procedure [17,18]. In this case report, maximum conservation of tooth structure was employed on a mandibular canine utilizing a no shaping endodontic treatment with the GentleWave Procedure.
History
A 57-year-old male presented for root canal therapy evaluation. A previous azithromycin prescription was taken one month prior. The patient's medical history was non-contributory. Clinical examination revealed anterior crowding. Assessments showed sensitivity to percussion, no painful response to palpation, and no mobility or soft tissue lesions. Radiographic analysis revealed a small periapical lesion on the single root (Figure 1a). Based on the clinical and radiographic findings, a diagnosis of pulpal necrosis and symptomatic apical periodontitis was made.
Procedure
A standard anesthesia protocol was administered via nerve block (1 carpule of carbocaine 3% without epinephrine and 1 carpule of articaine 4% with 1:100 epinephrine). The tooth was isolated with a rubber dam with clamping on the right first bicuspid due to crowding. A dental operating microscope was utilized throughout the procedure. A minimal, straight-line, conservative endodontic access was performed with removal of just enough dentin to remove any pulp horns. Once the pulp chamber was exposed, a single orifice was visualized. Patency was obtained and the working length was determined utilizing a size 10/.02 hand file. After working length measurements were taken, it was determined that the canal size was adequate for obturation and no further instrumentation was required prior to the GentleWave® Procedure (Sonendo®, Laguna Hills, CA). In an attempt to preserve tooth structure, maintain as much of the original canal anatomy as possible, and reduce the possibility of complications associated with standard endodontics, no shaping was utilized for this case. This could be accomplished due to the ability of the GentleWave Procedure to provide complete cleaning and disinfection without entering the root canal. A temporary build-up was placed utilizing a gingival protectant (Kool-Dam™, PulpDent®, Watertown, USA) to maintain a sealed environment for optimum MultiSonic™ Ultra Cleaning during the GentleWave Procedure. During the procedure, the GentleWave Procedure Instrument merely rests on top of the temporary build-up while pulp tissue remnants, debris, smear layer, and bacteria are removed from the entire root canal system (Figure 2). After the GentleWave Procedure, visualization through the microscope revealed a clean pulp chamber and canal free of debris and tissue, which were subsequently dried with paper points. Obturation was completed utilizing a single cone technique with gutta-percha and BC Sealer (Brasseler USA®, Savannah, GA). A coronal seal was placed and the cavity was filled with composite. The patient was advised to return to their general dentist for comprehensive dental care. There was no reported discomfort during or after the procedure.
After two days, the patient was contacted for a follow-up inquiry and indicated that they experienced no post-procedure discomfort.
Results
A pre-procedure radiograph of tooth #27 exhibits a deep composite filling along the distal portion of the crown (Figure 1a). A small radiolucent area surrounding the periapical region is visible radiographically and on cone-beam computed tomography (Figures 1a and 3). Endodontic therapy was completed without the use of files or instrumentation for shaping, as the GentleWave Procedure was applied for cleaning and disinfection of the entire root canal system. After implementation of the Multisonic Ultracleaning technology through the system, a patent canal was observed and subsequently obturated. A post-procedure radiograph of the non-instrumented canal is shown in Figure 1b with no voids and a flush fill, depicting a clinically significant obturation [19,20]. No discomfort was reported by the patient during the procedure or during the following two days post-procedure.
Discussion
Standard endodontic treatment methods may not be adequate for debridement and disinfection of the root canal system [3,21,22]. After instrumentation, the root canal system typically contains tissue remnants, bacteria, and dentin shavings that inhibit the ability of irrigants to reach all areas of the root canal system, especially in the apical 3 mm [23]. Although these irrigants have limited access to the apical third with standard root canal treatment, reports indicate that larger size files and tapers allow for increased penetration of irrigants, albeit at a greater risk of complications [3,4]. Ledge formation is one of the most commonly observed procedural errors during root canal instrumentation, with incidences of up to 52% [24,25]. The formation of a ledge may exclude the possibility of achieving an adequately shaped canal that reaches the ideal working length, causing insufficient shaping and disinfection of the root canal system, as well as incomplete obturation of the canal. Consequently, there is a reported causal relationship between ledge formation and unfavorable endodontic outcomes [25][26][27][28][29][30]. Even after decades of innovation, instrument separation still continues to challenge endodontics, with reported worldwide rates of 5% [31]. As separated instruments typically block the apical third of the canal, they impede the cleaning and disinfection of the root canal system, thereby limiting the success of endodontic treatment [16]. Weakening of the root is also common in endodontics, especially when the root canal is oval-shaped, making it difficult to instrument [32,33]. In analyses of endodontically treated teeth, vertical root fractures were the reason for extraction in up to 13.4% of teeth [34,35]. Root perforations, which occur in 2% to 12% of endodontically treated teeth, are shown to be one of the prominent causes of endodontic failures [36][37][38][39]. Bacterial infection from the root canal system or the periodontal tissues prevents healing and promotes inflammatory sequelae where the supporting tissues have been exposed through the perforation and may cause eventual loss of the tooth [37,38]. As the level of complications has been shown to increase when the size and taper utilized for shaping of the root canal system increase, alternative methods, such as the GentleWave Procedure, should be employed for cleaning and disinfection while preserving the natural anatomy of the tooth.
In this case report, a mandibular canine that presented with tenderness to percussion and an extensive coronal restoration was cleaned during a no shaping endodontic treatment using the GentleWave Procedure. This case report demonstrates the ability of the GentleWave Procedure to clean anterior teeth without the use of instrumentation for shaping, thereby conserving the natural tooth structure and decreasing the chance of complications seen in standard endodontic treatment. Additional recall with the patient may be warranted to assess long-term healing and clinical outcomes. Further research is needed to provide evidence in a larger population for no shaping endodontic treatment with the GentleWave Procedure.
v3-fos-license
2021-10-19T13:42:48.022Z
2021-10-18T00:00:00.000
239022452
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://jeccr.biomedcentral.com/track/pdf/10.1186/s13046-021-02130-2", "pdf_hash": "fb4d780be67763f7134e5c9141c2a3fc4d3e3956", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:176", "s2fieldsofstudy": [ "Medicine" ], "sha1": "fa41a2d3c9ab2aa16f38448ce8a8cbd7fa7d80c0", "year": 2021 }
pes2o/s2orc
Resistance to anti-EGFR therapies in metastatic colorectal cancer: underlying mechanisms and reversal strategies
Cetuximab and panitumumab are monoclonal antibodies (mAbs) against epidermal growth factor receptor (EGFR) that are effective agents for metastatic colorectal cancer (mCRC). Cetuximab can prolong survival by 8.2 months in RAS wild-type (WT) mCRC patients. Unfortunately, resistance to targeted therapy impairs clinical use and efficiency. The mechanisms of resistance involve intrinsic and extrinsic alterations of tumours. Multiple therapeutic strategies have been investigated extensively to overcome resistance to anti-EGFR mAbs. The intrinsic mechanisms include EGFR ligand overexpression, EGFR alteration, RAS/RAF/PI3K gene mutations, ERBB2/MET/IGF-1R activation, metabolic remodelling, microsatellite instability and autophagy. For intrinsic mechanisms, therapies mainly cover the following: new EGFR-targeted inhibitors, combinations of multitargeted inhibitors, and metabolic regulators. In addition, new cytotoxic drugs and small molecule compounds increase the efficiency of cetuximab. Extrinsic alterations mainly disrupt the tumour microenvironment, specifically immune cells, cancer-associated fibroblasts (CAFs) and angiogenesis. The therapeutic directions include the modification or activation of immune cells, suppression of CAFs, and anti-VEGFR agents. In this review, we focus on the mechanisms of resistance to anti-EGFR monoclonal antibodies (anti-EGFR mAbs) and discuss diverse approaches to reverse resistance to this therapy in hopes of identifying more mCRC treatment possibilities.
Background
Metastatic colorectal cancer (mCRC) accounts for almost half of the newly diagnosed colorectal cancer cases and is associated with poor prognosis [1]. Epidermal growth factor receptor (EGFR) is a key factor in cellular proliferation, differentiation and survival [2], which drives the use of EGFR-targeted therapy in malignancy treatment [3]. The advent of cetuximab and panitumumab, two monoclonal antibodies (mAbs) directly targeting EGFR, has prolonged survival for 10-20% of mCRC patients [4]. According to the CRYSTAL trial, the application of cetuximab and FOLFIRI in first-line treatment can reduce the risk of progression by 15% and increase overall survival (OS) by 8.2 months in patients who have KRAS WT mCRC compared with patients taking FOLFIRI alone [5]. Although treatment with anti-EGFR monoclonal antibodies (anti-EGFR mAbs) and chemotherapy has a large effect on mCRC, its clinical application is limited because of drug resistance. The clinical benefit in responders treated with anti-EGFR mAbs has been shown to only last 8-10 months [6,7]. As treatment progresses, approximately 80% of responders develop drug resistance [8]. The mechanisms of resistance to anti-EGFR mAbs have been elucidated previously. Gene mutations downstream of the EGFR signalling pathway, including RAS/RAF/MEK and PI3K/AKT/mTOR, significantly contribute to drug resistance [9][10][11]. The activation of compensatory feedback loops of EGFR, such as erb-b2 receptor tyrosine kinase 2 (ERBB2), MET and insulin-like growth factor 1 receptor (IGF-1R), has been shown to interfere with EGFR inhibitor treatment [12][13][14]. In recent years, the intrinsic mechanisms of metabolism, autophagy [15], cancer stem cells (CSCs) [16] and epithelial-to-mesenchymal transition (EMT) [17] have also been confirmed to be correlated with disease progression despite anti-EGFR mAb treatment.
Extrinsic alterations of tumours may appear during treatment with cetuximab and panitumumab [18]. Currently, it is believed that microenvironment remodelling can reduce the cytotoxicity of anti-EGFR mAbs by impairing antibody-dependent cellular cytotoxicity (ADCC) and secreting growth factors [19,20]. Consequently, strategies to reverse resistance to anti-EGFR mAbs have been explored in experimental studies and clinical trials. These strategies include different aspects, such as new EGFR-targeted inhibitors, combinations of multitargeted inhibitors, metabolic regulators, immune therapy and new cytotoxic drugs. Here, we review the mechanisms underlying resistance to anti-EGFR mAbs and discuss the current studies on improving the efficiency of targeted therapy, increasing the number of available mCRC therapies.
Intrinsic mechanisms of resistance to targeted therapy and related strategies
Intrinsic alterations of tumours greatly contribute to resistance to anti-EGFR targeted therapy. Known intrinsic mechanisms are genetic mutations that induce activation of EGFR and compensatory feedback loop signalling. Recently, metabolic remodelling, CSCs and EMT have also been confirmed to promote resistance to targeted therapy (Fig. 1). Accordingly, different strategies have been used to reverse the resistance: (i) development of new EGFR-targeted inhibitors, (ii) combination of anti-EGFR mAbs with multitargeted inhibitors, (iii) metabolic regulators and (iv) new cytotoxic drugs (Tables 1 and 2).
EGFR ligands and EGFR
EGFR is part of the EGFR tyrosine kinase family [61] and is activated by multiple ligands, such as EGF, TGF-α, HB-EGF, epiregulin (EREG) and amphiregulin (AREG) [62][63][64]. The expression of EGFR ligands in primary tumours is potentially related to anti-EGFR therapy efficiency [65,66]. KRAS WT mCRC patients with higher expression of AREG and EREG seemed to obtain less survival benefit from cetuximab [64]. EGFR somatic sequence changes, including G465R, G465E, S468R and S492R, located at the extracellular domains (ECDs) of the EGFR-mAb interaction interface, confer resistance to cetuximab and panitumumab by preventing mAb binding [10,67,68]. In addition, R198/R200 methylation and a mutation in the kinase domain of EGFR (V843I) correlated with disease progression in the presence of cetuximab [69]. Thus, the development of new mAbs that can bind to different or mutated EGFR ECDs is expected to improve the efficiency of anti-EGFR mAbs. MM-151, an oligoclonal antibody that binds multiple regions of the EGFR ECD, was confirmed to inhibit EGFR signalling and cell growth in a preclinical study and to decrease mutations in circulating cell-free tumour DNA (ctDNA) of CRC patients [41]. Another FDA-approved EGFR antibody, necitumumab, can bind to S468R, the most common cetuximab-resistant variant of EGFR domain III [70]. Progression-free survival (PFS) and OS of patients taking necitumumab plus mFOLFOX6 were comparable to those of the cetuximab and FOLFOX regimens [21]. Considering the limitations of cetuximab and panitumumab in clinical use, it is necessary to generate more effective anti-EGFR antibodies. Sym004, a novel 1:1 mixture of two nonoverlapping anti-EGFR mAbs, showed significant advantages in abrogating EGFR ligand-induced phosphorylation and suppressing downstream signalling of all individual EGFR mutants both in cetuximab-resistant cell lines and in a tumour xenograft model [23,71].
A multicentre, phase 2 clinical trial further confirmed that Sym004 improved the OS of anti-EGFR-refractory mCRC by 5.5 months [22]. In addition, GC1118 is a novel, fully humanized anti-EGFR IgG1 antibody that displays inhibitory effects against patient-derived xenografts from CRC tumours with a KRAS mutation [40], especially in those with elevated expression of high-affinity ligands [72]. Compensatory feedback loop signalling The RAS/RAF/MEK/ERK and PI3K/PTEN/AKT axes are the main downstream signalling pathways of EGFR. Upregulated receptor tyrosine kinases (RTKs), including ERBB2, MET and IGF-1R, activate the PI3K/AKT axis or reactivate the ERK pathway independently of EGFR [73][74][75][76]. Alterations in these pathways, such as gene mutation, gene amplification, gene loss and abnormal phosphorylation, are of great significance in primary and secondary resistance to anti-EGFR mAbs [11,77]. Combining EGFR-targeted inhibitors with these targeted agents shows potential to reverse resistance to anti-EGFR mAbs. RAS mutations and RAS regulators RAS is a master element at the centre of EGFR signalling pathways [78]. Mutations within RAS put the RAS protein in a constitutively active state independent of upstream signals driven by growth factor receptors [79], leading to the failure of EGFR-targeted therapies. Mutations in RAS usually occur in KRAS, NRAS and HRAS, and the KRAS mutation is the most common of these genomic alterations, occurring in 40% of mCRC [9,80]. Mutations in exons 2, 3 and 4 of KRAS and exons 2, 3, and 4 of NRAS are powerful predictors of cetuximab and panitumumab response in mCRC [81,82]. However, codon 13 mutations (G13D) in KRAS do not predict nonresponse with complete accuracy [82]. Some missense and nonsense mutations at codons 20, 27, 30, or 31 have also been reported, whereas the effect of these mutations on GTPase activity and the outcome of CRC still needs further exploration [83][84][85]. RAS was the first driver gene found, and effective RAS inhibitors have been sought for over 30 years [86]. For example, sotorasib is a small molecule that selectively and irreversibly targets KRAS (G12C). Nevertheless, drugs against other KRAS mutations in codons 12, 13 and 61 still remain to be developed [87]. Therefore, it is important to find other therapies to improve the therapeutic outcome of these patients. In 2011, Wheeler and colleagues first reported that the addition of dasatinib to cetuximab showed a powerful antiproliferative effect on KRAS-mutant cell lines compared to either agent alone in vitro and in vivo [88].
Fig. 1 Intrinsic mechanisms of resistance to anti-EGFR mAbs in metastatic colorectal cancer. The intrinsic mechanisms include abnormal activation of oncogenic signalling pathways, aberrant gene expression, metabolic disorders, increased autophagy function and cancer stem cells. For example, genomic alterations and protein phosphorylation induce activation of the RAS/RAF/MEK/ERK and PI3K/AKT/mTOR cascades. ERBB2/MET amplification and abnormal IGF-1R activation stimulate compensatory feedback loop signalling of EGFR. The phenotype shift of cancer stem cells (CSCs) towards epithelial-to-mesenchymal transition (EMT) contributes to therapy resistance. Glycolysis, lipid synthesis, fatty acid oxidation and vitamin deficiency in cancer cells also reduce the efficiency of EGFR-targeted therapy. The agents for specific targets are also shown in the figure. Abbreviations: CSC, cancer stem cell; EMT, epithelial-to-mesenchymal transition; PI3K, phosphoinositide 3-kinase; IGF-1R, insulin-like growth factor 1 receptor.
However, another clinical study did not achieve the expected results. A phase IB/II study of 77 refractory CRC patients treated with dasatinib plus FOLFOX and cetuximab did not demonstrate meaningful clinical activity because the treatment did not fully inhibit the intracellular tyrosine kinase Src [24]. Notably, some untargeted agents displayed positive results in KRAS-mutated CRC cells. The combination of simvastatin and cetuximab suppressed BRAF activity and reduced the proliferation of KRAS-mutant cells [49]. Furthermore, metformin reversed KRAS-induced resistance to the anti-EGFR antibody by activating AMP-activated protein kinase (AMPK) and inhibiting mTOR [47]. Jung revealed that resistance to cetuximab in CRC cells with KRAS mutations can be bypassed by L-ascorbic acid, relying on the sodium-dependent vitamin C transporter 2 (SVCT-2) [51]. In addition, small chemical compounds such as KY7749 and methylglyoxal scavengers resensitize KRAS-mutated CRC cells to cetuximab in vivo [48,89]. Despite not specifically targeting the RAS protein, these drugs add alternative methods to reverse resistance induced by RAS. RAF mutations and RAF inhibitors BRAF is a serine-threonine kinase just downstream of EGFR/KRAS that activates the MEK/extracellular signal-regulated kinase (ERK) signalling cascade through its phosphorylation and then promotes cancer cell proliferation [90]. BRAF mutation is a powerful biomarker of poor prognosis for mCRC patients receiving anti-EGFR mAbs [80,91,92]. The hotspot BRAF V600E mutation at codon 600 of exon 15 increases the activity of BRAF kinase by 130- to 700-fold [26,[93][94][95]. The prevalence of BRAF V600E mutations in mCRC is 8-10%, and they occur mutually exclusively with KRAS mutations [25,96]. Some BRAF non-V600E mutations have also been reported, including D594G, G469A, L485F, L525R, Q524L and V600R, located in the kinase domain. Non-V600E mutations, other than Q524L, may also contribute to primary resistance to anti-EGFR mAbs [25,96]. BRAF V600E mutations occur in various cancers, such as melanoma, non-small-cell lung cancer, breast cancer and CRC, and inhibitors targeting BRAF have demonstrated clinical benefit for these patients. Vemurafenib, a selective oral inhibitor of the BRAF V600 kinase, achieved an approximately 50% response and improved survival among metastatic melanoma patients with the BRAF V600E mutation [97]. Recently, many clinical trials have been conducted to evaluate EGFR therapeutic resistance with vemurafenib. In 2015, a pilot trial of combined vemurafenib and panitumumab in BRAF-mutant mCRC patients post chemotherapy reported that the treatment limited tumour progression and resulted in modest clinical activity [98]. However, a multicentric clinical study containing 27 CRC patients showed that vemurafenib alone or with cetuximab did not benefit CRC patients [99]. The next year, a phase IB study affirmed the value of vemurafenib again. This study demonstrated that the triplet of vemurafenib, irinotecan, and cetuximab was well tolerated and achieved tumour regression in refractory BRAF-mutated mCRC [100]. Of course, more clinical studies are needed to ensure that vemurafenib is efficacious.
Despite the conflicting results of the vemurafenib studies, another BRAF inhibitor, encorafenib, has confirmed the feasibility of dual-targeted EGFR and BRAF treatment to increase the efficiency of anti-EGFR mAbs. The BEACON trial showed promising efficacy results, with an objective response rate (ORR) of 48% (95% CI, 29.4-67.5%) among 29 patients in the study [27]. Within the randomized portion of the BEACON trial, the confirmed ORR for the triplet treatment was much better than that for the control (26% vs. 2%). The median OS was 9 months on the triplet regimen compared to 5.4 months in the control group (P < 0.0001). Based on the randomized, phase III BEACON trial, a combination of encorafenib and binimetinib (a MEK inhibitor) with cetuximab has been recommended as a second-line systemic therapy for BRAF V600E-mutated CRC. MEK activation and MEK inhibitors MEK/ERK is the most important downstream cascade of the signalling pathways related to anti-EGFR mAbs. Mutations of RAS/RAF induce constitutive activation of MEK to promote cell proliferation and survival. Encouragingly, combination treatment with MEK and EGFR inhibitors seems to be a possible strategy to overcome the multifaceted clonal heterogeneity in tumours [29,101]. Additionally, there are already some small molecule MEK inhibitors under research. AS703026 (also known as pimasertib), AZD6244 (also known as selumetinib) and BAY86-9766 have a great ability to hinder the growth of mutant KRAS cells in vitro and in vivo by specifically suppressing the key target kinase ERK, which is downstream of MEK [30,42]. MEK inhibitors were also confirmed to increase the tumour-suppressive effect of cetuximab. Selumetinib or pimasertib plus cetuximab enhanced antiproliferative and proapoptotic effects in cells resistant to cetuximab in vitro and in vivo [102,103]. However, Misale found that, in vitro and in vivo, the growth of resistant cells could not be hampered by MEK1/2 inhibitors alone; instead, the synergistic pharmacological blockade of EGFR and MEK induced prolonged ERK inhibition and severe growth impairment of resistant tumour cells [43]. More importantly, the combination of selumetinib and cetuximab failed to achieve positive results in a clinical trial in refractory metastatic CRC patients with KRAS mutations [30,101]. To date, binimetinib is the only MEK inhibitor approved by the Food and Drug Administration for clinical use in mCRC. Clinical verification of the feasibility of MEK inhibitors to reverse EGFR therapeutic resistance is urgently required. PI3K/AKT activation and PI3K/AKT inhibitors Phospho-EGFR is capable of initiating the PI3K/AKT/mTOR pathway, and PI3K mutation and aberrant AKT/mTOR activation promote resistance to anti-EGFR mAbs [104]. The most common mutations in PIK3CA are in exons 9 (68.5%) and 20 (20.4%), and they are detected in 10-18% of mCRC [11,105]. Mutations in PIK3CA exon 20 were significantly associated with a worse outcome in KRAS WT mCRC patients treated with cetuximab, whereas PIK3CA exon 9 mutations had no effect on outcome in KRAS WT mCRC patients [11]. PTEN is a negative regulator of the PI3K/AKT pathway, and its loss is found in 20-40% of mCRC [106,107]. Loss of PTEN protein results in long-term tumour growth by activating PI3K/AKT. Patients with PTEN-negative status showed a worse response rate and shorter progression-free survival (PFS) than those with PTEN-positive status [106,108,109].
PI3K gene mutation and PTEN protein loss are confirmed as novel biomarkers in mCRC patients treated with anti-EGFR mAbs [110]. Combinations of cetuximab and PI3K, AKT or mTOR inhibitors can profoundly control tumour growth in mCRC regardless of driver genotypes [32]. Specifically, PI3K inhibitors have been shown to greatly inhibit the growth of cancer in preclinical and clinical experiments. For example, the PI3K inhibitor XL147 was reported to inhibit the PI3K pathway with a 40-80% reduction in the phosphorylation of AKT and 4EBP1 in tumours and unexpectedly inhibited the MEK/ERK pathway in a phase I trial [111]. Then, in an investigation of the effects of XL147 on proliferation in a panel of tumour cell lines, Shapiro et al. revealed that XL147 was useful for PI3K mutation/amplification cell lines without KRAS/BRAF/ PTEN mutation [112]. Another PI3K inhibitor, BKM120, was found to impede KRAS mutation-induced colorectal cancer growth both in vitro and in vivo, regardless of PI3K genotype [45]. Despite the ideal results of preclinical studies, clinical trials on the combination of PI3K inhibitors and EGFR-targeted agents are frustrating. PX-866 is a panisoform inhibitor of PI3K; however, the addition of PX-866 to cetuximab did not improve the PFS and OS of KRAS WT mCRC and caused greater toxicity in a phase II study [32]. Considering the lack of clinical trials on the combination of PI3K inhibitors with cetuximab or panitumumab, the application of PI3K inhibitors in enhancing the response to anti-EGFR mAbs remains unascertainable thus far. The clinical use of ERBB2-targeted drugs, such as trastuzumab, pertuzumab and lapatinib, improved outcomes for breast cancer and colorectal cancer patients with ERBB2 amplification [115,116]. Dual-targeted therapy with EGFR and ERBB2 inhibition were found to restore sensitivity to cetuximab in vitro and in vivo. The monoclonal antibody 4D5 is an ERBB2 inhibitory antibody that shows antitumour function in an EGFRdependent manner. The combination of the mAb 4D5 with cetuximab induced a significant decrease in proliferation in the EGFR-dependent colon cancer cell line and an actual regression of the tumours in xenografted mice [43]. Similarly, trastuzumab, the most common antibody for ERBB2, inhibits the growth of CRC cells when combined with cetuximab [44]. However, the pan ERBB kinase inhibitor neratinib plus cetuximab did not reach an objective response in anti-EGFR treatment refractory mCRC with quadruple-wild-type (KRAS, NRAS, BRAF, PIK3CA) in a phase II clinical trial [31]. In general, it will be important to investigate the efficiency of mAb 4D5 and trastuzumab in the clinic to confirm the value of anti-ERBB2 agents. IGF-1R activation and anti-IGF-1R mAbs The IGF-1/IGF-1R pathway plays a crucial role in CRC proliferation, differentiation, apoptosis, migration and angiogenesis. Hyperactivation of IGF-1R results in primary and secondary resistance to EGFR inhibition in RAS wild-type mCRC by upregulating the PI3K/AKT pathway [117,118]. Analyses from two clinical trials confirmed that the coexpression of pIGF-1R and MMP-7 in RAS wild-type mCRC predicts worse OS after treatment with cetuximab [119]. Therefore, targeting both EGFR and IGF-1R may be a potential therapy for mCRC. Disappointingly, a trial showed that the combination of cetuximab and dalotuzumab or IMC-A12 did not improve the survival of mCRC resistant to cetuximab [35,36]. 
MET amplification/activation and MET inhibitors The MET signalling pathway is another compensatory feedback loop that mostly arises during treatment with anti-EGFR mAbs. Phosphorylation of MET induces the activation of the PI3K/AKT and RAS/RAF/MAPK cascades to rescue tumour cells from EGFR inhibitors [120]. Bardelli et al. highlighted that MET amplification is related to acquired resistance to anti-EGFR therapy in tumours without KRAS mutations [12]. Moreover, the ligands HGF and TGF-α can bind to MET and then increase the phosphorylation of MET and its downstream MAPK and AKT [121,122]. Accordingly, the resistance function of MET was demonstrated by combining MET inhibitors and cetuximab [123]. Therefore, the application of MET inhibitors has strong therapeutic potential in human cancers. The combination of MET inhibitors with anti-EGFR agents presents encouraging results in both preclinical and clinical studies. Tivantinib (ARQ 197), a selective, non-ATP-competitive inhibitor of c-MET, displayed tolerable toxicity and suggested some activity in previously treated mCRC when combined with cetuximab and chemotherapy [124]. Another phase II clinical study reported a positive outcome in mCRC patients who received tivantinib plus cetuximab. Forty-one patients with tumour progression on cetuximab or panitumumab treatment were enrolled in the study; the ORR was 9.8% (4/41), the median progression-free survival (mPFS) was 2.6 months (95% CI, 1.9-4.2 months), and the mOS was 9.2 months (95% CI, 7.1-15.1 months) [33]. Another small molecule c-MET inhibitor, crizotinib, has been shown to improve the efficiency of radiotherapy in cetuximab-resistant KRAS-mutant CRC cell lines [46]. At the American Society of Clinical Oncology (ASCO) 2019 meeting, the phase II multicentre, multicohort GEOMETRY mono-1 clinical study showed that the combination of capmatinib (a c-MET inhibitor) with gefitinib (an EGFR-TKI) had a good overall response rate in EGFR-TKI-resistant patients, particularly those with MET-amplified disease [34]. Furthermore, capmatinib plus cetuximab showed preliminary signs of activity in MET-positive mCRC patients who had progressive disease following anti-EGFR mAbs [125]. Suppression of MET will therefore be an important target in overcoming resistance to anti-EGFR therapy. Microsatellite instability and immune checkpoint inhibitors Microsatellite instability (MSI), caused by dysfunctional mismatch repair (dMMR), is detected in approximately 15% of all CRC and in nearly all cases with Lynch syndrome [126]. The relationship between microsatellite status and cetuximab efficiency is another area of interest. In the CALGB/SWOG 80405 study, patients with microsatellite instability-high (MSI-H) tumours showed worse OS in the cetuximab arm than in the bevacizumab arm [127]. MSI may interact with oncogenic drivers such as BRAF and ERBB2 to promote cetuximab resistance. BRAF V600E occurs in 40% of sporadic MSI-H CRCs and typically arises subsequent to hMLH1 hypermethylation [128]. In addition, other hotspot mutations in KRAS, PIK3CA and ERBB2 were identified in BRAF WT MSI CRC patients [129]. It has been proven that hMLH1 deficiency plays a role in cetuximab resistance by increasing the expression level of ERBB2 and downstream PI3K/AKT signalling [130]. Although we believe that mismatch repair genes may partly modulate the expression of oncogenic drivers, the mechanism remains largely unclear and is a worthwhile focus for further research.
EGFR-targeted treatment increased the infiltration of cytotoxic immune cells and the expression of the PD-L1 immune checkpoint, which may be a potential method to treat cetuximab-resistant CRCs with immunotherapy [19]. In addition, NK cell-mediated ADCC activated by cetuximab triggers immunogenic death of tumour cells, thereby increasing the antitumour activity of immunotherapy [131]. The phase II CAVE mCRC trial demonstrated that rechallenge avelumab (anti-PD-L1) plus cetuximab resulted in a mPFS of 3.6 months and a median OS (mOS) of 11.6 months in a RAS WT mCRC population who developed acquired resistance to anti-EGFR drugs [38]. The ongoing AVETUXIRI trial investigates the efficiency of avelumab combined with cetuximab and irinotecan for refractory mCRC patients with microsatellite stability. The current study data has shown that encouraging results of DCR, PFS and OS were observed in both the RAS MT and RAS WT cohorts [132]. The combination of immune checkpoint inhibitors with anti-EGFR mAbs may bring great breakthroughs to overcome resistance to anti-EGFR drugs and improve the outcome of mCRC regardless of the status of RAS. Metabolic remodelling and regulators Alterations in cellular metabolism are essential for rapid tumour proliferation and affect the sensitivity of cancer cells to various drugs [133]. Anti-EGFR treatment causes metabolic rewiring in CRC patients, which makes it possible to increase anti-EGFR mAb efficiency by adding metabolism regulators. Abnormal glycometabolism reduces the efficiency of anti-EGFR therapy. High glycolytic metabolism regulated by TRAP1 was involved in resistance to EGFR mAbs [15]. Sirt5-positive CRCs develop cetuximab resistance due to an elevated succinate-to-ketoglutarate (αKG) ratio, which inhibits αKG-dependent dioxygenases [134]. Sodium glucose transporter 2 (SGLT2) can ensure glucose entry into cells and is highly expressed in the majority of cancer cells. The SGLT2 inhibitor dapagliflozin combined with cetuximab dramatically reduced carcinoembryonic antigen (CEA) and substantial shrinkage of metastatic tumour lesions [37]. The methylglyoxal scavenger carnosine was confirmed to resensitize KRAS-mutated colorectal tumours to cetuximab in vivo [48]. AMPK activity was consistent with the sensitivity of anti-EGFR mAbs, and metformin overcame KRAS-induced resistance to anti-EGFR antibodies by regulating AMPK/mTOR/Mcl-1 (myeloid cell leukaemia 1) in vivo and in vitro [47]. Fatty acid metabolism displayed strong antiapoptotic effects in cetuximab-nonresponders [135]. Inhibition of lipid synthesis or decomposition with simvastatin or glutaminase 1 inhibitor CB-839 significantly reduced tumour growth of CRC under cetuximab treatment [49,50]. In addition, vitamin D deficiency has a negative impact on cetuximab-induced ADCC. Supplementation with vitamin D in vitamin-deficient/insufficient CRC cells has been suggested to improve cetuximab-induced ADCC in CRC cell lines [136]. Resistance to cetuximab in mutant KRAS CRC patients can be reversed by L-ascorbic acid by reducing RAF/ERK activity in an SVCT-2-dependent manner [51]. Others Autophagy and cancer stem cells (CSCs) also contribute to resistance to EGFR target therapy [18,137]. Treatment with anti-EGFR agents results in dysregulation of autophagy [138]. Increased levels of autophagy-related proteins such as Beclin-1 and LC3 were observed in cetuximab-treated patients [139,140]. Inhibition of autophagy by chloroquine and 3-methyladenine sensitizes cancer cells to cetuximab [138,139]. 
However, blocking general autophagy might greatly affect normal cell growth. Therefore, developing specific autophagy inhibitors that target tumour cells is crucial. CSCs possess genetic determinants of the EGFR therapeutic response and are primarily supported by a network of pluripotency transcription factors (PTFs). Single-nucleotide polymorphisms of PTFs were significantly associated with PFS in the cetuximab cohort of the FIRE-3 trial [141]. The propensity of CSCs to undergo EMT reflects a core transcriptional network that can predict the efficacy of EGFR-targeted therapy in KRAS WT CRC [142]. Inhibition of EMT is therefore of great interest for reversing EGFR therapeutic resistance. β-Elemene, a bioactive monomer isolated from the Chinese herb Curcumae Rhizoma, has been shown to induce ferroptosis and reduce EMT to increase cetuximab activity in RAS-mutated CRC cells [60]. Furthermore, cytotoxic drugs and natural bioactive monomers were confirmed to overcome resistance to EGFR-targeted drugs. TAS-102 is a novel chemotherapeutic agent that contains a thymidine phosphorylase inhibitor, tipiracil hydrochloride, and a cytotoxic thymidine analogue, trifluridine, and has been approved for the treatment of mCRC. Panitumumab/TAS-102 cotreatment showed additive antiproliferative effects in LIM1215 CRC cells in vitro and in vivo [59]. Extrinsic mechanisms of resistance to targeted therapy and related strategies Microenvironmental plasticity, which is dramatically affected by EGFR inhibition, is as powerful a driver of drug resistance as genetic alterations [18] (Fig. 2). Dysfunction of immune cells, abnormal infiltration of cancer-associated fibroblasts (CAFs) and angiogenesis impair EGFR therapeutic efficiency. Strategies to remodel the tumour microenvironment are part of a larger goal of increasing the efficiency of anti-EGFR mAbs. These strategies include (i) modification or activation of NK cells and T cells, (ii) suppression of CAFs, and (iii) inhibition of angiogenesis (Tables 1 and 2).
Fig. 2 Extrinsic mechanisms of resistance to anti-EGFR mAbs in metastatic colorectal cancer. Tumour microenvironment plasticity confers resistance to EGFR-targeted therapy. Cetuximab and panitumumab suppress tumours through ADCC mediated by NK cells and macrophages. Dysfunction of NK cells and macrophages, with lower ADCC, impairs the suppressive effect of EGFR-targeted therapy in cancer. Reduced density of effector T cells and increased PD-L1 expression in cancer cells also promote the survival of cancer cells. CAFs promote resistance to targeted therapy by secreting growth factors that activate the RAS or MET pathway. Abnormal angiogenesis always predicts a poor response to anti-EGFR mAbs. Therapies focused on the microenvironment are also shown in the figure. Abbreviations: CAFs, cancer-associated fibroblasts; NK cells, natural killer cells; ADCC, antibody-dependent cellular cytotoxicity; PD-1, programmed death 1; PD-L1, programmed death ligand 1; VEGF, vascular endothelial growth factor; VEGFR, vascular endothelial growth factor receptor.
Immune cells and agents Antibody-dependent cellular cytotoxicity (ADCC) mediated through Fc receptors (FcγRs) on immune cells is one of the proposed antitumour mechanisms of anti-EGFR mAbs [20]. Cetuximab-treated patients with FcγRIIa-131R and/or FcγRIIIa-158F genotypes had shorter PFS than 131H/H and/or 158V/V carriers [143]. When exposed to cetuximab-coated CRC cell lines, human NK cells substantially increase the expression of the costimulatory molecule CD137 (4-1BB) [144].
The combination of cetuximab with anti-CD137 mAb administration was synergistic and resulted in complete tumour resolution and prolonged survival, which were dependent on the participation of NK cells [52]. IL-2 and IL-15 cooperate with cetuximab to stimulate NK cells and improve cytotoxic functionality [145]. Another preclinical study using a mouse model reported that umbilical cord blood stem cell-derived NK (UCB-NK) cells increased antitumour cytotoxicity against CRC regardless of the status of EGFR and RAS [53]. Neither cetuximab nor panitumumab can engage T cells when T cells lack Fcγ receptors, which serve as targets for modifying T cells to enhance the ADCC activity of anti-EGFR agents [54]. T cell-engaging BiTE antibodies targeting the binding domains of cetuximab and panitumumab transiently connect T cells with cancer cells to initiate redirected target cell lysis. Then, they showed that cetuximab-based BiTE antibody mediated potent redirected lysis of KRAS-and BRAF-mutated CRC lines in vitro and prevented the growth of tumours from xenografts [54]. Toll-like receptor 9 (TLR9) is expressed in various immune cells, such as macrophages, NK cells, B lymphocytes and plasmacytoid dendritic cells [146,147]. Toll-like receptor 9 (TLR9) activation causes antitumour activity by interfering with cancer proliferation and angiogenesis [148]. IMO is a novel second-generation, modified, immunomodulatory TLR9 agonist and was proven to synergistically inhibit tumour growth by improving the ADCC activity of cetuximab in a cetuximab-resistant colorectal cancer line and a mouse model regardless of KRAS genotype [55,56]. CAFs and inhibitors Cancer-associated fibroblasts (CAFs) are believed to play a vital role in promoting tumour metastasis and drug resistance by secreting mitogenic growth factors, including FGF1, FGF2, HGF, TGF-β1 and TGF-β 2 [19]. Luraghi et al. reported that HGF can bind to MET receptors and activate MAPK and AKT to induce cetuximab resistance in vitro [75]. The dual inhibition of FGFR and EGFR may be a practical strategy to reverse resistance to anti-EGFR mAbs. The combination of BLU9931,an FGFR4 inhibitor, with cetuximab presented profound antitumour activity compared to cetuximab alone [57]. Regorafenib, a multikinase inhibitor targeting FGFR, VEGF and PDGFR-β, was found to overcome cetuximab resistance in GEO-CR and SW48-CR cells in vitro and in vivo [58]. Angiogenesis and inhibitors Inhibition of angiogenesis is also one of the mechanisms of cetuximab action. Treatment with cetuximab reduced the expression of vascular endothelial growth factor (VEGF), and a high level of VEGF under cetuximab treatment was associated with a lower response rate and shorter PFS in mCRC [149,150]. VEGF is one of the most significant angiogenetic factors, and it contributes to cancer prognosis and metastasis. Therefore, it is worth exploring the feasibility of dual-targeted VEGF and EGFR in colorectal cancer. Combination treatment with anti-VEGF and anti-EGFR antibodies demonstrated synergistic activity in vitro, and tumour growth and angiogenesis were strongly suppressed in an in vivo xenograft mouse model [151]. However, in another study, the use of bevacizumab and cetuximab together did not have a greater increase in apoptotic tumour cell death compared to either drug alone [152]. Recently, small molecule inhibitors targeting VEGF have presented the potential to increase the efficiency of anti-EGFR therapy. 
Pazopanib, a multitargeted tyrosine kinase inhibitor, combined with irinotecan and cetuximab showed manageable safety and feasibility in refractory mCRC [153]. The combination of the anti-EGFR antibody cetuximab and the multikinase VEGF inhibitor regorafenib overcame intrinsic and acquired resistance in mCRC. Eight of 17 mCRC patients, all of whom had previously received anti-VEGF and anti-EGFR therapy, showed clinical benefit from cetuximab and regorafenib, including a partial response in 1 patient and stable disease in 7 patients [39]. Dual targeting of VEGF and EGFR seems to be an effective choice for mCRC patients receiving multiline treatment. Conclusions and future directions Heterogeneity and adaptive alterations promote resistance to anti-EGFR targeted therapy and are strongly associated with the clinical outcome of colorectal cancer (Fig. 3). RAS/BRAF/MEK mutations downstream of the EGFR pathway and ERBB2/MET/IGFR/PI3K mutations or amplifications bypassing EGFR are strong biomarkers to predict the efficiency of anti-EGFR mAbs. It is of great importance to ascertain the molecular subtypes in mCRC before treatment. Advances in gene detection methods such as ctDNA, liquid biopsy and exosome DNA sequencing make molecular subtyping feasible. By identifying similarities and differences among tumour subtypes, the use of precision medicine results in greater cancer eradication and better patient care. For subpopulations with driver-gene mutations, combination therapies of different targeted inhibitors make great strides in overcoming resistance to anti-EGFR mAbs. Combining EGFR-targeted therapy with inhibitors of BRAF, MET and MEK has produced the expected results in clinical trials. It is recommended to use encorafenib, binimetinib and cetuximab in the second-line treatment of mCRC. More clinical studies are needed to ensure the effectiveness of MEK inhibitors. In addition, the new generation of anti-EGFR monoclonal antibodies and cytotoxic agents shows promise for achieving better outcomes, but further research is needed before clinical application. In this review, we provide new insight into EGFR therapeutic resistance in the tumour microenvironment (TME) and summarize current agents targeting the TME. The TME, including the immune microenvironment and vascular microenvironment, facilitates tumour growth and metastasis. The ADCC activity of anti-EGFR mAbs mediated by NK cells, T cells and macrophages is one of the antitumour mechanisms of cetuximab and panitumumab. The strong effect of cetuximab on the immune landscape dramatically changes immune infiltrates. Thus, more effective immunotherapies are anticipated to suppress the growth and metastasis of tumours. Some antibodies or inhibitors constructed to bind FcγR or TLR9 and stimulate ADCC mediated by NK cells, T cells and macrophages present significant antitumour activity in cell lines and mouse models.
Fig. 3 Strategies to increase anti-EGFR therapy efficiency in different subtypes of mCRC. Biomarker analysis should be conducted before treatment for mCRC. For patients with disease progression on anti-EGFR therapy, biomarker analysis is still recommended. For mCRC with driver gene alterations, there are some therapies to increase anti-EGFR efficiency. In RAS-mut mCRC, the selected therapies include a combination of RAS inhibitors and anti-EGFR agents, metabolic regulators, immune therapy, cytotoxic drugs and natural bioactive monomers. In RAF-mut mCRC, the main therapy is a BRAF inhibitor.
In ERBB2-amp mCRC, ERBB2 inhibitors can be used to enhance the antiproliferative effect of anti-EGFR therapy. In MET-amp mCRC, combined therapy with MET inhibitors and anti-EGFR mAbs was confirmed to be effective. In mCRC with EGFR ECD-mut, new anti-EGFR agents are preferred. In mCRC with no driver gene alteration, multitargeted therapies, metabolic regulators, immune therapy, cytotoxic drugs and antiangiogenic agents can be used with anti-EGFR therapy. Abbreviations: mCRC, metastatic colorectal cancer; EGFR, epidermal growth factor receptor; ERBB2, human epidermal growth factor receptor 2; MET, tyrosine-protein kinase Met; MSI-H, microsatellite instability-high; dMMR, dysfunctional mismatch repair; PD-1/PD-L1, programmed death-1/programmed death ligand 1; ECD, extracellular domain; WT, wild type; mut, mutation.
Dual targeting of VEGF and EGFR shows exciting results in multiline-treated mCRC patients, providing a chance for improved outcomes in refractory patients. Notably, anti-EGFR therapy enhances the expression of PD-L1 on tumours and the infiltration of CD8+ T cells. Therefore, this feature may expand the indications for immune checkpoint inhibitors in CRC. Treatment with immune checkpoint inhibitors, either alongside anti-EGFR mAbs or afterwards, is a promising therapy for mCRC. In summary, the recognition of resistance to EGFR-targeted therapy has progressed from driver genes to nongenetic alterations. Different therapies that reverse EGFR therapeutic resistance demonstrate potential in preclinical and clinical trials. These treatments show promise in taking a giant step towards overcoming EGFR therapeutic resistance. Acknowledgements Not applicable. Authors' contributions Qi Li generated the ideas and designed the structure of the manuscript. Jing Zhou organized all figures and wrote the manuscript. Qing Ji and Qi Li revised and approved the final manuscript. All authors read and approved the final manuscript. Funding This work was supported by the National Science Foundation of China (81830120 to Q.L., 82074225 to Q.J.) and the Clinical Research Plan of Shanghai Hospital Development Center (SHDC2020CR4043 to Y.W.). Availability of data and materials All the data obtained and/or analyzed during the current study are available from the corresponding authors on reasonable request. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that they have no potential competing interests.
v3-fos-license
2018-12-07T06:10:27.519Z
2013-04-24T00:00:00.000
55303675
{ "extfieldsofstudy": [ "Economics" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://ccsenet.org/journal/index.php/ijef/article/download/26859/16380", "pdf_hash": "bb8c406946a2256a392313a2dabe7b69a6470059", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:179", "s2fieldsofstudy": [ "Economics" ], "sha1": "bb8c406946a2256a392313a2dabe7b69a6470059", "year": 2013 }
pes2o/s2orc
Diversification in a Small Market: Some Evidences from Namibia Maximizing returns and minimizing risk through diversification has been a popular topic in economics and finance research. Studies have shown that correlation among international portfolio returns increases during periods of turbulence in capital markets, meaning that benefits from international diversification are lost exactly when they are needed most (Bodie, Kane & Marcus, 2008). This and other similar findings pave the way for nontraditional diversification strategies. The present paper is an attempt to analyse portfolio returns and the diversification benefits of including gold, bonds, real estate and stock in the portfolio of a Namibian investor. Introduction Despite a plethora of research supporting the notion of international diversification, recent studies have shown that correlation among international portfolio returns increases during periods of turbulence in capital markets, meaning that benefits from international diversification are lost exactly when they are needed most (Bodie, Kane & Marcus, 2008). This and other similar findings pave the way for nontraditional diversification strategies. Inspired by this, the current paper attempts to identify innovative diversification opportunities and to assess the effect of including direct investment in gold and real estate as diversification strategies. Several studies (Chua, 1999; Goetzmann, 1993; Lee, 2008) have shown that including housing investment in an asset portfolio increases portfolio returns and decreases portfolio risk. Although many researchers have studied the effect of real estate in multi-asset portfolios in several countries, little research has been done in Namibia on diversification in general and on the diversification effect of non-traditional asset classes such as gold and real estate in particular. The remainder of this article is organized as follows. Section two, the literature review, discusses earlier research on portfolio diversification, while section three briefly explains the objectives of the present study. Section four succinctly describes the methodology used, while section five presents the discussion and analysis of findings. Section six presents the conclusion of the research. Literature Review Diversification for maximizing returns and minimizing risk is an important economic task for individual investors and portfolio managers. Modern portfolio theory, dating back to the seminal work of Markowitz (1952), says that an investor should optimize her portfolio's return-risk-exposure trade-off by carefully spreading out her scarce resources over various assets. Unfortunately, this task is quite demanding, as infinitely many possible combinations have to be considered (Baltussen & Post, 2011). Investors and researchers have long debated the two popular maxims, "put all your eggs in one basket and then watch the basket" and "do not put all your eggs in one basket"; however, the latter appears to be the belief of many (Olaleye & Aluko, 2007). Based on several research studies, they further conclude that diversification benefits may be captured by combining different classes of real estate assets in different locations, by acquiring different property types, or by using both strategies (Olaleye & Aluko, 2007).
Over the last two decades, the international financial markets have experienced a series of financial crises and turbulences in different parts of the world, which resulted in drastic drops and excessive volatility in the stock markets of the crisis-originating countries as well as the markets of other economies through the "contagion" effect. The recurring heightened volatility in the stock markets imposes substantial risk on stock investment (Ibrahim & Baharom, 2011). Existing studies on stock market risk have a predominant focus on characterizing the risk using GARCH-type models and on whether the risk can be diversified through international diversification. Studies on the benefits of international diversification tend to suggest increasing interactions among national markets; these interactions are more intense during crisis episodes and accordingly limit the benefits of diversifying away financial risks originating from a specific market, thus highlighting the need to identify other types of financial assets as a protection against this risk (Ibrahim & Baharom, 2011). A large body of recent literature exists on the benefits of international equity diversification, which emanates from Markowitz's theory: "the less the assets are correlated, the greater the benefit of risk diversification". However, in today's volatile global environment, with increasing interdependence among world stock markets, especially after the global financial crisis of 2008-2009, many have questioned "whether it still makes sense to diversify globally" or "can the investments in global equity portfolios be protected in today's volatile and interdependent markets?" (Hsu, 2011). Traditionally, investment managers in direct real estate have focused on a single geographical region. To achieve diversification, they have invested across different property types, in assets with different characteristics, or by selecting assets in targeted areas within that region. Achieving diversification through international investment, common in other asset classes, has not been considered as attractive for direct real estate because real estate markets are less transparent and there are higher risks and costs involved (Wit I de, 2010). Liow (2010), in an analysis of integration among securitized real estate markets, found that conditional correlations are (substantially) weaker than the broader stock market correlations, implying the existence of potential benefits in international portfolio diversification that includes real estate. Liow (2010) also found stronger return linkages between some pairs of real estate securities markets, as well as between the securitised real estate markets and the global stock market, over the past decades, implying that international linkages have been increasing over time, although their integration process has been much slower than that among the corresponding broader stock markets and with the world stock market. This further implies that the potential portfolio diversification benefits across the major real estate securities markets and the world stock markets might diminish in the long run. Masron and Fereidouni (2010), in their study of the performance and diversification benefits of housing investment in Iran, concluded that housing is not only an effective asset for investment but also a good vehicle of diversification if included in a mixed-asset portfolio. They further concluded that investment in the housing sector produces real investment returns, as housing returns exceed the rate
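To make the Markowitz intuition quoted above concrete, the standard two-asset case (a textbook illustration, not an equation taken from this paper) shows exactly where correlation enters portfolio risk:

$$\sigma_p^2 = w_1^2\sigma_1^2 + w_2^2\sigma_2^2 + 2\,w_1 w_2\,\rho_{12}\,\sigma_1\sigma_2,$$

where $w_1, w_2$ are the portfolio weights, $\sigma_1, \sigma_2$ the standard deviations of the two assets' returns and $\rho_{12}$ their correlation. For given weights and volatilities, portfolio variance falls as $\rho_{12}$ falls, and for any $\rho_{12} < 1$ the portfolio standard deviation is strictly smaller than the weighted average of the individual standard deviations; this gap is the diversification benefit that rising cross-market correlations during crises erode.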
of growth in the CPI (the inflation rate). Gold has proven to be a solid investment choice, stable in times of global geopolitical instability and economic uncertainty, recession and depression. Used correctly, gold and silver can be effective components of a properly diversified investment portfolio (O'Byrne, 2007). Objectives of the Paper The objectives of this paper are:
- To identify the trend of returns offered by various asset classes in Namibia
- To analyse whether direct investment in gold and real estate provided diversification benefits to Namibian investors
Methodology The present study is based on secondary data collected from a variety of sources. Data for the overall index and the local index were collected from the Namibia Stock Exchange Limited (NSX). Both index price series were then converted into monthly returns. The rate of return on risk-free assets is the interest rate on treasury bonds, collected from the publications of the Bank of Namibia (BON). Since monthly interest rates were not available, the annual interest rates for treasury bonds were converted into monthly rates. Monthly gold prices were obtained from the website of the World Gold Council, and monthly returns on investment in gold were determined. Monthly real estate price data were obtained from the First National Bank of Namibia (FNB), which has published these data since October 2007. Using the FNB data, the rates of return on investment in physical real estate were computed. It should be noted that real estate data were not available for the whole period of study, as prior to October 2007 no reliable data on housing prices were available. The first part of the discussion and analysis thus explains the portfolio returns on three assets and the diversification potential of these assets, viz. bonds, the overall index and the local index. This is followed by a discussion of portfolio returns and diversification including gold as a fourth asset class. Further, since about half of the study period, from the third quarter of 2007 onwards, represents more volatile returns on stock market investments, the data were divided into two parts: the first part was used to analyse the diversification effect of these assets during the period before the start of the global financial crisis, and the second sub-sample was used to analyse the contribution of these four assets to portfolio returns as well as their diversification potential during the period of volatility (the second half of the study period). Incidentally, as housing price data were also available for this period, the effect of including real estate in the portfolio was also analysed for the second part of the study period. Namibian Stock Exchange (NSX) - A Brief Profile The Namibian Stock Exchange (NSX) is a small stock exchange with 24 securities listed on the overall index and 7 securities listed on the local index, which means that 17 of the companies listed on the overall index are dual listed elsewhere in the world and are multinational companies operating in Namibia. The year-to-date volume traded at the beginning of April 2012 was 19,989,577 shares on the local index and 40,481,113 shares on the overall index, respectively. The year-to-date value traded on the same date for the two indexes amounted to N$ 247.91 million on the local index and N$ 1,150.59 million on the overall index (1 USD = N$ 7.8 on the same date).
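The return calculations described in the Methodology can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' code: the file and column names are hypothetical, and the paper does not state whether the annual treasury-bond rates were de-annualised arithmetically or geometrically, so the geometric conversion shown here is an assumption.

```python
import pandas as pd

# Hypothetical input files; the study's actual data are not published with the paper.
prices = pd.read_csv("nsx_monthly_prices.csv", index_col="date", parse_dates=True)

# Simple monthly percentage returns from month-end index levels and gold/house prices.
monthly_returns = prices[["overall_index", "local_index", "gold", "house_price"]].pct_change() * 100

# Convert the annual treasury-bond rate to a monthly rate (geometric de-annualisation).
bond = pd.read_csv("bon_treasury_rates.csv", index_col="date", parse_dates=True)["annual_rate_pct"]
monthly_bond_return = ((1 + bond / 100) ** (1 / 12) - 1) * 100

returns = monthly_returns.assign(bonds=monthly_bond_return).dropna(how="all")
print(returns.describe())  # per-asset mean and standard deviation, analogous to Table 1
```

Because the housing series only starts in October 2007, the real-estate column would simply be missing for the earlier sub-sample, which mirrors the split described above.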
Analysis First of all, the portfolio returns were calculated assuming that an investor takes an equal position in the local index, the overall index and bonds. Table 1 presents the average monthly returns, standard deviation and risk-to-return profile of the three assets for the period September 2003 - December 2011. It is evident from the table that investment in bonds during this period proved to be nearly risk free (SD = 0.15), with an average monthly return of 0.65% and the highest Sharpe ratio of 0.3329 among the three asset classes available. Investment in the local index also proved to be worthwhile, with monthly average returns of 1.38% and a standard deviation of 2.52%. Based on the correlation matrix (Table 2), it is clear that the overall index and the local index have no correlation at all and that the overall index is negatively correlated with bonds (r = -0.18), which means that mixing these assets provided good diversification benefits. It is clear that a position in the local index together with bonds provided good returns while lowering the portfolio risk. Table 3 indicates the average monthly returns of the minimum variance portfolio invested in the three assets. As concluded by many researchers (AlKulaib & Almudhaf, 2012; Ibrahim & Baharom, 2011; Kristof, 2011; McCormick, 2010), adding gold into one's investment basket has provided both diversification and hedging benefits. Following from such empirical findings, it was tested whether gold does shine in the portfolio of a Namibian investor. Since Namibia is a small market, there is no option for securitized investment in gold locally; therefore, direct investment in gold was considered as an option. The following section presents the effect of including direct investment in gold in one's portfolio. Table 4 shows the portfolio returns including gold for the period from September 2003 to December 2011. Figure 1 presents an overview of the average monthly returns on gold, the overall index, the local index and bonds. As can be observed from Figure 1, the overall index returns have been the most volatile during this period. As most of the companies included in the overall index are dual-listed multinational companies, they were less resilient to the shocks of the global economic meltdown. It may be observed from Table 4 and the following Table 5, showing the correlation matrix, that investment in gold proved to be a good diversification option to minimize risk, as it had very low correlation with the other assets, especially during the second half of the study period, when it was most needed. As observed earlier, overall index returns have been negative during the second half of the study period; an appropriate strategy would have been no investment in the overall index during this period. Table 6 presents the portfolio returns with and without gold for the second half of the study period. It may be further concluded that a small position in gold would increase the portfolio's monthly returns from 0.7% to 0.8% and push the Sharpe ratio from 0.35 to 0.37. Having observed the effect of investment in gold on portfolio returns and diversification, the next step was to study the effect of direct investment in real estate. Since housing price data were available only from October 2007, their effect could be analysed for the second half of the study period only. As depicted in Figure 3, housing returns have outperformed the returns from bonds and those from the overall index. Table 7 shows that a small position in real estate increased the monthly return from 0.8% to 0.85%; however, a corresponding increase in portfolio risk (standard deviation) kept the Sharpe measure unchanged. It is interesting to note that the effect of real estate investment in the portfolio of a Namibian investor is less attractive when compared with real estate returns elsewhere, as found by empirical research (Chua, 1999; Goetzmann, 1993; Lee, 2008; Masron & Fereidouni, 2010).
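The portfolio statistics discussed in this section (equal-weight returns, Sharpe measures, the correlation matrix and a minimum variance portfolio) can be reproduced mechanically once the monthly return series exist. The sketch below continues the hypothetical `returns` frame from the previous snippet; the Sharpe convention and the unconstrained minimum-variance formula are common textbook choices, not necessarily the exact ones used by the author.

```python
import numpy as np
import pandas as pd

def sharpe(portfolio: pd.Series, risk_free: pd.Series) -> float:
    """Monthly Sharpe measure: mean excess return divided by its standard deviation."""
    excess = portfolio - risk_free
    return excess.mean() / excess.std()

def min_variance_weights(rets: pd.DataFrame) -> pd.Series:
    """Unconstrained minimum-variance weights, w proportional to inv(Cov)·1, rescaled to sum to one."""
    cov = rets.cov().values
    ones = np.ones(cov.shape[0])
    raw = np.linalg.solve(cov, ones)
    return pd.Series(raw / raw.sum(), index=rets.columns)

three_assets = returns[["local_index", "overall_index", "bonds"]].dropna()
equal_weight = three_assets.mean(axis=1)            # equal position in the three assets
print(three_assets.corr())                          # correlation matrix, as in Table 2
print(sharpe(equal_weight, three_assets["bonds"]))  # Sharpe measure of the mixed portfolio
print(min_variance_weights(three_assets))           # weights behind a minimum variance portfolio
```

Adding a gold or house-price column to `three_assets` and re-running the same three lines gives the kind of with/without comparison reported in Tables 4-7.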
Conclusion As evident from the foregoing discussion, the Namibian overall index was affected by the global economic crisis to a large extent. As may be noted from the preceding section, including gold in the portfolio resulted in a much-needed diversification benefit for Namibian investors (monthly returns increased from 1.1% to 1.2% and the Sharpe ratio increased from 0.33 to 0.36). Direct investment in real estate was also found to provide a diversification benefit. However, real estate returns, as well as their contribution to diversification, were not very impressive when compared to the findings from other countries (Chua, 1999; Goetzmann, 1993; Lee, 2008; Masron & Fereidouni, 2010). In conclusion, it may be noted that although the historical evidence supports the diversification benefits provided by gold and real estate, this may not continue to be so in the future; therefore, investors have to continuously monitor their investment strategies and keep changing their positions in various assets to achieve the desired outcomes from their investments.
Table 1. Average monthly returns, standard deviation and return-to-risk ratio
Table 2. Correlations: average monthly returns on the overall index, the local index and bonds
Table 3. Average monthly returns and Sharpe measure for the minimum variance portfolio
Table 4. Portfolio returns including gold, September 2003 - December 2011
Table 5. Correlation between monthly returns on gold and other assets during the two periods
Table 6. Portfolio returns with and without gold for the second half of the study period
Table 7. Diversification effect of real estate investment on portfolio returns
v3-fos-license
2022-09-04T15:15:50.753Z
2022-09-01T00:00:00.000
252063508
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2305-6320/9/9/45/pdf?version=1662113443", "pdf_hash": "6f322d8d37baf257f5ff6d27bf3e46bedf9667a6", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:180", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "sha1": "f0a238847983820be9294c3377046b9d27863f85", "year": 2022 }
pes2o/s2orc
Cutting the First Turf to Heal Post-SSRI Sexual Dysfunction: A Male Retrospective Cohort Study Post-SSRI sexual dysfunction (PSSD) is a set of heterogeneous sexual problems, which may arise during the administration of selective serotonin reuptake inhibitors (SSRIs) and persist after their discontinuation. PSSD is a rare clinical entity, and it is commonly associated with non-sexual concerns, including emotional and cognitive problems and poor quality of life. To date, however, no effective treatment is available. The aim of this study was to retrospectively evaluate the potential efficacy of the different treatments used in clinical practice in improving male PSSD. Of the 30 patients referred to our neurobehavioral outpatient clinic from January 2020 to December 2021, 13 Caucasian male patients (mean age 29.53 ± 4.57 years), previously treated with SSRIs, were included in the study. Patients with major depressive disorder and/or psychotic symptoms were excluded a priori to avoid overlapping symptomatology, and potentially reduce the misdiagnosis rate. To treat PSSD, we decided to use drugs positively affecting the brain dopamine/serotonin ratio, such as bupropion and vortioxetine, as well as other compounds. The latter drug is known not to cause, and even to reverse, iatrogenic SD. Most patients, after treatment with vortioxetine and/or nutraceuticals, reported a significant improvement in all International Index of Erectile Function (IIEF-5) domains (p < 0.05) from baseline (T0) to 12-month follow-up (T1). Moreover, the only patient treated with pelvic muscle vibration achieved very positive results. Although our data come from a retrospective open-label study with a small sample size, drugs positively modulating the central nervous system serotonin/dopamine ratio, such as vortioxetine, could be used to potentially improve PSSD. Large-sample prospective cohort studies and randomized clinical trials are needed to investigate the real prevalence of this clinical entity and confirm such a promising approach to a potentially debilitating illness. Introduction Selective serotonin reuptake inhibitors (SSRIs) are among the most widely used psychiatric drugs, whether for on-label or off-label applications [1]. Post-SSRI sexual dysfunction (PSSD) is a set of heterogeneous sexual disorders that may arise during the administration of SSRIs and persist after their discontinuation. PSSD is an iatrogenic, idiosyncratic disorder, as well as a clear example of the post-drug syndromes [2]. It mainly develops following cessation of SSRIs, but other classes of antidepressant drugs have also been reported to cause the disease [3,4]. Tricyclics, serotonin and norepinephrine reuptake inhibitors (SNRIs) as well as antipsychotics have been reported to cause enduring SD, besides the well-known (although rare) long-term effects on the extrapyramidal system, including tardive dyskinesia [5]. Moreover, other non-psychoactive drugs, such as isotretinoin and finasteride, may cause long-lasting genital anesthesia, loss of libido and other SDs. This rare and/or under-reported clinical entity is still not recognized by many specialists in the field. In 2019, only PSSD gained official recognition, after the European Medicines Agency concluded that PSSD is a medical condition that persists after discontinuation of SSRIs and SNRIs [2]. This clinical entity is characterized by a wide array of symptoms that may persist for variable periods or even indefinitely [5,6].
In particular, PSSD includes genital anesthesia, anorgasmia, delayed orgasm, ejaculatory dysfunction and decreased libido, which may arise when SSRI treatment is started and, notably, continue after it has been stopped [7,8]. Notably, many patients define it as a "disconnection" between the brain and the genitals [9]. Additionally, a growing number of reports suggest additional non-sexual symptoms, including anhedonia, apathy and blunted affect [10]. It is important to differentiate between depression-related SD symptoms and those of PSSD, since some symptoms, such as genital anesthesia, seem to be more associated with PSSD than with depression [11][12][13][14]. In fact, depression is strongly associated with SD as part of the core depressive syndrome, in which sexual function may be diminished or absent. In both sexes, decreased sexual desire is the most prominent symptom in depression, and dysfunctions of sexual arousal and orgasm may also occur [15]. There may also be a bidirectional relationship between depression and SD, given that patients who are depressed might not seek sexual intimacy, and, conversely, patients with SD might experience reactive depression [16]. SD and depression are both syndromes that may be caused by the same dysfunctional brain systems. Indeed, the treatment of depression itself may result in iatrogenic SD, as dopamine is known to enhance libido and sexual arousal, whereas serotonin has an inhibitory effect on sexuality [16]. Although SD is a common side effect of SSRIs, its pathophysiology remains largely unknown, as does its treatment [17]. No rational or consistent treatment has been found for this disorder [18]. It is imperative for clinicians to be aware of the non-sexual symptoms and to be able to differentiate between PSSD-associated SD and depression-related SD, as their symptoms can be quite distinctive, with only a few overlapping. Consequently, there are no well-designed clinical trials, and a clear consensus on treatment modalities has not been reached so far. For these reasons, the aim of this study was to retrospectively evaluate the potential efficacy of the different treatments used in clinical practice in improving male PSSD, and to shed some light on the management of this growing problem. Study Population and Design This retrospective cohort study included patients with a diagnosis of PSSD who attended the neurobehavioral outpatient clinic of the IRCCS Centro Neurolesi "Bonino-Pulejo" (Messina, Italy) between January 2020 and December 2021. The patients enrolled were referred by general practitioners or other neurologists/psychiatrists to RSC (a recognized specialist in such syndromes), or they reached the clinic through information found on Google or other media/social sites. They were screened for current iatrogenic or psychogenic SD before the diagnosis of PSSD was made. In fact, the enrolled patients had almost all been diagnosed with and treated for different psychiatric disorders in other outpatient clinics, and came to our attention due to the onset of the symptoms after SSRI intake and/or their persistence after withdrawal. Then, a physical examination with a complete sexual hormonal profile (including testosterone, LH, FSH, prolactin and estrogens), as well as a psychosexological assessment (including sexual attitudes and habits, previous SD, response to sexual stimuli, libido and fantasies), was performed.
Demographic characteristics (sex, age, relationship status, country of origin, occupation and daily activities) and a detailed medical history of all participants were also collected. Diagnosis of PSSD was based on the existing criteria [19]. In particular, the following "core" inclusion criteria were used:
Necessary:
1. Prior treatment with a serotonin reuptake inhibitor.
2. An enduring change in somatic (tactile) or erogenous (sexual) genital sensation after treatment stops.
3. Enduring reduction in or loss of sexual desire.
4. Enduring erectile dysfunction.
5. Enduring inability to orgasm or decreased sensation of pleasure during orgasm.
6. The problem is present for ≥1 month after stopping treatment.
There should be:
7. No evidence of pre-drug sexual dysfunction that matches the current profile.
8. No current medical conditions that could account for the symptoms.
9. No current medication or substance misuse that could account for the symptoms.
Exclusion criteria were: (i) use, abuse or misuse of any drug potentially affecting sexuality; (ii) a clinical history of urologic, endocrine or systemic disease; (iii) severe depression or other concomitant psychiatric disorders (only patients scoring < 17 on the Hamilton Rating Scale for Depression, administered by a trained psychiatric rehabilitation therapist, were retained). In particular, to avoid overlapping symptomatology and reduce the misdiagnosis rate, patients previously diagnosed with major depressive disorder (MDD), bipolar disorder and psychotic symptoms were a priori excluded. Afterwards, the final sample was composed of individuals with anxiety disorders and/or adjustment disorders. Only patients with a high probability of PSSD according to the previously published criteria [19], as well as the clinical assessment, were included. Given that no recognized treatment for the syndrome exists, patients were treated based on their features, needs and expectations, and according to the few data coming from case reports and series. Usually, an antidepressant with a dopaminergic/noradrenergic profile or one antagonizing/positively modulating the serotonergic system (i.e., with fewer or no known SD side effects) was used, as well as nutraceuticals and/or PDE5 inhibitors. The Hospital Research Ethical Committee of the IRCCS Neurolesi approved this study (IRCCSME-CE-31/2021), and all participants gave their consent for data publication. Outcome Measure Patients were asked to fill out the International Index of Erectile Function-15 (IIEF-15) [20], a well-validated psychometric tool to measure sexual function (with regard to erection), at baseline and after a follow-up period of 12 months. The IIEF-15 is a multidimensional, self-administered investigation that has been found to be useful in the clinical assessment of erectile dysfunction (ED) and treatment outcomes in clinical trials. It has been recommended as a primary endpoint for clinical trials of ED and for diagnostic evaluation of ED severity [21]. A score of 0-5 is awarded to each of the 15 items that examine the 4 main domains of male sexual function: erectile function, orgasmic function, sexual desire and intercourse satisfaction. The questionnaire was administered in Italian. The IIEF-15 scoring ranges from 0 to 30: a score of 1-10 is indicative of severe ED; 11-16 moderate ED; 17-25 mild ED; and 26-30 absence of ED. Statistical Analysis Mean, standard deviation, median, lowest, highest, frequency and ratio values were used in the descriptive statistics of the data. The distribution of the variables was assessed by the Kolmogorov-Smirnov test. 
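Purely as an illustration, and not the authors' analysis code, the short Python sketch below shows how the IIEF severity bands quoted above can be applied to individual scores, and how a paired non-parametric comparison of baseline and follow-up scores might be set up for a small, non-normally distributed sample; the example scores and the use of NumPy/SciPy are hypothetical assumptions made only for this sketch.

```python
# Minimal illustrative sketch; not the authors' analysis code.
# The severity bands follow the cut-offs stated in the text; the paired
# scores below are hypothetical placeholders for a small sample (n = 13).
import numpy as np
from scipy import stats


def classify_iief(score: int) -> str:
    """Map an IIEF score to the ED severity band given in the text."""
    if 26 <= score <= 30:
        return "no ED"
    if 17 <= score <= 25:
        return "mild ED"
    if 11 <= score <= 16:
        return "moderate ED"
    if 1 <= score <= 10:
        return "severe ED"
    raise ValueError("score outside the 1-30 bands listed in the text")


# Hypothetical paired scores: baseline (T0) vs. 12-month follow-up (T1).
t0 = np.array([8, 6, 9, 10, 7, 5, 9, 8, 6, 10, 7, 9, 8])
t1 = np.array([18, 12, 22, 26, 9, 11, 20, 19, 14, 27, 8, 21, 17])
print([classify_iief(s) for s in t1])

# Normality check on the paired differences (Kolmogorov-Smirnov against a
# fitted normal), then a Wilcoxon signed-rank test, which is appropriate
# for small, non-normally distributed paired samples.
diff = t1 - t0
ks_stat, ks_p = stats.kstest(diff, "norm", args=(diff.mean(), diff.std(ddof=1)))
w_stat, w_p = stats.wilcoxon(t0, t1)
print(f"KS p = {ks_p:.3f}, Wilcoxon signed-rank p = {w_p:.3f}")
```

In the study itself, of course, the comparison was run on the patients' actual questionnaire scores rather than on placeholder values.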
Due to the non-normal distribution of the study variables and the small population, we performed the Wilcoxon signed-rank test in the analysis of the dependent quantitative data, using the R software, version 4.1.3 [22]. A p-value < 0.005 was considered statistically significant. Results Of the 30 patients referred to our neurobehavioral outpatient clinic from January 2020 to December 2021, 13 Caucasian male patients, with a mean age of 29.53 (±4.57) years, were included in the study. The main sociodemographic and clinical characteristics are shown in Table 1. Less than half of the patients complained of SD alone (with anorgasmia and loss of libido being the most frequent symptoms), whilst in about 23% SD was associated with cognitive problems, in 8% with emotional problems, and in about 25% with both cognitive and emotional concerns. In all cases, the enduring SD was caused by an SSRI, with the more selective agents (i.e., citalopram and escitalopram) being the most commonly implicated (Table 2). Notably, different strategic treatments were used to overcome PSSD, with vortioxetine being the most common and effective one. After the various treatments, the IIEF-15 score improved significantly (p < 0.05) in the majority of the sample, except for two cases, one treated with vortioxetine and nutraceuticals and the other with bupropion, tadalafil and a nutraceutical. At T0, nearly all PSSD patients treated with vortioxetine (10-20 mg according to each patient's response) started from a severe level of SD, according to the IIEF-15. At T1, we observed a significant improvement in the IIEF score, with a substantial reduction in SD (with regard to anorgasmia), achieving a high percentage of therapeutic success (from 33.3 to 60%) (see Table 3). Therefore, most of the patients (10/12) reported an improvement in the main sexual and non-sexual symptoms as per the IIEF score, except for two cases in which the therapeutic success rate was 0% and no score increase was found. Moreover, in the only drug-resistant patient, who received Vibra-Plus therapy, we observed an improvement of 50% in the IIEF-15 score at T1. Discussion This real-life retrospective study describes a small cohort of patients diagnosed with PSSD according to the recently published selection criteria [19] and treated with strategic pharmacological and non-pharmacological interventions. Sexual dysfunction can appear while on treatment and persist after discontinuing any serotonin-reuptake-inhibiting drug [23]. There is a growing awareness that a substantial number of medicines have either positive or negative effects on sexual functioning [24]. These include antibiotics, antihypertensives, lipid-lowering agents, medicines affecting endocrine systems and others [24]. Notably, psychotropic drugs, targeting serotonin and dopamine pathways, are widely recognized as the main drugs responsible for SD. The treatment approaches adopted to overcome iatrogenic SD have been largely aimed at reversing the acute sexual effects rather than reversing the mechanism that leads to enduring effects [25]. Our preliminary data advance the research in the management of PSSD, as the clinical use of vortioxetine, as well as bupropion (although in fewer cases), in association with nutraceuticals, might be considered a potentially effective treatment of this enduring problem. 
In fact, in nearly all patients treated with these strategic interventions, we observed a positive change in the level of SD, according to the IIEF-15 score, as well as an improvement in non-sexual side effects (evaluated by a specific psychosexological interview). However, in the current literature, there is still no definitive treatment for PSSD. Some authors suggest that a treatment option for patients might be to take bupropion or nefazodone, which are antidepressants that are known to cause few or no sexual adverse effects [25]. In fact, bupropion does not have serotonergic activity and, hence, does not affect sexual function in patients. According to the literature, patients treated with bupropion report less SD, and also document a recovery in satisfaction, desire and frequency of sexual activity [26], given that the drug has a positive effect on dopaminergic pathways. In line with our results, Jacobsen et al. (2015) showed that switching antidepressant therapy to vortioxetine may be beneficial for patients experiencing SD during antidepressant therapy with SSRIs [27]. Vortioxetine has been approved for the treatment of adults with major depressive disorder (MDD) since 2013, and it has subsequently been shown that the drug may be particularly beneficial for specific populations of patients, including those with treatment-emergent SD and patients experiencing certain cognitive symptoms [28,29]. This is possible because of the multimodal action of vortioxetine; indeed, it is a serotonin (5-HT) transporter inhibitor that also acts on several 5-HT receptors, such as the 5-HT3 and 5-HT1A receptors [30]. This is why the drug might have led to such positive results in our sample, even though the patients were affected by anxiety/adjustment disorders. One may be concerned that treatment with bupropion or vortioxetine is more likely to treat ongoing depression, including symptoms of SD, anhedonia, apathy, cognitive symptoms and emotional blunting, than to reverse a postulated effect of an SSRI after discontinuation [31]. Nonetheless, we believe that SD and related problems were more likely due to the enduring iatrogenic effect, since our patients were properly assessed at baseline, and diagnosed with PSSD according to the currently available criteria [19] and after an accurate psychosexological anamnesis. The use of turmeric could have a positive effect on sexual function in some cases, since it is known to help increase BDNF and reduce inflammation, and thus also improve depression [32]. Moreover, the compound we used might have also acted on mood and anxiety, according to the well-known bi-directional relationship between sexuality and depression [33]. Men's sexual functioning could be improved by the use of nutraceuticals, as they may increase libido and genital arousal, and they may be considered as an alternative treatment of PSSD. In a previous case report by Calabrò et al., a dietary supplement called EDOVIS was used to treat PSSD [34]. It is composed of L-citrulline, Tribulus terrestris, Andean maca, damiana, muira puama, and folic acid, which are useful for the physiological sexual activity of males [35]. Today, nutraceutical and functional food components could also represent a strategic, holistic approach to treating SD [36]. The key mediator of the nutraceutical's action, i.e., nitric oxide (NO), is the pivotal factor involved in the endothelium-dependent relaxation of the human corpus cavernosum, potentially boosting erectile function and genital sensation [37]. 
Nutraceuticals and dietary supplements are an accessible alternative that men with ED use to attempt to address their SD, as reported in a recent review [38]. In particular, the main nutraceuticals included a series of natural components: ginseng, composed of biologically active compounds called ginsenosides and ginseng saponins [39]; the amino acid L-arginine, which is a precursor of NO and is converted to NO by NO synthase [40]; Tongkat Ali, an aphrodisiac herbal extract valued for its ability to increase testosterone levels [41]; horny goat weed, whose bioactive ingredient is icariin, which has historically been used as an aphrodisiac and herbal treatment for ED in Chinese men [42]; Tribulus terrestris, an herbal plant that has been claimed to improve physical performance and sexual activity [43]; maca, a vegetable derived from the Lepidium meyenii plant that has historically been used as both a nutritional supplement and a fertility enhancer [44]; zinc, a mineral able to improve erectile function [45]; and damiana (also known as Turnera diffusa), a well-regarded aphrodisiac ingredient that stimulates sexual desire and performance [46]. Based on such data, our patients were treated with EDOVIS, with some positive results when it was used alone or in combination with, or after, other compounds. As is known, the market for dietary supplements and nutraceuticals taken to improve the sexual health or psychological well-being of the customer is enormous. However, after accidental and excessive intake of these supplements, some side effects, such as nausea, diarrhea, vomiting and cramping abdominal pain, have been reported [47]. More attention to adverse effects and potential interactions is needed in order to prevent pharmacological interactions and potentially serious medical outcomes. Waldinger et al. reported that physical therapies, such as low-power laser irradiation or phototherapy directed toward the scrotal skin and the shaft of the penis, alleviated penile anesthesia, anejaculation and erectile dysfunction symptoms in a male patient with PSSD [48]. In our sample, a patient with no response to previous pharmacological treatment received an intensive alternative treatment using Vibra-Plus, with a beneficial effect on his sexual symptoms, including sexual hypoesthesia and anorgasmia. In particular, muscle vibrations (MVs) have already been used to manage different pelvic floor dysfunctions due to diverse pathologies [49]. There is converging evidence that MV provides the central nervous system with strong proprioceptive inputs that reach the sensorimotor cortices. This may help to modify corticospinal excitability, to favor intracortical inhibitory systems and to induce better muscle synergy patterns by acting on the excitability of spinal motoneurons and interneurons. MV may directly act at the spinal level, reducing abnormalities of spinal excitability and restoring abnormal reciprocal and presynaptic inhibition mechanisms [50], also leading to an improvement in genital sensation. In the peripheral nervous system, by contrast, the improvement in erection may depend on the effects of MV on the specific properties of the muscles and surrounding connective tissues (including viscoelasticity), as well as on vasodilatation [51]. Moreover, the use of MV plus other kinds of non-invasive neuromodulation could be taken into consideration as a future and promising treatment, as demonstrated by previous works [52]. 
Evidence from transcranial magnetic stimulation studies has demonstrated that focal MV increases or decreases motor-evoked potential amplitude and short intracortical inhibition strength in the vibrated muscles, while opposite changes occur in the neighboring muscles [53]. In this way, pelvic MV may contribute to regulating the contraction and excitability dynamics of the pelvic floor muscles involved in erection. The availability of only a few non-validated approaches highlights the difficulty in choosing a treatment, which must be targeted at the individual level in patients with PSSD. Furthermore, the psychological and behavioral component cannot be underestimated in this kind of patient. Indeed, cognitive-behavioral therapy has also been used by psychiatrists to help patients reach a better understanding of their condition and cope better with their situation. Cognitive-behavioral therapy is useful for dealing with the negative thoughts that develop in many patients, such as sexual inadequacy and low self-esteem [54,55]. Partners need to be involved in this approach because they are collaterally affected by PSSD. Sex therapy and couples counseling should aim to inform the partners that the sexual dysfunction is a side effect of the medication and not a lack of interest. In addition, such behavioral therapies can provide emotional and psychological support for patients and partners [55]. We are aware that the diagnosis of PSSD is not currently recognized by the DSM-5, and that its prevalence is not known because of the lack of well-designed studies. Indeed, depression is frequently associated with SD in both men and women. Clinicians should consider the bi-directional association between depression and SD. Patients reporting SD should be screened for depression, whereas patients presenting with symptoms of depression should be routinely assessed for SD [56]. However, PSSD appears to be a different clinical entity, as recognized by the European Medicines Agency in 2019 and by the current criteria in a consensus paper [19]. Moreover, none of the patients had MDD (an exclusion criterion), which might otherwise have accounted for the enduring sexual and affective symptomatology more than adjustment and anxiety disorders. Unfortunately, given that the psychiatric diagnosis was made before our assessment for PSSD and the stressors were identified and managed by other clinicians, we are not completely able to rule out their potential role in iatrogenic SD. In addition, it is not possible to ignore that young people without any history of depression or use of antidepressants, trauma or anxiety frequently present with SD for which no cause can be identified. No prospective systematic study has been carried out that starts patients with a psychiatric syndrome, such as MDD, on SSRIs and then follows the persistence of SD or the relapse of symptomatology after medication discontinuation, nor is one likely to be feasible given the rarity of this complaint. This important issue should be addressed by future studies to better understand this "new" clinical entity and the subtending pathophysiological mechanisms. Moreover, we have described and treated patients with suspected iatrogenic PSSD independently of their psychiatric diagnosis, although this can make the sample less homogeneous. Furthermore, it could be useful to report in future works the correlation between SSRI use and testosterone levels, since abnormalities in sexual hormone levels have been reported after drug intake [57]. 
Nonetheless, our patients' sexual hormones at assessment were within the normal range. The study had some other limitations. First of all, the retrospective design prevented us from developing any a priori hypothesis. However, this was an open-label observational study performed in a real-life context that could be the basis for planning future randomized clinical trials. The small sample size was another limitation, but it is not easy to collect larger homogeneous cohorts, as the disease is still unrecognized and few papers exist to guide the right diagnosis. Other outcome measures, such as the Patient Health Questionnaire-9 and the Generalized Anxiety Disorder 7-item scale at baseline and after the intervention, as well as assessment of change over time, would have been more helpful than just the initial HAM-D score in supporting the diagnosis and in assessing whether and to what extent improvement continued. However, we administered this scale to exclude patients with severe depression/somatization symptoms at assessment, beyond those who were a priori excluded due to being affected by MDD. Future trials should address this important issue, and more systematic large-sample cohort studies on patients on SSRIs are necessary to investigate the "real" prevalence of PSSD. Moreover, since this was a retrospective small-sample real-life study, it was not possible to compare the efficacy of the different compounds, either used alone or in combination, in improving PSSD. This is why we have only reported individual patients' therapeutic success rates, but larger studies are needed to solve this important issue so as to give indications on the best therapeutic approach. Finally, a pharmacogenetic assessment to see if the patients were slow metabolizers might have been helpful to better understand the cause of the clinical entity. As a strength, our sample was homogeneous, as in all of the patients PSSD was caused by an SSRI, and the diagnosis was made after a strict application of the current criteria and an accurate psychosexological anamnesis and clinical investigation. Conclusions As far as we know, this is the first study that has attempted to identify therapeutic intervention strategies for enduring sexual dysfunction related to the use of SSRIs. Although our data come from a retrospective open-label study with a small sample size, drugs positively modulating the central nervous system's serotonin/dopamine ratio, such as vortioxetine, could be used to potentially improve PSSD. Larger randomized clinical trials are needed to confirm our data and find promising neuropharmacological approaches to better manage this potentially debilitating illness.
A principled and cosmopolitan neuroethics: considerations for international relevance Neuroethics applies cognitive neuroscience for prescribing alterations to conceptions of self and society, and for prescriptively judging the ethical applications of neurotechnologies. Plentiful normative premises are available to ground such prescriptivity; however, prescriptive neuroethics may remain fragmented by social conventions, cultural ideologies, and ethical theories. Herein we offer that an objectively principled neuroethics for international relevance requires a new meta-ethics: understanding how morality works, and how humans manage and improve morality, as objectively based on the brain and social sciences. This new meta-ethics will simultaneously equip neuroethics for evaluating and revising older cultural ideologies and ethical theories, and direct neuroethics towards scientifically valid views of encultured humans intelligently managing moralities. Bypassing absolutism, cultural essentialisms, and unrealistic ethical philosophies, neuroethics arrives at a small set of principles about proper human flourishing that are more culturally inclusive and cosmopolitan in spirit. This cosmopolitanism in turn suggests augmentations to traditional medical ethics in the form of four principled guidelines for international consideration: empowerment, non-obsolescence, self-creativity, and citizenship. International neuroethics The scientific foundations of neuroethics are structured upon advances in the brain and behavioral sciences, and in the novel technologies that allow access, evaluation, and manipulation of the brain and its functions, inclusive of the amalgam of conscious processes, cognitions, and emotions that contribute to the 'mind' and/or the 'self'. The ever-broadening use of neuroscience and neurotechnology arouses scrutiny of longstanding 'common sense' and philosophical concepts of the relation of brain to mind, and compels inquiry into the validity and value of these ideas, and their implications, in the scientific, medical and socio-cultural realms. It is in this critical light that the philosophical foundations for neuroethics are also gradually, yet steadily, organizing. (Examples of broadly philosophical treatments of neuroethics include Levy [1] and the work of Racine [2]). These foundations are based in part upon extant constructs of science, mind, self, and social relations, and yet, we opine that there is an increased need for their reexamination and perhaps reconstruction in light of new information from the brain sciences, to update epistemological, anthropological, and ethical norms. Better understanding of how those normative sources have functioned for humanity to date, especially because they can now be openly scrutinized, can then be leveraged in formulating concepts, constructs, and constraints regarding the ways that neuroscientific research could and should be conducted and applied in medicine to evoke effect(s) within cultures and the social sphere. Clearly, neuroethics will be an essential part of any such view [3][4][5]. Prescriptions for what ought to be done about these implications soon follow. Thus, neuroethics will be inescapably prescriptive, and justifications for those prescriptions will rely on normative premises. Normative premises are abundantly available: social, moral, and legal norms abound from all directions and every culture. Neuroethics might remain prescriptively splintered by such normative diversity and conventionality. 
Therefore, we ask whether neuroethics, as a philosophical field, can define and settle on core norms to take a unified principled stance. If it can, where will those normative premises be found, which ethical principles for neuroethics would be wise, and what policy and legal regulations would follow from such ethical principles? We assert that pondering a unified principled stance for neuroethics is not an idle speculative venture. The field of neuroethics is confronted with urgent international questions of how to deal with brain research and the uses of novel neurotechnologies originating in many countries and quickly crossing borders, whether from benevolent, commercial, or even malevolent intent. Looking globally, neuroscience and neurotechnology are no longer the province of Western nations, as shifts in scientific, technological and economic capabilities are ever more enabling non-Western countries to become viably engaged in a growing international market of neuroscience (currently estimated at greater than $150 billion annually). This shifting balance will necessitate addressing ethical, legal, and social issues incurred through the use of neuroscience and technology not only in developed nations, but in those that are developing and under-developed, as well. The worldwide discussion of neuroscience and neuroethics has swelled, and will undoubtedly continue to increase [6,7]. Calls for a global neuroethics relevant to upgrading international policies and laws are mounting accordingly [8][9][10][11]. As a field and set of practices, neuroethics should be involved in these international deliberations, because its theoretical resources allow direct examination and evaluation of the human being, and the human predicament (of disease, illness, suffering and finitude), from a metaphysically and methodologically naturalistic grounding and perspective that is 1) well comported with medicine, 2) conciliatory toward human cultural diversity and 3) not incompatible with theological views. Accordingly, we further urge that neuroethics should forge philosophical foundations and theoretical ethics that are as universally and objectively valid as science itself. To this end, we address the following core issues. How might neuroscientific information about putative bases of moral cognitions and actions be engaged to establish a basis for the development of ethical systems and practices that are naturalistically grounded? Can such neuroethical deliberations be guided by more than just one culture's ethical ideals in order to guide the ways that neuroscientific research is conducted and applied on the world stage? We affirmatively answer these questions in five stages. First, the primary modes of prescriptive neuroethics are outlined, showing how their argumentative forms admittedly fit better with social conventionality than with ethical theorizing. Second, a path for neuroethics to transcend inadequate ethical theorizing and outdated meta-ethics is cleared, a new meta-ethics for neuroethics is revealed, and hopes are posed that neuroethics can undertake ethical theorizing. Third, neuroethics is shown to be compatible with a modest type of cosmopolitan ethics that we believe will be important to a broader, more naturalistic, and culturally inclusive ethico-legal discourse. Fourth, in the spirit of cosmopolitanism, (four) principled guidelines for a more internationally capable neuroethics are proposed for consideration: Empowerment, Non-obsolescence, Self-creativity, and Citizenship. 
Finally, this philosophical path from 'synapse to society' and on to a principled international neuroethics is defended against expected objections. Prescriptive neuroethics Pro Roskies [12], neuroethics has inherently (if not axiomatically) embraced two central matters: first, studying neural function to understand how our species, and others, developed and manifest capacities for sociality and morality; and second, undertaking ethical thinking about researching and modifying neural structure and functions of cognitions, emotions, and behaviors using the techniques and technologies of neuroscience. The first mode of neuroethics explores how new knowledge about the functions of the brain may impact wider understandings of self, social relations, and culture. The second mode of neuroethics explores how such self- and socio-cultural understandings should be applied to judging the implications and potential effects of neuroscientific research and its employment in various domains of the social sphere. Pondering how new neuroscientific information about the processes of intentional volition may indicate modifications to criteria for criminal responsibility and just punishment is an example of the first mode; pondering whether convicted criminals should receive novel brain modifications to diminish their anti-social conduct is an example of the second. Both modes have factually descriptive components [2]; both are normatively prescriptive as well. The prevalence of prescriptivity throughout neuroethics deserves more attention. The dual-aspect nature of neuroethics is generally acknowledged, but the disadvantages of bifurcating neuroethics into 'traditions' such as a 'neuroscience of ethics' contrasted with an 'ethics of neuroscience' should also be recognized [13][14][15]. Distinctions can inflate into dichotomies, especially where the gravity of traditional dichotomies exerts philosophical pull. The 'is-ought' divide can particularly sway an ethics of neuroscience towards enveloping all of the prescriptive work. On the contrary, the way that the neuroscience of ethics recommends adjustments to our conceptions of self, morality, and society necessarily involves sensitively important normative and ethical issues [16]. A non-normative and 'purely descriptive' neuroethics only appears feasible where some common notion of sociality or morality is appraised as unquestionably correct and made the object of research. This 'pure' description of 'the way humans do things' hides its normative prescriptivity behind a façade of unrecognized cultural conventionality. As soon as this 'purely descriptive' neuroethics is forced to notice how differing conceptions of sociality and morality are available for selective research, its purity is adulterated by normativity. Furthermore, any neuroethical judgment that sociality or moral responsibility needs to be re-conceived in light of fresh neuroscience exposes how this 'descriptive' neuroethics is already on prescriptive territory, since a specific norm of sociality or moral responsibility is getting selected for scrutiny, and some alteration to that norm is recommended for its better 'fit' with the current recognized facts about brain function afforded by neuroscience. Both modes of neuroethics are unavoidably prescriptive. 
Furthermore, the dual modes of neuroethics must be intricately connected across both descriptive and prescriptive dimensions, since novel self-conceptions must affect methods of doing ethics, which in turn will change how ethical norms are applied to proposed brain technologies that can further modify (self)-conceptions of humanity. To avoid chaotically changing everything at once, philosophical reflection typically approaches matters piecemeal. For both modes of doing neuroethics, even the most sophisticated arguments yielding prescriptions can exemplify a basic form. For the first mode, some item of neuroscientific knowledge is premised in order to justify modifying certain socio-cultural views. Hold the science steadily in view, and recommend socio-cultural change to keep a good fit with the realities science affords. For the second mode, some view of the human being and/or socioculture is premised in order to justify a verdict on the appropriateness of employing a neuroscientific technique or technology in research or practice. Here, hold the self-socio-cultural view steadily in view, and use those norms to evaluate neuroscientific change(s) to brain structure and functions of cognition, emotion and/or behavior. Both modes basically hold one side of the neuroscientific/self-socio-cultural formula steady, and recommend what must be done (or not done) to the other to maintain some sense of balance or coherence. At first glance, the philosophical quest for coherence and stability sounds reasonable enough. However, abundant resistance arises from all directions to obstruct revisions to self-socio-cultural matters, or to prevent deployment of novel technologies. Prevailing cultural traditions and ideologies (including folk psychologies, common morals, religious traditions, economic and political systems, etc.) mount resistance to modifying conceptions of the human being/society/culture, especially where those conceptions have normative dimensions. Struggles over brain science that might be relevant to sensitive matters such as gender or sexuality, family bonds and roles, personhood status and autonomy (e.g., of the mentally ill or criminals) supply just a few examples. Struggles just as easily erupt over opportunities to utilize novel technologies. On most any issue, opposed positions tend to develop and harden: one camp conservatively rejects using a new technology by appealing to stable tradition, while the other camp progressively recommends a novel social structure made possible by some new technology [17]. Both camps appeal to anything useful at hand, such as moral intuitions, common social standards, cultural norms, and legal rules. Indeed, so many of these are available for recruitment by both sides that neither camp may prevail, resulting in deadlock. Only where there is wide agreement on priorities would we expect to see somewhat easier convergence on accepting some change in views of the human being, society and culture, and the use of new technologies. Specifically, a society will more quickly and compliantly accept new life technologies when that society is already highly committed to some important goals, such as lifespan extension, mental health, or crime prevention. Where neuroethics is concerned, public justifications for using neurotechnologies to modify physiological functions and behaviors will largely take a 'socially conventional' form, as a society appeals to what it considers as valid and binding norms and goals. 
Without question, socio-cultural norms can and do afford a vast amount of practical work and public benefit. In their more rigid form as legal statutes, such norms are often quite proper, and arguably necessary for social order. Prescriptive judgments coalesce into legal and policy regulations as needed. Neuroethics must pay due attention to cultural traditions, prevailing ideologies, and social conventions. Indeed, much of neuroethics will remain beholden to those powerful sources of norms and ideals, making tacit or explicit appeals to them in the course of urging prescriptive judgments. Yet, however attractive and useful, these normative sources do not supply universally accepted principles: people disagree within societies, societies disagree with each other, and entire cultures gradually change over time. Just because a large part of a society, or much of a culture, happen(s) to prefer things a certain way does not automatically make it right, good, and/or just. What can appear to be the 'strongest' ethical arguments are really only locally and modestly prescriptive, and permitting majority-based social standards to speedily decide matters may actually perpetuate deep ethical disagreements rather than resolve them. If philosophical foundations of/for neuroethics remain at this socio-cultural level, argumentative stalemates will be frequent, and even where broad norms weigh in favor of one position, those norms will still be only socio-culturally relative and such positions have no wider 'objective' status. Prescriptive neuroethics at its best may remain philosophically fragmented, with an objectively principled neuroethics remaining out of reach. Of course, such a neuroethics would hardly be the only 'applied' ethics to be so fragmented; there is a growing recognition of irreducibly pluralistic bioethics in general [18][19][20][21]. Offers to rescue neuroethics (and bioethics) from this fragmented situation have come from those claiming that there are universally valid norms for all humanity. Theologically inspired offers rarely comport well with the scientific worldview, but even if that clash could be overcome, religious traditions tend to disagree with each other over ethics as much as cultures do. The naturalistically minded philosophers among the theological community often appeal to preserving 'humanity', 'human nature', 'human virtues', and the like. Their naturalism, however, prevents this strategy from rising above conventionality as much as hoped. In this Darwinian age, such essentialist appeals can only amount to aggregating nicer humans into one set and pointing to what many of us happen to be doing well [22][23][24]. For example, repudiations of futurist plans of trans-humanist agendas and post-humanisms typically make claims that either amount to "what humans have been doing as morally right is a path from which no one should stray," or "matters should remain as they have been." (That is why the divergent values of some future 'post-human' society are typically disregarded by such conservative arguments.) Promoters of trans-humanism and post-humanism are quite capable of appealing to selected 'universal' norms of humanity as well, but closer examination of this strategy exposes how these norms tend to be conveniently pre-selected from special phases of civilization and then 'discerned' within all humanity [25]. To be sure, philosophy has additional resources. 
Established ethical theories, such as various deontologies, utilitarianism, contractarianism, and virtue ethics, may be ways to surmount conservative-progressive stand-offs, and rise above socio-cultural conventionalism altogether. These ethical theories lay claim to some higher 'objective' status, but do they really tend to end controversy? Far from it; the spectacle of argumentative standoffs among ethical theories lends applied ethics its characteristic adversarial tone. Any agreeable convergence among rival ethical theories seems more like a matter of chance than design. Even those ethical theories proud of a basis in 'reason' do not precisely agree on how to best be rational. Does prescriptive neuroethics have any further options beyond settling for socio-cultural fragmentation, seeking humanity's 'genuine' values and virtues, or following the lead of one or another established ethical theory? As a field, neuroethics has an opportunity to transcend these alternatives. By taking the social, behavioral, and brain sciences most seriously, the first mode of neuroethics has access to knowledge about how humans cognize the world, undertake their conduct, engage in relationships, and structure and manage social and moral responsibilities. The second mode of neuroethics has the capacity to apply such knowledge for evaluating the methods used for ethically judging proposed modifications to ourselves and our societies. In short, we opine that there is nothing about how we can do morality, make ethical judgments, change moral habits or social roles, or re-design societies that is theoretically off-limits or beyond the purview of neuroethics. This burdens neuroethics with the requirement of being consistent with several sciences (bringing attendant concerns discussed in the next section), but it simultaneously loosens neuroethics from complete dependence on folk psychologies, social conventions, cultural standards, obsolete epistemologies and theories of mind, traditional philosophical and religious ethics, and outdated meta-ethics. Has neuroethics fully realized the extent of a proper domain, and the potential capaciousness of its power? If not, neuroethics will remain weakly prescriptive, but it will obtain its value premises on loan from outside sources. Neuroethics can make appeals to intuitions, social conventions, legal statutes, and ethical theories too; indeed, these inherited argumentative habits from older versions of applied ethics (such as medical ethics) nearly exhaust the neuroethics literature to date. But we believe that a much wider field of action awaits neuroethics: the potential to be served by, and serve as, a new meta-ethics. A new meta-ethics for neuroethics Meta-ethics involves clarification of any linguistic, epistemic, psychological, or even metaphysical presuppositions and commitments involved with moral thinking and practice. Ethical theories tend to append some meta-ethics to their systems since each theory relies on a characteristic view of what morality is and how morality works, views contested by rival ethical theories. Before the advent of the behavioral and brain sciences, such meta-ethical presuppositions were just that: sheer assumptions. 
Philosophers and theologians 'found' them grounded in all sorts of places, such as folk intuitions, grammars, linguistic definitions, 'common sense' morals, socio-cultural norms, and legal regulations, along with whatever the 'best' sciences or theologies of the day said about free will, human nature, natural law, speculative metaphysics, or divine commands. Over the centuries, typical pronouncements of meta-ethical principles have really amounted to little more than personality traits, linguistic habits, folk psychology concepts, comfortable moral intuitions, race/class/gender prejudices, theological dogmas, armchair speculations, and so forth. Ethical theories and meta-ethics have long mapped out morality and moral concepts in the absence of adequate biological, sociological, and psychological knowledge about origins of human sociality, the human capacity for doing morality, and the ability to modify moral and social norms. We posit that a new scientific meta-ethics can gain independence from inherited intuitions, social conventions, and older ethical theorizing. Neuroethics will engage the social, behavioral, and brain sciences to erect the foundations of a new meta-ethics. Neuroethics need not be another 'applied ethics' beholden to outdated metaethics or ethical theories; nor will neuroethics be imperiously told (by any postmodernist meta-ethics, for example) that bioethics cannot attain any measure of objectivity, or be cowed by an analytic meta-ethics into abandoning empirical ethics as a fallaciously naturalistic project [26]. For neuroethics, neuroscientific understandings of the subject matter, namely actual human sociality and moral cognition, take priority. In a similar manner, the behavioral and cognitive sciences are supplying much-needed tests and correctives to epistemologies, theories of learning, and metaphysical notions of the body-brain-mind relationship [27,28]. Neuroethics could exemplify how to fruitfully apply a new scientific meta-ethics because it addresses and treats three matters that are crucial to any meaningful and authentic exploration of human life: namely, moral capacity, moral practice, and moral principle. What does 'morality' mean to neuroethics? Roughly, the naturalistic understanding of human morality takes it to be a socially sustained practice, found in all (or nearly all) cultures, in which individuals voluntarily and habitually conduct themselves in accord with understood norms promoting personal fitness for social interactions and regulating public conduct of wide social concern. People participate in a morality not only by regulating their own behavior in social relationships, but also by assisting in the needed enforcements of moral norms, and by teaching moral norms and the means of enforcement to those who need moral education. The universality of this social technology of morality indicates its significant and longstanding utility for social groups small and large (especially when supplemented by the far older norms of kinship and the much younger norms of law) [13,29]. Let us sketch neuroethics' approach to moral capacity, moral practice, and moral principle in that order. First, utilizing knowledge from the biological, cognitive and social sciences, neuroethics applies understandings of neural substrates and mechanisms of cognition to investigate how humans have the capacity to be social and moral [3,13]. 
Any theories involving mistaken presumptions about how sociality works, how we must think about morality, and the cognitive resources available for managing society or being moral, will be disproven and then suitably revised or speedily eliminated. That ideologies and philosophies have a concern for actual human morality means that they can be held accountable by scientific information about human cognition and sociality. Theoretical recommendations about people being moral and becoming more moral must make at least four kinds of presumptions about (1) how people are already doing morality, in some specified sense of what it means to be 'moral'; (2) the cognitive/emotive processes that people undertake when trying to be moral; (3) how certain changes to these processes are possible; and (4) how some of these changes can result in a person's conduct becoming more moral. Theories making these presumptions can hence be discredited in any of four ways: (1*) a theory's specified sense of 'morality' may not resemble how humans generally do morality; (2*) a theory's view of the cognitive/emotive processes involved with doing morality may be inaccurate or entirely mistaken; (3*) a theory may be proposing modifications to processes of doing morality that are not in fact possible; or (4*) a theory's view that possible modifications to moral processes are effective for doing morality better cannot in fact be that effective. Social ideologies and ethical philosophies are not immune from evaluation and criticism from the behavioral and brain sciences. Ethical theories that can be adapted in light of scientific knowledge will enjoy a deserved advantage [30]. Second, from this sound(er) basis in reliable theorizing about sociality and morality, neuroethics can expand its inquiry into any and all social and moral practices, carefully evaluating them for their consistency with brain realities, and recommending modifications where indicated. Expectations that people should be doing things a certain way should align with the ways that (their) brains can actually function. Neuroethics (like the brain and behavioral sciences generally) will be perpetually confronted by cultural ideologies, legal and political philosophies, ethical theories, meta-ethical systems, and the like, each protesting that factual brain science is largely irrelevant to the normative task of making people into who they ought to be. While neuroscience does not, and/or cannot purport to, prescribe and proscribe actions or establish ideals, it (and neuroethics) can infer and inform what, why and how neural functions and effects can enable embodied organisms (like humans) to sense, perceive, emote, decide and act, and this is important to the establishment of norms and ethics about the ways we relate. Furthermore, guiding people's lives implies shaping minds, so ignorance of the brain is no excuse. Any movement of social reform, for example, should partner with neuroethics in order to determine how modifications to brain structure and function (by whatever means, from inter-socially pedagogical to neurologically pharmaceutical) can affect our personal capacities, interpersonal relationships, and moral practices. More generally, neuroethics is usefully central to inquiries into the potential wider impacts of modified mind/brain capacities and practices on all other moral, social, economic, legal, political, cultural, (etc.) realms of life [3,5,13,31]. 
Third, proceeding from some sense of human moral cognition and action, and how adjustments to ourselves and our social practices may have wider implications, neuroethics can help formulate principled judgments about whether and how modifications to existing moral and wider social practices ought to be made. Having participated in the comprehension of moral capacities and the reformulation of sound ethical theorizing, neuroethics can proceed to an articulation and application of improved ethics to concrete problems arising in and from brain research and new neurotechnologies that are coming fast to the global stage. Again, established ethical systems will claim priority here, offering to stock neuroethics with their principles, but such principles can be freely accepted or declined as appropriate. Unlike philosophies that prefer to isolate objective morality and its supposedly rational basis from conventional ethics in its cultural settings, we reserve for neuroethics a meta-ethical stance that takes the cognitive and social sciences seriously in their investigations of the embodied human being embedded within socio-cultural environments [3,4,31]. This opportunity might first appear like a return to the option of socio-cultural conventionality, but starting from science in fact opens the possibility for a far more objective foundation for neuroethics than the 'objectivity' promised by older ethical theories. Neuroethics and moral naturalism Neuroethics is contributing to the project of moral naturalism that aims at scientifically understanding how humans practice sociality and morality in their cultures. Moral naturalism must not be confused with moral realism: when a moral naturalist proposes to study human morality, there is no specific code of morality intended and no commitment is made about whether one or another morality is 'true'. Moral naturalism is hospitable to deep moral pluralism, although it is inhospitable to views of morality that contradict sound science [32][33][34]. This meta-ethical grounding for and of neuroethics in the brain and behavioral sciences arouses philosophical suspicions, too many to entirely forestall in this paper. Rather, we can only make a few statements about such concerns here. In our view, while neuroethics has no choice but to be naturalistic in its approach to studying sociality and morality, neuroethics is not automatically beholden to ethical naturalism, since neuroethics need not agree that all moral meanings and values, and any ethical principles adjudicating among them, entirely reduce to the status of objective facts about the natural world. Nor must neuroethics take a strictly eliminativist stance against freedom, agency, and responsibility, but need only consider scientifically acceptable versions that find responsible autonomy in learned capacities for managing individual conduct and social relations, rather than in some mythical 'free will' ignoring natural laws or non-existent 'self-conscious decisions' always instantaneously controlling actions (compatibilist theories grounded on social neuroscience are better scientific candidates, for example [2,[35][36][37]). Neuroethics need not necessarily heed extant ethical theories' criteria for possessing freedom or autonomy (such as "the capacity for purely rational thinking" and the like); nor need neuroethics be premised on any 'neuro-essentialism' positing that a conception of the human being cannot exceed our neurobiology [3,13,38]. 
Neuroethics is not reducible to any specific amount of science, yet science is crucial for meta-ethics and neuroethics. By ensuring scientific continuities between actual moral conduct in the natural world, inquiries into the conditions permitting such conduct, and prescriptions for modifying how people morally conduct themselves, neuroethics remains fully committed to the scientific worldview without reducing ethical philosophy to the sciences themselves. On this meta-ethical view, (neuro)scientific knowledge about human (or any other species') morality is not incompatible with all ethical philosophizing. While ethical theorizing that relies on entirely disproven notions must be eliminated, claims that evolutionary psychology, sociology, or the cognitive sciences will eliminate morality itself (and obviate all ethics) are hasty and overblown [39]. The scientifically-based meta-ethics of neuroethics will find plenty of genuinely natural morality among humans to research, and this meta-ethics will leave room for neuroethics to engage ethical philosophy. Some, but not all, ethical philosophies are refuted by the fact(s) that: many people are not fulfilling morality's altruistic expectations; people's moral intuitions have emotional roles set by evolution instead of cognitive ways to track moral realities; people's intuitive notions of how morality works are quite mistaken; and ordinary language about morality is replete with confusions and errors. Ethical philosophies do not all agree about the cognitive or motivational capacities of ordinary morality, and they don't all share the same degree of reliance upon what people happen to think or say about morality. Ethical philosophers typically focus on thoughtfully guiding people toward improving one or another system of morality, and the shortcomings obtained therein. For example, the discovery that people typically fulfill only minimal expectations of morality, and are sentimentally partial and partisan towards those like themselves who live in proximity, is not exactly the stunning revelation for much of philosophical ethics (or for religious ethics) that some brain scientists may have made it seem [39]. Similarly, when one or another ethical theory or meta-ethics has defined 'morality' or 'moral knowledge' in terms later discovered to be inadequate by the brain or behavioral sciences, philosophers should refrain from announcing that "morality does not exist," and instead focus on discrediting (sources of) poor definitions of morality [40][41][42]. Despite centuries of misguided and mistaken ethical theorizing about the origins and foundations of morality, it has been and remains a robust part of human social life. Neuroethics can be an equally robust and perhaps better philosophical ethics. In general, philosophical ethics can handle less than ideally moral people and can avoid defining morality in entirely fictitious terms, but ethical theories cannot keep supposing that their preferred modes of ethical reasoning are immune to discoveries about actual human cognition. A scientifically based meta-ethics, with its focus starting from an understanding of moral cognition, emotion and behaviors in the human world, means that ethical philosophizing can be held accountable by neuroethics, not the other way around. No philosophical ethics, not even utilitarianism or deontology, can enjoy presumptive ethical status anymore. 
Neuroscience's liberation from reliance upon notions of morality established by antiquated ethical theories (theories developed in the absence of knowledge about cognition) is only half-heartedly recognized at present. For example, the relative immaturity of neuroethics as a discipline and practice is manifested by the curious way that some neuroscientists are attempting to map correspondences between specialized cognitive/affective functions and the modes of reasoning inherent to traditional ethical theories (for overview, see [1,43]). Why just those ethical theories, rather than others? Are we forever wedded to utilitarianism and deontology (or any other lineup of extant theories that one would care to list)? Imagine if epistemological inquiries were conducted in this manner. That some brains are capable of thinking 'deontologically' and others in a 'utilitarian' manner when confronted with an artificial situation having only two possible outcomes only indicates that brains are indeed trainable in those two ways (which we knew well before brain imaging). But no amount of brain imaging could show that those are the only two ways of moral thinking. The far more interesting kinds of information from neuroscience do not involve what we already know about what brains can do, but rather what brains could potentially do differently, and perhaps better. What will brain images look like from people who transcend the artificial utilitarian-deontological option when dealing with messier real-world situations? We should be looking at a neuroscience of the morally possible, not just the ethically necessary. To be sure, while we are pointing a way towards developing a scientifically adequate meta-ethics, this essay does not offer a 'correct' ethical philosophy grounded in that meta-ethics. Even the lengthy process of weeding out disproven ethical theories (not attempted here) leaves no obvious single winner in its place; the negotiation between the brain and behavioral sciences and adequate ethical theorizing will be an on-going process for as long as new things are learned about cognition. Instead, we here propose undertaking three modest meta-ethical goals: first, grounding a new meta-ethics for neuroethics on empirical knowledge about actual people in their societies; second, questioning whether a prescriptive neuroethics must remain beholden to such things as socio-cultural norms or traditional ethical theories; and third, suggesting how a new neuroethical framework with objectively principled outcomes could be erected. This path from real people to normative prescriptions, and then on to neuroethics principles, is neither obvious nor easy, especially because outdated meta-ethical presumptions crowd the philosophical landscape. Surmounting conventional prescriptivity still appears especially daunting. How can neuroethics go about selecting and elevating conventional prescriptions into objective principles? Since the meta-ethics of neuroethics must follow the brain and behavioral sciences in their view of morality as socio-culturally embedded, doesn't that imply that all prescriptive judgment is forever limited to relativistic status? And if neuroethics would instead find its principles in some other source besides actual human cultures, would that search amount to a betrayal of its confessedly scientific foundations? We hold that there is a meta-ethical way past this dilemma. We resist a simplistic forced choice between many diverse social conventions and a unitary trans-cultural ethics for doing prescriptive neuroethics. 
Neuroethics' moral naturalism and its reliance on the brain and behavioral sciences, especially cultural anthropology, social neuroscience, and cognitive neuroscience, cannot endorse that dichotomy. Brains are certainly embodied, and people are thoroughly socialized and encultured beings [3,13]. So philosophical appeals to some mythical capacity for pure reason or detachment from group identity can't work; people can do far more than robotically express one culture. People are not individuals with accidental cultural identities, nor are their identities exhausted by the folkways of some culture or another. At the same time, these encultured humans possess intelligent capacities to cognitively reflect on cultures [9]. Furthermore, most people can appreciate how they stand with respect to cultures, they can enjoy some emotional ability to empathize with others in different cultures, and they can learn from other cultures. The very fact that humans enjoy quite sophisticated cultures is precisely the reason why we can defensibly assert that we are not forever trapped within just one culture (or sub-culture, etc.) or another. Indeed, ethnic and cultural identities could not be constructed, deliberately managed, and carefully sustained against hegemonic and assimilationist pressures unless ethnic and cultural identity could be objects of reflective evaluation and comparison [44,45]. Enculturation is most powerful when it is least visible, but it can come into view in many ways. People can realize how other cultures are different, yet at the same time, not so different. People can re-evaluate their own culture's habits and norms; people can revise their social structures in light of novel goals and ideals; people can combine cultural features or move to other cultures; people can respect and value people of other cultures without necessarily valuing everything about those other cultures; and people of different cultures can work on converging agreement on shared principles (although perhaps for differing reasons). In short, people can feel respectfully beholden to their own cultures even while they perceive that social norms can, or sometimes should, be modified. Humanity is a species that re-designs its moralities, just as it designs and modifies all social technologies. Modifying moralities cannot be a path towards some trans-cultural position, however, since at every stage of socio-technological development, we are still talking about thoroughly encultured humans [46-48]. But it is a path that permits recognition of the locations and limitations of any given socio-cultural convention. This human capacity helps to explain how cultural evolution happens at all. Furthermore, it turns out to be no paradox that we can travel across and partially transcend socio-cultural boundaries through our capacities for understanding the very existence and permeability of those boundaries. Summarizing, neuroethics should participate in forging a new, objective meta-ethics based upon scientific research into human societies and their moralities. This new meta-ethics in turn grounds the needed neuroethical testing of ethical theories for adequacy, which then permits neuroethics to suggest improvements to our understandings of morality and to ethical theories, and to explain why humans have the cognitive resources to reflectively modify socio-cultural inheritances.
Modifying social structures such as moralities is far from easy; in the short term, domestic appeals to social convention get much practical and policy work done. All the same, methods yielding short-term, local results don't necessarily work beyond their social range of application, or their conventional premises. Principled and cosmopolitan neuroethics We now come to the question of whether the evident capacity of neuroethics to be prescriptive on its own philosophical terms provides for the further ability to become objectively principled as well. Although neuroethics can and should take advantage of a new meta-ethics grounded in the brain and behavioral sciences to acquire some degree of liberation from socio-cultural conventions, cultural ideologies, and outdated ethical theories, this progress is insufficient to guarantee that neuroethics could erect an objectively principled ethical position. Understanding which conventions, ideologies, and ethical theories to avoid is hardly the same thing as discovering the one right ethical system. Neuroethics could still remain forever fractured, prescriptive only for local situations and social contexts, and valid only by being premised on group or cultural norms. Within any actual society, of course, prescriptive neuroethics can seem properly principled, as it contributes to the reflective stability of norms for that society. The larger question is whether a principled neuroethics can apply to far more than just local contexts in a piecemeal fashion. Will the philosophy and practices of neuroethics rise above social or cultural relativism? Can neuroethics provide something of objective value to the world at large? Thus far, this essay has sought to arouse a creative tension between (A) the way that neuroethics respects how human brains are embodied, socialized and encultured; (B) the expectation that neuroethics can and will do its prescriptive work with great sensitivity to socio-cultural-historical contexts; and (C) the hope that neuroethics could approach an inter-cultural level of principled philosophical ethics. But we hold that the tension within and between these points is resolvable by the fusion of their concepts and tasks. Specifically, we think that a new meta-ethics for neuroethics is already entailed within points A and B: that is, respect for both the power of enculturation and the intellectual flexibility to modify cultures. People are always encultured, yet they can be thoughtful and creative individuals, who can contribute to cultural comparison and change. We believe that this position points the way toward fulfilling the hopes of point C. As we view it, a new meta-ethics for neuroethics already contains some principled treatments of sociality and enculturation that bridge the transition from how humans are successfully social, to ways they should continue to be social. For example, humans are properly encultured to permit opportunities for their flourishing, yet cultural essentialism is unsound. So we should be suspicious of social groups preventing individuals from changing their self-identities, dictating the identities of their members, aggressively assimilating new members, or denying their members' efforts to learn and think about the ways of their culture and those of other cultures. Ethnocentrism is similarly unsound, so we should be suspicious of any society claiming to exemplify a 'correct' way of life.
Along these lines, we can see why excessive cultural elitism is unsound, since no society/culture is so elite or correct that it can reasonably classify the members of other societies as sub-human or less worthy of respect or dignity. Cultures still permit people to pass moral judgments on others (that's the point of having a morality), but individuals in other cultures are still to be viewed as worthy candidates for moral regard [49]. Following this train of thought, excessive nationalism looks unsound as well. While citizenship can be a valuable status for people, no country should presume that a person's identity or loyalty is primarily characterized by one's current domicile or citizenship, and people should not automatically prioritize their nation's interests. Because the new meta-ethics of neuroethics will also remain skeptical towards any ethical theorizing that lays claim to trans-cultural or absolute status, this stance renders implausible any political theory reliant upon those sorts of foundations, such as certain kinds of political liberalism or social contractarianism grounded on a vision of 'true' human nature that fails to recognize and regard biology and culture [50]. A new meta-ethics will not merely describe how humans are social and moral within cultures, since it will also comprehend how ecologies are capable of providing conditions for successful human understanding and improvement of their cultures. Most relevantly, this neuroethical meta-ethics will grasp the proper cultural conditions minimally needed for people to intelligently manage, sustain and improve their moralities. Inappropriate cultural conditions are hence specifiable as well, and include: obstructing knowledge about how sociality and morality works; preventing people from intelligently questioning and creatively modifying their social structures and moralities; isolating people to keep them ignorant about other cultures; promoting ideology that one's own culture must be uniquely correct; encouraging people to demean and demonize those in other cultures; and generally stunting the human capacity (such as it is) for empathy and cooperation with others. Cultures that foster such inappropriate conditions are not fulfilling their proper function, basically by failing to enhance intelligent human flourishing, which is the entire point of being encultured humans. The universality of the use of culture across humanity supplies the key to locating cultural norms to encourage. Such norms include: respect for individuals who value their identities or are changing their self-identities; opportunities for people to acquire capacities for flourishing; protection of individuals from cultural insulation, isolation, and ignorance; denial that any society has exclusively correct norms; disdain for efforts to cast some peoples outside the circle of full humanity; and valuation of people for themselves and not merely with regard for their heritage, citizenship, or political status. One tradition of ethical and political philosophy highly prioritizes all of these recommendations: cosmopolitanism. Humanist in its ethics, liberal in its attention to rights, and open to secular as well as religious freedom (but not oppression or aggression) in its politics, cosmopolitanism has long supported ethnic toleration, cultural pluralism, equal rights, liberal democracy, global cooperation, and international peace [51-55]. Cosmopolitanism cannot be uncritically adopted, of course.
Over the course of its history, some varieties of cosmopolitanism have harbored hegemonic, essentialist, trans-cultural, or putatively absolutist principles among their foundations. Cosmopolitanism has occasionally included among its first principles unrealistic expectations about such things as a human motivation to prioritize and follow reason; a human capacity for deep empathy and equal concern for all; a willing suspension of concern for local matters to tackle distant situations; an eager altruism for supplying strangers with plentiful support at the cost of much personal wealth; an excessive tolerance for moral and cultural pluralism; an anti-pluralist hope for one hegemonic world culture; a determination to view humanity only as one community; or a drive to abolish countries in favor of a single world government. Varieties of cosmopolitanism can evidently be not only idealistically hopeful about humanity, but as unrealistic as any ethical or political philosophy could be. We opine that the naturalistic meta-ethics for/of neuroethics cannot support the aforementioned cosmopolitanisms that are reliant on these sorts of expectations. However, we assert that a modest cosmopolitanism, compatible with typical moral performance, hospitable to people enjoying ethnic diversity and democratic self-determination, and workable with contemporary political structures such as nations, international bodies, and global accords, makes a good fit with the new meta-ethics as we have formulated it [56-59]. Despite prevalent caricatures of cosmopolitanism as a way for privileged Westerners to 'discern' agreeable moral rules ecumenically 'shared' by other cultures, only to blunder into cultural misunderstandings and perpetuate colonialist stereotypes, we venture to support a more philosophically sophisticated cosmopolitan stance. We caution that neuroethics would be wise to abstain from commitments on such broader issues as wealth egalitarianism, economic globalization, personal property rights, or humanity's political solidarity [60,61]. Judging the appropriate political frameworks for realizing cosmopolitan visions, or deciding whether and when primary citizenship could be transferred from a country to a world polis, is well beyond the purview of neuroethics alone. All the same, a principled, cosmopolitan neuroethics can be involved with offering recommendations for inter-cultural deliberation about crucial issues such as guaranteeing basic freedoms, protecting everyone from harms, promoting material and cultural opportunities for all, and preserving peoples' capacities for self-governance. Four guidelines of a principled neuroethics We have opened a reasoned path from the scientific foundations for a novel objective meta-ethics towards a principled cosmopolitan neuroethics. The next task of translating the high ideals of this cosmopolitan neuroethics into practical prescriptions about potential applications of neuroscience and neurotechnologies is not any easier. What are needed are mid-level principles to guide ethical and policy deliberations in concrete situations. Fortunately, neuroethics is hardly the first discipline to seek those sorts of principles. The heritage of medical ethics is conspicuously available in this regard. If neuroethics is to transcend social conventionalism, the relationships between neuroethics and medical ethics are necessarily going to be complex.
As a discipline, neuroethics is a sub-field of bioethics, which considers the moral implications of the life sciences, and since study of neural systems is among the life sciences, neuroethics falls under bioethics as an academic discipline [2]. Yet, while engaging the inter-disciplinarity of bioethics, the methodology of neuroethics will need to be partially liberated from bioethics and from medical ethics in particular [2,3,13,43]. Medical ethics to date has been dominated by problems of Western medicine and ideals of Western philosophy, which are premised on normative notions of the 'moral individual', what counts as 'standard health', and concerns for the 'autonomous patient' (what we have coined as 'MIS-HAP'). Indeed, medical ethics has had a generally conservative track-record, as befits a field trying to prevent medico-moral mishaps in these domains [62,63]. By contrast, in practical application, neuroethics is both more specific and broader than medical ethics, because neuroethics must consider how and why individuals, non-state organizations, and governments will be utilizing brain/mind modifications for pursuing the widest imaginable variety of goals from pleasure to violence, both within countries and across international borders [3,13,31,64]. As for undertaking principled ethics, neuroethics will partially transcend medical ethics, precisely because neuroethics must regard modifications to the brain/mind made for any reason within and across cultural or political boundaries, including transitions to future iterations of humans, cultures, and/or beings yet to emerge. It must be acknowledged that medical ethics and its application of principles such as beneficence, non-maleficence, respect for autonomy, and justice have been truly useful for grappling with the impacts of scientific knowledge and technologies [65,66]. These 'mid-level' ethical principles have made much good sense in the scientific context of medicine, and within the social contexts of Western culture, but they are not without contention [67], and in the light of neuroethical questions and dilemmas, we posit that they no longer entirely suffice. Novel neuroscientific technologies will soon expose the inherent limitations of all four principles as understood so far. For example, respect for autonomy presumes that there is an individual who has a stable personal identity over time, but radical cognitive modifications will permit the creation of new selves. Whose autonomy has been violated when someone has re-written most of their own memories? Beneficence presumes that there are objectively identifiable goods to be pursued by health care providers, but radical modifications will be undertaken by individuals who will decide for themselves what is valuable for their own lives. Who is to judge the harms of radical cognitive modifications when undertaken by people to gain competitive advantages in the workplace? Non-maleficence presumes that there are objectively identifiable harms for health care providers to avoid, but radical modifications will be chosen by individuals who will decide for themselves what 'harms' are acceptable. Where is the harm in eliminating the need for sleep without side-effects? Justice presumes that there are scarce medical resources to be distributed by health care providers (or governments) in some equitable manner, but some kinds of radical modifications will be selectively funded by communities, corporations, militaries, and countries to make people more useful in assigned jobs.
Where is the injustice in obtaining a radical modification in order to stay employed in a well-paying profession, or receiving radical adjustments to courage and sensitivity levels to heighten performance as a peace officer? The tradition of Western medical ethics and the four principles mentioned here (and similar principles gone unmentioned) [68,69] are not well-designed for such future scenarios. To be perfectly fair, however, justifications for principled medical ethics have frequently appealed to the way that beneficence, autonomy, non-maleficence, and justice are widely respected by many of the world's civilizations, ethical systems, and wisdom traditions [70][71][72]. It is not a coincidence that twentieth century medical ethics has overlapped a great deal with modern cosmopolitan ideals. Selected ideals of medical ethics could be revised for fulfilling cosmopolitanism to a much higher degree. Practical continuity between principled neuroethics and medical ethics has many advantages. We agree with Eric Racine's pragmatic view that neuroethics should transformatively adapt useful bioethical work, rather than reinvent or duplicate bioethics [2]. While a new scientific meta-ethics may supersede outdated ideologies and philosophies, such meta-ethics cannot directly derive specific moral codes, so it would be impractical for a principled neuroethics to attempt a blank-slate start [3,5,13,31]. Evolutionary continuity reconciles this principlism with pragmatism (a pragmatic heuristics unable to suggest guiding principles is empty, after all), and the kind of principlism suggested here should be understood as the ethical prioritization of important moral ideals, rather than the rationalistic imposition of moral 'axioms' from which applied deductions must derive. This pragmatically flexible approach fully permits thoughtful balancing and adjudication among these ethical priorities when applying them to specific cases, and it encourages their perpetual testing and reconstruction in a manner consistent with the scientific meta-ethics of neuroethics. Summing up thus far, we have argued that progress towards an objectively principled neuroethics can be made by naturalistically reconstructing ideals of medical ethics and augmenting them according to a modest cosmopolitanism. To illustrate how this pragmatic ethical evolution may proceed, we suggest four augmented guidelines for international consideration: empowerment, non-obsolescence, self-creativity, and citizenship. Augmenting beneficence yields empowerment. The duty to advance the welfare of others should be extended to the duty to increase the capabilities of people to autonomously live independent and fulfilling lives. A modification would be considered to be unethical if it causes unreasonable harms to a person, makes a person more dependent on others (especially to the point of losing effective citizenship, the fourth guideline), or reduces a person's capacity to pursue one's own well-being. Augmenting non-maleficence yields non-obsolescence: The duty to avoid unreasonably harming people should be extended to avoid the creation of obsolete people, especially 'single-use' people that are so irreversibly specialized by radical body/brain modifications that career and lifestyle options become too limited. A modification is unethical if it unreasonably risks producing a person with peculiar or radical 'enhancements' that excessively restrict future self-creativity, or if it reduces empowerment or citizenship. 
Augmenting autonomy yields self-creativity: The right of persons to autonomously direct their lives should be extended to the right to re-create themselves for enriching their lives. Access to self-creative modifications, even to the point of making new selves, should be protected, so long as other guidelines are respected along the way. Self-creativity must not be conflated with individuality or peculiarity; people should also be allowed to re-create themselves to more closely conform to desired group standards (so long as those standards do not themselves involve loss of autonomy or violations of the other three guidelines). A modification is unethical if it contracts creativity; for example, by reducing responsible autonomy or capacity for further creativity, reducing basic capabilities for supporting one's self, or limiting potential competencies to improve one's standard of living and well-being. Augmenting justice yields citizenship. The duty to fairly distribute scarce goods should be extended to the duty to guarantee everyone's ability to be a free, equal, law-abiding, and participatory citizen. A modification would be unethical if it risks debilitating a person's capacity for fulfilling the roles and responsibilities of engaged civic life, or enjoying the rights and obligations of citizenship. While we are confident that a cosmopolitan neuroethics (indeed, a cosmopolitan bioethics in general) would be wise to include principled guidelines like these, we don't wish to exaggerate the priority of just these four specific principles. A different formulation of cosmopolitanism and a different selection from traditions of medical ethics would result in variant sets of guidelines. The process of finding the best mutual adjustment among a new neuroscientific metaethics, improved ethical theorizing, and inter-cultural principles for global utility has only begun. Cosmopolitan neuroethics We have claimed that neuroethics can find its philosophical foundations in much the same way that its scientific foundations are found in understanding the human brain. The objectivity of the new meta-ethics for neuroethics cannot exceed the degree of scientific objectivity involved, but there is robust objectivity available nonetheless, and that objectivity can infuse a cosmopolitan neuroethics as well. We propose fairly objective cosmopolitan principles for neuroethics out of a responsible sense of pragmatic need in light of current world affairs and with a view to recommending international protocols, conventions, and treaties [73]. Our approach seeks ethical objectivity not in trans-human rationality but in inter-human deliberations [48]. Only inter-cultural principles are sought here, not trans-cultural or absolutist norms. We have not paradoxically acknowledged cultural and moral pluralism only to heedlessly forge ahead with imposing a universalistic ethics or unitary vision of 'the' good life without regard to actual human experience. We do not risk any self-contradiction by first protesting against both the traps of conventionalism and the dreams of absolutism, and then offering guidelines that should sound persuasive to many if not most contemporary cultures. Nor do we risk self-contradiction by dismissing principles grounded on essentialist and eternal visions of humanity and then resorting to an objective meta-ethics grounded in how humans commonly attempt to live best at present. 
Our proposal does not elevate one culture's norms to universalist status over humanity, but rather it seeks the universal norms inherent to cultures doing their proper work for humanity. Objective ethical theorizing need not be held hostage to self-destructive cultural ideologies and outdated philosophies inconsistent with the natural facts of human cognition, sociality, and moral capacity. As we are all well aware, plenty of cultures instill fantastical notions about mental abilities, inculcate doctrines hostile to healthy social relationships, and perpetuate power inequalities by reserving full moral autonomy only to the few. Refusing to permit their veto over an objective global ethics is entirely consistent with the cosmopolitan standpoint on proper cultural functioning and promoting human flourishing. The modest cosmopolitanism urged here lends serious support to efforts to preserve cultural diversity and self-determination in the face of assimilation and hegemony (although it cannot aid cultural essentialism or isolationism). Our four proposed principles can help address understandable worries that radical biomedical enhancement could undermine human sociality and solidarity [74], and they can preserve moral achievements made by civilizations around the world to date, which are impressive enough that even postmodern pluralists tacitly appeal to their validity. While conservative in spirit from a certain perspective, these principles can also be viewed as liberal and even liberating. If the advocates of the most daring transhumanist and post-humanist visions are able to admit that contemporary adherence to these cosmopolitan principles would be fairly useful for heading towards their dreams from where humanity is standing today, then we may avoid accusations of having prejudiced ethical theory against them. Neuroethics cannot avoid its destined role in deep investigations of humanity and broad relevance to humanity's problems and potentials. Neuroscience, neurotechnology, and a host of other scientific and technological advancements can and will change the human predicament, if not the human being. Socio-cultural forces will both affect the scope and conduct of neuroscience and technology and be affected by them in turn. Neuroethics will remain valid, viable, and of value by boldly participating in the science-based evaluation of these interacting dynamics and by helping to flexibly guide those dynamics on an international, pluralized world stage.
Reconstruction of two-dimensional magnetopause structures from Cluster observations: verification of method Abstract. A recently developed technique for reconstructing approximately two-dimensional (∂/∂z≈0), time-stationary magnetic field structures in space is applied to two magnetopause traversals on the dawnside flank by the four Cluster spacecraft, when the spacecraft separation was about 2000km. The method consists of solving the Grad-Shafranov equation for magnetohydrostatic structures, using plasma and magnetic field data measured along a single spacecraft trajectory as spatial initial values. We assess the usefulness of this single-spacecraft-based technique by comparing the magnetic field maps produced from one spacecraft with the field vectors that other spacecraft actually observed. For an optimally selected invariant (z)-axis, the correlation between the field components predicted from the reconstructed map and the corresponding measured components reaches more than 0.97. This result indicates that the reconstruction technique predicts conditions at the other spacecraft locations quite well. The optimal invariant axis is relatively close to the intermediate variance direction, computed from minimum variance analysis of the measured magnetic field, and is generally well determined with respect to rotations about the maximum variance direction but less well with respect to rotations about the minimum variance direction. In one of the events, field maps recovered individually for two of the spacecraft, which crossed the magnetopause with an interval of a few tens of seconds, show substantial differences in configuration. By comparing these field maps, time evolution of the magnetopause structures, such as the formation of magnetic islands, motion of the structures, and thickening of the magnetopause current layer, is discussed. Key words. Magnetospheric physics (Magnetopause, cusp, and boundary layers) – Space plasma physics (Experimental and mathematical techniques, Magnetic reconnection) Introduction The magnetopause current layer has long been a focus of investigation, because physical processes operating in this region control energy and mass transfer from the solar wind into the magnetosphere.In most past studies, the structure of this boundary was examined under the assumption that it is locally one-dimensional (1-D), having spatial variations only in the direction parallel to n, the vector normal to the boundary surface.The determination of n has usually been based on the assumption that the magnetopause is totally planar and has a fixed orientation during a traversal.These studies paid special attention to the normal components of plasma flow and field, because they are directly related to net transport of mass and energy across the magnetopause and to dynamic behavior.However, in reality, the magnetopause layer could have significant two-or three-dimensionality and/or temporal variations.If this is the case, previous analyses might in some cases have been misleading. 
A technique utilizing single-spacecraft data to recover two-dimensional (2-D) magnetic structures in space has recently been developed and applied to magnetopause traversals (Sonnerup and Guo, 1996; Hau and Sonnerup, 1999; Hu and Sonnerup, 2000, 2003) and to flux rope observations in the solar wind (Hu and Sonnerup, 2001, 2002; Hu et al., 2003). In a proper frame of reference (the deHoffmann-Teller frame), where the structures are assumed to be magnetohydrostatic, time-stationary, and have invariance along the z direction, the equation j × B = ∇p holds and can be reduced to the Grad-Shafranov (GS) equation in the (x, y, z) Cartesian coordinate system (e.g. Sturrock, 1994):

∂²A/∂x² + ∂²A/∂y² = −µ_0 dP_t(A)/dA, (1)

where the partial magnetic vector potential, A(x, y) ẑ, is defined such that B = (∂A/∂y, −∂A/∂x, B_z(A)). The transverse pressure, P_t = p + B_z²/(2µ_0), the sum of the plasma pressure and the pressure of the axial magnetic field, and hence the axial current density j_z, are functions of A alone. The plane GS Eq. (1) is solved numerically as a Cauchy problem using plasma and magnetic field measurements along a spacecraft trajectory through the structures as spatial initial values. As a result, the magnetic field configuration and plasma pressure distribution are obtained in a region of the x-y plane surrounding the trajectory. This data analysis technique has been fully developed and described in detail by Hau and Sonnerup (1999), and successfully tested by use of synthetic data from several analytic solutions (Hau and Sonnerup, 1999; Hu and Sonnerup, 2001, 2003; Hu et al., 2003). Because the method is based on the magnetohydrostatic equation, in which inertia forces are neglected, its application to magnetopause traversals is, strictly speaking, limited to cases in which reconnection effects are weak or absent. This means that the local structure can be approximately regarded as a tangential discontinuity (TD). Note, however, that our definition of TD includes not only the traditional 1-D current sheet with no normal magnetic field component (B_n = 0), but also 2-D or 3-D current layers having structured field lines within a TD. The presence of internal structures, such as magnetic islands and localized channels of magnetic flux linking the two sides of the magnetopause, is allowed, unless the inertia terms contribute significantly to the momentum balance. In the simplest application, a constant deHoffmann-Teller (HT) frame velocity, V_HT, which describes the motion of the magnetic field structure past the spacecraft, is determined from standard HT analysis, using measured magnetic field vectors and plasma flow velocities (e.g. Khrabrov and Sonnerup, 1998). Then a co-moving frame, where the spacecraft moves across the structure with the velocity −V_HT, is defined such that x̂ = −(V_HT − (V_HT · ẑ)ẑ)/|V_HT − (V_HT · ẑ)ẑ| and ŷ = ẑ × x̂. The magnetic potential, A, at points along the x-axis, i.e. along the projection of the spacecraft trajectory onto the x-y plane, is calculated by integrating the measured B_y component of the field:

A(x, 0) = −∫_0^x B_y(x', 0) dx'. (2)

The space increment along the x-axis is obtained from the corresponding time increment via the constant HT frame velocity: dξ = −V_HT · x̂ dt. Since, as a result of the invariance in the z direction, the quantities p(x, 0) and B_z(x, 0) are both known along the x-axis, a functional fit of P_t(x, 0) versus A(x, 0) is used to approximate the function P_t(A) on the right-hand side of the GS Eq. (1).
Once dP_t/dA is known along the trajectory, it can be used in the entire domain in the x-y plane that is threaded by field lines (given by A = const.) encountered by the spacecraft. Outside of that domain, simple extrapolation of P_t(A) is used. The integration proceeds explicitly in the ±y direction, starting at y = 0 and utilizing B_x(x, 0) = ∂A/∂y|_(y=0), B_y(x, 0) = −∂A/∂x|_(y=0), and A(x, 0) as initial values (Hau and Sonnerup, 1999). As a result, the magnetic potential, A(x, y), is obtained in a rectangular domain surrounding the x-axis. The contour plot of A(x, y), called a field map or transect, represents the transverse magnetic field lines. The field component B_z(x, y) and the plasma pressure p(x, y) are computed from the functions B_z(A) and p(A), obtained by fitting to the measurements along the spacecraft trajectory. Determination of the orientation of the invariant (z)-axis is an important issue. If the spacecraft trajectory intersects a field line more than once, which commonly happens in magnetic flux rope observations, one can usually find the correct z-axis from single-spacecraft data by use of the condition that the three quantities p, B_z, and P_t take the same values at each intersection point (Hu and Sonnerup, 2002). For magnetopause traversals, however, multiple encounters of the same field line occur only near the center of the current sheet, whereas many other field lines are encountered only once. Furthermore, field lines encountered on the magnetospheric and magnetosheath sides of the boundary have pairwise the same A value but usually have different P_t values, indicating that the function P_t(A) has two branches (Hu and Sonnerup, 2003). This kind of behavior makes reliable determination of the invariant (z)-axis difficult: one can use only very few data points within the central current sheet for optimization of the choice of invariant axis and the resulting data fit to the functions P_t(A), p(A), and B_z(A). Because of this difficulty, the intermediate variance direction, computed from minimum variance analysis of the measured magnetic field (e.g. Sonnerup and Scheible, 1998), was often used as a proxy for the invariant axis in earlier studies (Hau and Sonnerup, 1999; Hu and Sonnerup, 2003).
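To make the recipe above concrete, the following Python sketch (not the authors' code) illustrates the two steps just described: integrating the measured B_y to obtain A along the trajectory, using dξ = −V_HT · x̂ dt, and then marching the GS equation away from y = 0. All names are hypothetical, a single-branch polynomial stands in for the two-branch P_t(A) fit, and the smoothing and tapering that published implementations use to control this ill-posed integration are omitted.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def reconstruct(t_s, bx, by, bz, p_pa, vht_dot_x_km_s, ny=31, ymax_km=3000.0):
    """Recover A(x, y) on a grid around the projected spacecraft path (SI units inside)."""
    # 1) Spatial coordinate along the trajectory: d(xi) = -V_HT . xhat * dt
    x_m = 1e3 * np.concatenate(([0.0], np.cumsum(-vht_dot_x_km_s * np.diff(t_s))))

    # 2) Potential along y = 0 from B_y = -dA/dx, i.e. A(x, 0) = -integral of B_y dx
    by_T = 1e-9 * by
    A0 = -np.concatenate(([0.0],
                          np.cumsum(0.5 * (by_T[1:] + by_T[:-1]) * np.diff(x_m))))

    # 3) Transverse pressure P_t = p + B_z^2/(2 mu0) fitted as a function of A
    #    (one polynomial branch for brevity; the paper fits two branches plus tails)
    pt = p_pa + (1e-9 * bz) ** 2 / (2.0 * MU0)
    dpt_dA = np.polyder(np.poly1d(np.polyfit(A0, pt, 4)))

    # 4) Explicit march of d2A/dy2 = -mu0*dPt/dA - d2A/dx2 in the +/- y directions
    y_m = 1e3 * np.linspace(-ymax_km, ymax_km, ny)
    dy = y_m[1] - y_m[0]
    i0 = ny // 2                              # row containing the spacecraft path (y = 0)
    A = np.zeros((ny, x_m.size))
    dAdy = np.zeros_like(A)
    A[i0], dAdy[i0] = A0, 1e-9 * bx           # B_x = dA/dy supplies the initial slope
    for sgn in (+1, -1):
        rows = range(i0, ny - 1) if sgn > 0 else range(i0, 0, -1)
        for k in rows:
            d2Adx2 = np.gradient(np.gradient(A[k], x_m), x_m)
            d2Ady2 = -MU0 * dpt_dA(A[k]) - d2Adx2
            A[k + sgn] = A[k] + sgn * dy * dAdy[k] + 0.5 * dy**2 * d2Ady2
            dAdy[k + sgn] = dAdy[k] + sgn * dy * d2Ady2
    return x_m / 1e3, y_m / 1e3, A            # km grids; contours of A are the field lines
```

Contouring the returned A reproduces the kind of transect shown in the field maps discussed below; without filtering, however, the march quickly becomes noisy, which is why the extended schemes cited above are used in practice.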
In the present study, the reconstruction technique, as improved by Hu and Sonnerup (2003), is applied to two magnetopause traversals by the Cluster spacecraft, both occurring in the tail flank on the dawn side.In a previous study, using data from the AMPTE/IRM and UKS spacecraft, the spacecraft separation distance was only about 40 km and the resulting two field maps showed only minor differences (Hu and Sonnerup, 2000).For the events addressed in this paper, the four spacecraft formed a tetrahedron and were separated by about two thousand km from each other, allowing us to evaluate the model assumptions, such as two-dimensionality and time independence, and also to determine the orientation of an approximate invariant (z)-axis with more accuracy.In Sect.2, we test the reconstruction technique with a Cluster event in which the encountered magnetopause appears as an approximately time-stationary current layer of the TD-type.In Sect.3, we apply the method, as an experiment, to an event showing non-negligible temporal variations for which the reconstruction results obtained separately for two of the spacecraft are quite different.In the last section, we summarize our results and discuss their significance and implications.Our procedure to select an optimal invariant axis is described in Appendix A. (−7.89, −17.11, 3.25) R E in GSE on 30 June 2001.The panels, from top to bottom, show ion number density, ion temperature, intensity and three components of the magnetic field in GSE coordinates, and ion bulk speed, respectively (black: spacecraft 1 (C1), red: C2, green: C3, blue: C4).The interval enclosed by the two black vertical lines is used in the reconstruction based on C1 data, while that enclosed by the green lines is in the reconstruction based on C3 data. Background information We utilize data from the Cluster Ion Spectrometry (CIS) and the Flux Gate Magnetometer (FGM) instruments.The CIS instruments measure full 3-D ion distribution functions and moments (Rème et al., 2001), with a resolution up to the spin rate (∼4 s).The FGM experiment can provide magnetic field measurements at time resolutions up to 120 vector samples/s (Balogh et al., 2001), but only spin-averaged data with ∼4-s time resolution are used throughout this study.For our two events, the CIS instruments were fully operational on spacecraft 1 and 3 (C1 and C3).Additionally, after appropriate recalibration, the CODIF portion of CIS on board C4 delivered reliable velocity measurements.The FGM instruments on all four spacecraft were operating for the two events.However, since the reconstruction requires reliable pressure measurements, field maps can be produced only from C1 and C3. On 30 June 2001, around 18:12 UT, the Cluster spacecraft were moving from northern high-latitude regions toward the tail flank on the dawn side.An inbound, complete crossing of the magnetopause occurred when the reference spacecraft (C3) was located at (X, Y, Z)∼ (−7.89, −17.11, 3.25) R E in the GSE coordinate system.Shown in Fig. 
1 are, from top to bottom, time plots of ion number density, ion temperature, magnitude and three components of the magnetic field in GSE, and ion bulk speed, respectively.The black, red, green, and blue lines represent the measurements by C1, C2, C3, and C4, respectively.Plasma data for C1 and C3 are provided by the CIS/HIA instrument, which detects ions without mass discrimination.The velocity data for H + ions are provided by the CIS/CODIF instrument on board C4.The figure shows that the Cluster spacecraft were in the magnetosheath, which is characterized by high density (N ∼10 cm −3 ) and low temperature (T ∼1 MK), until ∼18:12 UT, although signatures associated with a flux transfer event (FTE), such as magnetic field perturbations and a temperature enhancement (for a review, see Elphic, 1995), were found at around 18:11 UT.The local magnetosheath magnetic field was tailward/dawnward/southward.The spacecraft then crossed the magnetopause and entered the plasma sheet where the temperature is much higher (∼20 MK) and, in this event, the field magnitude is slightly smaller than in the magnetosheath.The orientation changes of the magnetic field indicate that the time order of the magnetopause traversals was C3, C2, C1, and C4. We used the following criteria when selecting this crossing as a good test case: (1) The slope of the regression line in the Walén plot (e.g.Khrabrov and Sonnerup, 1998) is small, indicating strongly subalfvénic flow in the HT frame.This means that the boundary encountered is likely to be TDlike rather than RD-like, and that inertia effects associated with field-aligned plasma flows can be neglected.(2) A good deHoffmann-Teller frame with a constant HT velocity is found.This indicates that motion and time evolution of the structures are negligibly small in the HT frame and also that the MHD frozen-in condition is well satisfied.(3) The speed of the boundary motion along n, calculated, for example, as V H T •n, is small enough to give a sufficient number of measurements within the magnetopause current layer, so as to allow for a good functional fitting of P t (A) and accurate recovery of meso-scale current sheet structures.These criteria can be used for identifying events for which the model assumptions are likely to hold and which are therefore suitable for the reconstruction analysis. The time interval between the two black vertical lines is used for reconstruction from C1 data, whereas that between the green lines in the figure is for reconstruction from C3.These intervals include a number of data samples in both the magnetosheath and in the magnetosphere.Their start times are chosen such that variations related to the FTE are outside the intervals.The reason for this choice is that temporal variations and/or inertia effects, which cannot be taken into account in the current technique, could be significant in the FTE structures.We assume in this study that only ions, assumed to be protons with isotropic temperature, T = (2T ⊥ +T )/3, contribute to the plasma pressure. 
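The three selection criteria listed above can be screened numerically before any reconstruction is attempted. The Python sketch below, with our own (hypothetical) function names, computes a constant HT velocity by least squares, its quality measure cc_HT, and the Walén slope; ion bulk velocities in km/s, magnetic field in nT, and mass density in kg/m³ are assumed as inputs.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def ht_frame(v_km_s, b_nT):
    """deHoffmann-Teller velocity minimizing sum |(v_i - V_HT) x B_i|^2."""
    b = 1e-9 * np.asarray(b_nT, float)
    # K_i = |B_i|^2 I - B_i B_i^T  (one 3x3 matrix per sample)
    K = np.einsum('i,jk->ijk', np.sum(b**2, axis=1), np.eye(3)) \
        - np.einsum('ij,ik->ijk', b, b)
    vht = np.linalg.solve(K.mean(axis=0),
                          np.einsum('ijk,ik->ij', K, v_km_s).mean(axis=0))
    # quality: correlation between components of -v x B and -V_HT x B
    e_data = -np.cross(v_km_s, b)
    e_ht = -np.cross(vht, b)
    cc_ht = np.corrcoef(e_data.ravel(), e_ht.ravel())[0, 1]
    return vht, cc_ht

def walen_slope(v_km_s, vht, b_nT, rho_kg_m3):
    """Regression slope of HT-frame velocity vs. Alfven velocity components."""
    v_rem = 1e3 * (np.asarray(v_km_s, float) - vht)            # remaining flow [m/s]
    v_alf = 1e-9 * np.asarray(b_nT, float) / np.sqrt(MU0 * np.asarray(rho_kg_m3))[:, None]
    return np.polyfit(v_alf.ravel(), v_rem.ravel(), 1)[0]

# criterion (3): boundary motion along the normal, e.g. vn = np.dot(vht, n);
# a small |vn| leaves many samples inside the current layer for the P_t(A) fit.
```

A small Walén slope, a cc_HT close to 1, and a modest normal speed together flag a crossing as a suitable candidate for the reconstruction analysis.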
Reconstruction from spacecraft 1 crossing For the magnetopause encountered by C1, the Walén slope (slope of the regression line in a scatter plot of the velocity components in the HT frame, V − V_HT, versus the corresponding components of the Alfvén velocity, B/√(µ_0 ρ)) is 0.3430. The slope is much smaller than unity, indicating small field-aligned velocities in the HT frame, a result that is consistent with a TD. The minimum variance analysis (e.g. Sonnerup and Scheible, 1998) of the magnetic fields measured by C1 in the interval 18:12:00-18:12:49 UT, using the constraint B_n = 0 (referred to as MVABC hereinafter), yields the magnetopause normal vector n = (0.2003, −0.9654, 0.1671) in GSE. For this crossing and throughout this paper, we use the variance analysis with this constraint because, for all the crossings examined in this study, the analysis without the constraint results in a rather small ratio of the intermediate to minimum eigenvalues (< 5), so that the normal determined may not be reliable. The usage of the constraint is justified for this event, since the Walén test shows consistency with a TD. The HT analysis for the same interval results in a constant HT frame velocity, V_HT = (−236.6, −83.9, −8.5) km/s, with the correlation coefficient between the components of −V × B from the set of discrete measurements and the corresponding components of −V_HT × B being cc_HT = 0.9753, indicating that a relatively good deHoffmann-Teller frame was found for this boundary (the value cc_HT = 1 corresponds to an ideal HT frame). The magnetopause motion along n is V_HT · n = +32.2 km/s. The positive sign indicates outward magnetopause motion, as expected for an inbound crossing. By following the procedure described in Appendix A, the orientation of the optimal invariant axis was found to be ẑ = (0.5941, −0.1160, −0.7960) in GSE. Figure 2 shows the resulting data points of P_t versus A, obtained from the spacecraft measurements, and the corresponding fitting curves for two separate branches. In constructing the diagram, we have used a slightly modified reconstruction technique in which a sliding-window HT calculation is incorporated so as to allow for temporal variations in the velocity of the magnetopause structures as they move past the spacecraft (Hu and Sonnerup, 2003); the reason for this procedure will be mentioned later. The sliding-window HT analysis yields a set of HT frame velocities, {Ṽ_HT}, one vector for each data point sampled during the analysis interval. The Ṽ_HT vectors vary from point to point along the spacecraft trajectory, resulting in a curved spacecraft trajectory in the reconstruction (x-y) plane. The calculation of the magnetic potential A is then modified to a line integral, A = ∫ (−B_y dx + B_x dy), evaluated along the curved trajectory. The black curve in Fig. 2 is the magnetosheath branch of P_t(A), fitted by a high-order polynomial to the data samples (open circles) in the magnetosheath and in the central current sheet, a region of intense axial current density (large slope of P_t(A)). The gray curve, fitted to the data samples (stars) obtained on the magnetospheric side, is the magnetosphere branch of P_t(A). Exponential functions, attached beyond the measured A range, are used to generate the field map in regions of the x-y plane containing field lines that are not encountered along the trajectory. We describe a reasonable way to determine the extrapolating functions in Appendix A. The recovered magnetopause transect and the plasma pressure distribution are shown in Fig.
3.An improved numerical scheme, developed by Hu and Sonnerup (2003), was used to suppress numerical instabilities and hence, to extend the computation domain in the y direction.The spatial extent in the x direction corresponds to the analysis interval marked in Fig. 1.The spacecraft were moving to the right, as shown in the upper reconstruction map.C2, C3, and C4 were located away from C1 by −1468 km, +369 km, and +27 km, respectively, in the out-of-plane (z) direction.The contours show the transverse magnetic field lines, B t =B x x+B y ŷ, separated by equal flux; color filled contours show the B z (upper panel) and p (lower panel) distributions, as specified by the color bars.The white arrows along the spacecraft trajectories in the upper panel show the measured magnetic field vectors, projected onto the x-y plane.The recovered field lines are exactly parallel to these vectors at C1 and also approximately parallel at the locations of the other spacecraft (C2, C3, and C4).The magnetosheath is located on the upper left side, where B x <0 and B y <0, while the magnetosphere is on the lower right side, where B x >0 and B y >0.The magnetopause encountered is found to be a thin, markedly nonplanar current layer of the TD-type.The presence of an X point is evident at (x, y) ≈(13 500.0)km, resulting in a small number of interconnected field lines, embedded in the TD, and presumably in a localized thickening of the current sheet on the right of the X point.The red arrows show the normal vectors computed from MVABC, based on each spacecraft measurement, and projected onto the x-y plane.These normal directions are qualitatively consistent with the overall orientation of the recovered magnetopause surface but individual normals can deviate substantially from the local orientation (for example, see the normal for C2).The lower panel shows that the plasma pressure had a maximum in the central current layer.The white arrows in this panel represent the projection of the flow velocity vectors, as seen in the time-dependent HT frame.With a few exceptions, the vectors are approximately field-aligned for all three spacecraft, as they ideally should be.The velocities in the co-moving frame are not very large on the magnetosheath side, whereas they have substantial values on the magnetosphere side.Thus the HT frame moves approximately with the magnetosheath flow.However, the larger speeds in the magnetosphere contribute little to inertia forces because the corresponding streamlines, which ideally would coincide with the field lines, have no significant curvature.Figure 4a shows the comparison between the time series of measured magnitude and the three components along the reconstruction coordinates of the magnetic field, and the corresponding values computed from the map recovered from C1.The predicted values were obtained along the trajectories of C2, C3, and C4 in Fig. 3.The time scale of the panel is expanded, relative to Fig. 
1, to cover only the reconstructed range.We see that the recovered variations agree qualitatively with the measured variations for almost the whole interval.The recovered values predict the timings of the magnetopause crossings at the other spacecraft rather well, although the durations of the current layer traversals have small differences.Figure 4b illustrates that a very good correlation exists between the measured and predicted magnetic field values for C2, C3, and C4: the correlation coefficient is cc = 0.979.This result indicates that the reconstruction technique based on C1 data is rather successful in predicting quantitatively reasonable values at the locations of the other three spacecraft. The magnitude of this correlation coefficient can be used as a measure for judging whether or not the orientation of the invariant z-axis, the co-moving (HT) frame velocity, and the extrapolating exponential functions in the P t versus A plot, are adequately selected.In fact, the optimal invariant axis, the HT frame, and the functional form P t (A) for the extrapolated parts are all determined in such a way that the correlation between measured and predicted magnetic field data is at, or near, a maximum.The steps we have used for optimal selection of the invariant axis, the HT frame, and the extrapolating functions are presented in Appendix A. For the reconstruction in Fig. 3, the results were found to improve by use of the sliding-window HT method, suggesting that the whole set of magnetopause structures was approximately time-independent but was moving with small acceleration.The extended reconstruction technique, developed by Hu and Sonnerup (2003), was shown to be useful for this case.The orientation of the selected invariant axis (z-axis in Fig. 3) corresponds to the angles (defined in Appendix A) θ =−1 • and φ=6 • , i.e. it was rotated away from the intermediate variance direction by ∼6 • , with the axis of rotation mainly being the maximum variance direction (see Appendix B for a method to determine the intermediate and maximum variance directions under the constraint B n =0). Reconstruction from spacecraft 3 crossing The reconstruction technique is now applied to the magnetopause traversal by C3 which crossed the boundary ∼20 s earlier than C1 did, using the data interval denoted by two vertical green lines in Fig. 1.The MVABC and HT analysis yield: n=(0.2117,−0.9608, 0.1791); a constant HT velocity, V H T =(−269.4,−98.3, −14.8) km/s, from the interval 18:11:41-18:12:38 UT.The correlation coefficient is cc H T =0.9598, and V H T •n=+34.7 km/s.The Walén slope is 0.3689, indicating again that inertia effects due to fieldaligned flow were reasonably small.As before, these results are consistent with the spacecraft crossing an outward moving magnetopause of the TD-type. 
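The axis-selection procedure described above (and detailed in the paper's Appendix A) amounts to scanning trial invariant axes and keeping the one that maximizes the correlation between predicted and measured field components at the other spacecraft. A schematic Python version is given below; reconstruct_map() and predict_at() are placeholders for the reconstruction and map-sampling steps (passed in as callables, not any published library), and the scan range is illustrative only.

```python
import itertools
import numpy as np

def rotate(vec, axis, deg):
    """Rodrigues rotation of vec about the unit vector axis by deg degrees."""
    a = np.radians(deg)
    k = np.asarray(axis, float) / np.linalg.norm(axis)
    v = np.asarray(vec, float)
    return v * np.cos(a) + np.cross(k, v) * np.sin(a) + k * np.dot(k, v) * (1 - np.cos(a))

def best_invariant_axis(L, M, N, reconstruct_map, predict_at, c1_data, others):
    """L, M, N: maximum, intermediate, minimum variance directions (unit vectors).
    others: list of (trajectory, measured B components) for the remaining spacecraft."""
    best_cc, best_z = -1.0, np.asarray(M, float)
    for theta, phi in itertools.product(range(-30, 31, 2), range(-30, 31, 2)):
        # theta: rotation about the minimum variance axis; phi: about the maximum
        z_trial = rotate(rotate(M, N, theta), L, phi)
        fmap = reconstruct_map(c1_data, z_axis=z_trial)
        b_pred = np.concatenate([predict_at(fmap, traj) for traj, _ in others])
        b_meas = np.concatenate([b for _, b in others])
        cc = np.corrcoef(b_pred, b_meas)[0, 1]
        if cc > best_cc:
            best_cc, best_z = cc, z_trial
    return best_cc, best_z
```

The shape of the resulting cc(θ, φ) surface is what Fig. 7 visualizes: steep in φ, shallow in θ.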
For this case, the reconstruction, using neither the standard (constant HT frame speed) nor the sliding-window HT analysis, led to a satisfactory correlation between the predicted and measured magnetic field components.These results, and also the fact that the HT frame was less well determined (cc H T =0.9598), suggest that the motion of the structures varied rapidly and by significant amounts along the C3 trajectory.Therefore, before the reconstruction was performed, we modified the y component (in the reconstruction plane) of the HT velocity vectors computed from the slidingwindow HT method, such that the remaining velocity vectors became completely parallel to the local magnetic field measured along the spacecraft (C3) trajectory in the reconstruction plane.The obtained P t (A) profile (not shown) is qualitatively similar to that for the C1 reconstruction: P t (A) has two branches and has smaller values at smaller A, but it generally increases with A and the two branches merge in the largest A range.The magnetic field and pressure maps thus recovered are shown in Fig. 5.The trajectories of the spacecraft are more strongly bent than in Fig. 3, because of the substantial modification of the y component of V H T needed at certain points.By definition, the alignment between the flow vectors and the transverse field lines is now fulfilled for C3.The invariant axis is found to be z=(0.6261,−0.0246, −0.7794) in GSE which is obtained by rotating the intermediate variance (M) axis, computed from the C1 data, by θ =3 • and φ=2 • .Thus, this orientation has an angle of 5.6 • with respect to the invariant axis used in Fig. 3, indicating that the two axes are not far away from one another.As in the previous case (Fig. 3), a qualitative agreement of the normal vectors from MVABC with the orientation of the recovered magnetopause is seen in Fig. 5. Interestingly, the global shape of the magnetopause surface is similar in the two maps -the one from C1 (Fig. 3) and the one from C3 (Fig. 5).The X point, seen in Fig. 3 at x=13 500 km, seems to be equivalent to the one found at (x, y)≈(11 500.0)km in Fig. 5, although its location in y is displaced: it is between the C1 and C2 trajectories in Fig. 3, and between C2 and C3 in Fig. 5.If our interpretation is correct, the migration distance of the X point of about 2000 km during the ∼20-s time interval between the C3 and C1 crossings gives a sunward speed of the X point of about 100 km/s in the reconstruction plane.However, that plane was moving downtail at speed V H T • x≈230 km/s.Therefore, relative to Earth, the X point was sliding tailward at some 130 km/s.The presence of the bulge in the magnetopause seen in Fig. 3 but absent in Fig. 5, may indicate a minor time evolution: it may have been produced as a result of ongoing reconnection activity at the X point.The current layer thickness appears to be somewhat different.A small magnetic island located at (x, y)≈(9500.0)km in Fig. 5, where both the B z and the plasma pressure reach maximum values, is not found in Fig. 3.These differences in fine structures are due to the fact that the profile of the function P t (A) was quantitatively different, in and near the current sheet, for C1 and C3 (not shown), i.e. it may have been different on opposite sides of the dominant X point.The structures in the current layer on the left side of the X point in Fig. 5, where the field lines were not encountered by C1, thus may not have been recovered correctly in Fig. 3. 
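As a quick check on the X-point kinematics quoted above, the estimate is simply the displacement between the two maps divided by the time between crossings, corrected for the tailward convection of the reconstruction plane itself; the snippet below uses the paper's rounded values.

```python
# X-point displacement between the C3 and C1 maps and the time between crossings
dx_km, dt_s = 2000.0, 20.0
v_sunward_in_plane = dx_km / dt_s             # ~100 km/s, sunward within the moving plane
v_plane_tailward = 230.0                      # km/s, ~ V_HT . xhat for this event
v_xpoint_tailward = v_plane_tailward - v_sunward_in_plane
print(v_sunward_in_plane, v_xpoint_tailward)  # -> 100.0 130.0 (km/s, Earth frame)
```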
The time series of the measured magnetic field magnitude and components and the corresponding predicted values shown in Fig. 6a indicate that the reconstruction results predict both the timings and durations of the current layer crossing very well.In Fig. 5, C1, C2, and C4 were separated from C3 by −325 km, −1848 km, and −480 km, respectively, in the z direction.It is noteworthy that the predicted and measured variations are quite similar even for C2, whose z position was farthest from C3, supporting the conclusion that the approximate invariance along the selected invariant (z)axis held over the spatial scale of at least 2000 km. Figure 6b shows that an excellent correlation (cc=0.980) between the measured and predicted field components is attained for this case, demonstrating that the technique succeeds in predicting the conditions in regions surrounding the spacecraft trajectory with reasonable accuracy. Orientation of invariant axis Figure 7 shows the dependence of the correlation between the measured and predicted field components on the choice of the invariant (z)-axis.θ and φ are the angles described in Appendix A: (θ , φ)=(−90, 0), (0, 0), and (0, 90) correspond to the maximum, intermediate, and minimum variance directions, respectively, for the C1 magnetopause crossing.The intermediate variance direction computed from the C3 data is oriented toward (θ , φ)≈(2, −1).The correlation coefficient is shown by the darkness of the grey points.The thick cross represents the orientation of the optimal invariant axis used in Figs. 3 and 5.The spaces in the diagram where no points are shown correspond to axis orientations which an unrealistic field map is recovered, either due to an unreasonable profile in the P t versus A plot, or to the correlation coefficient being smaller than 0.93.The optimal invariant axis is found to be relatively close to the intermediate variance axis, for both the C1 and the C3 reconstructions.The correlation coefficient is sensitive to changes in φ (rotation about the maximum variance direction) but less sensitive to changes in θ (rotation about the minimum variance direction), for both cases.In other words, the magnetic field configuration in the reconstructed map is strongly modified by changes in φ while it is only weakly sensitive to changes in θ , the latter result being the finding also reported by Hau and Sonnerup (1999) and Hu and Sonnerup (2003). Summary of 30 June 2001 event Intercomparison of the two reconstructed maps (Figs. 3 and 5) demonstrates that the magnetopause encountered in this event was a quasi-static, TD-type current layer, for which the model assumptions appear to be well justified.Similarities of the orientation of the invariant axis, current sheet thickness, and the overall magnetopause structures among the results from C1 and C3 data indicate that mainly two-dimensional structures were present, with superimposed weak threedimensionality and temporal variations.The dominant X point in the two maps appears to be a real feature, moving tailward, relative to Earth, at about 130 km/s.The associated magnetic topology allows for easy access of the magnetosheath plasma to the inner portion of the magnetopause layer, by means of field-aligned flow on the two sides of the X. Background information The second event is a crossing from the magnetosphere to the magnetosheath occurring on 5 July 2001, around 06:23 UT, when C3 was located at ∼(−6.78, −14.97, 6.24) R E in GSE.This event has also been investigated in detail by Haaland et al. 
(2004), with the objective of comparing singleand multi-spacecraft determinations of magnetopause orientation, speed, and thickness.Time plots of number density, temperature, magnitude and three GSE components of the magnetic field, and bulk flow speed are shown in Fig. 8. Compared to the 30 June event, the spacecraft resided in a higher-latitude part of the plasma sheet before the crossing, as is inferred from the fact that both the magnitude and the x component of the magnetic field were more intense and the temperature was lower than in the 30 June event.The spacecraft traversals of the magnetopause took place in the time order C4, C1, C2, and C3, i.e. opposite to the order in the previous event.We see that the duration of the current layer traversal was relatively short for C4 and C1, whereas it was longer for C2 and C3.The local magnetosheath magnetic field was tailward/dawnward/southward, as in the previous event. Reconstruction from spacecraft 1 crossing The MVABC and HT analysis for the interval 06:23:03-06:23:44 UT yield (all vectors are in GSE): the magnetopause normal vector, n=(0.6098,−0.7862, 0.0999); the constant HT frame velocity, V H T =(−248.6,−102.5, 68.6) km/s with the correlation coefficient, cc H T =0.9660 (These two vectors are very close to, but not identical to those reported in Haaland et al. (2004)).The usage of the constraint B n =0 might be questionable for this event, since, as shown later, the Walén relation is relatively well satisfied, i.e. the boundary may be of the rotational discontinuity-type.Nonetheless, we use the constraint because the orientation of the normal with, rather than without, the constraint is more consistent with those from various other methods (Haaland et al., 2004).Also the result without the constraint leads to an unlikely large B n value.The normal component of the HT velocity is negative (V H T •n=−64.1 km/s), consistent with an outbound crossing of the magnetopause.The field map recovered from the C1 measurements during the time interval of 06:22:11 to 06:24:20 UT is shown in Fig. 9.For this event, the constant HT velocity is used for the reconstruction, because effects other than the kinematic effects of HT frame acceleration could be substantial, as will be shown later in this section.The optimal invariant axis is found to be z=(0.6066,0.3061, −0.7337) (GSE) and C2, C3, and C4 were displaced from C1 by −1219 km, +935 km, and −570 km, respectively, in the z direction.In the reconstruction plane, the magnetosphere (B x >0; B y <0) is on the lower left side and the magnetosheath (B x <0; B y >0) on the upper right side.The magnetopause appears to be a slightly bent TD-like structure.Thinning of the current sheet locally at (x, y) ≈(9000, 1000) km implies the presence of an X point at this location.The flow velocities remaining in the HT frame are shown by the white arrows.They are negligibly small on the magnetosheath side, indicating that, as before, the HT frame is strongly anchored in the magnetosheath plasma.Near the magnetopause on its magnetospheric side, the flow directions in the C1 and C3 crossings are consistent with the recovered field configuration.The yellow arrows, representing the normal vectors determined from MVABC for each spacecraft measurement are approximately perpendicular to the recovered magnetopause surface. 
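The quantities quoted in this and the preceding sections (the HT velocity V_HT, its quality measure cc_HT, and the Walén regression slope used below) follow from standard single-spacecraft analyses. The sketch below shows one way to compute them with NumPy; the array names, the SI units, and the pure-proton assumption for the mass density are ours, and the fit conventions may differ in detail from those used by the authors.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi  # permeability of free space, SI units

def ht_velocity(v, b):
    """Constant deHoffmann-Teller (HT) frame velocity from plasma velocities v
    and magnetic fields b (both arrays of shape (N, 3), SI units), obtained by
    minimizing sum_i |(v_i - V_HT) x B_i|^2."""
    K_sum = np.zeros((3, 3))
    Kv_sum = np.zeros(3)
    for vi, bi in zip(v, b):
        Ki = np.dot(bi, bi) * np.eye(3) - np.outer(bi, bi)
        K_sum += Ki
        Kv_sum += Ki @ vi
    return np.linalg.solve(K_sum, Kv_sum)

def ht_quality(v, b, v_ht):
    """Correlation coefficient cc_HT between the components of the measured
    convection electric field -v x B and the HT field -V_HT x B."""
    e_measured = -np.cross(v, b)
    e_ht = -np.cross(v_ht, b)
    return np.corrcoef(e_measured.ravel(), e_ht.ravel())[0, 1]

def walen_slope(v, b, v_ht, number_density, ion_mass=1.6726e-27):
    """Slope of the regression of HT-frame velocity components on the
    corresponding Alfven velocity components (the Walen test).
    number_density is in m^-3; a pure proton plasma is assumed here."""
    rho = number_density * ion_mass
    v_alfven = b / np.sqrt(MU0 * rho)[:, None]
    x = v_alfven.ravel()
    y = (v - v_ht).ravel()
    slope, _intercept = np.polyfit(x, y, 1)
    return slope
```

A Walén slope near +1, as found later for the C3 crossing, is the signature of a rotational-discontinuity-like boundary, whereas a small slope is consistent with a tangential discontinuity.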
Figure 10 shows the result of the Walén test across the C1 magnetopause crossing, in which GSE velocity components in the HT frame are plotted against the corresponding components of the Alfvén velocity.The regression line has a significant positive slope (slope=0.568),suggesting that some reconnection activity could have been present.The positive slope means that the plasma was flowing parallel to the field, which has a small negative normal component, B n , at the location of C1.In other words, one may infer that plasma was flowing earthward across the magnetopause, albeit at considerably less than Alfvénic speeds.These results are consistent with the reconstructed field map, although the C1 velocity vectors shown in the map do not show clear direct evidence for such an earthward flow component.In Fig. 11 the correlation between the field components measured by C2, C3, and C4 and the corresponding components predicted from the reconstruction map (Fig. 9) are shown.The correlation is slightly lower than in the previous event but it remains high, demonstrating that the reconstruction technique works well also for this case.A few outlying points from the B x component of the C4 data result from a small error in the predicted time of the crossing by C4. Reconstruction from spacecraft 3 crossing The MVABC and HT analysis for the interval 06:23:32-06:24:49 UT yield: n=(0.5959,−0.8000, 0.0704); the constant HT frame velocity, V H T =(−236.0,−94.5, 125.4) km/s with the correlation coefficient, cc H T =0.9512; and V H T •n=−56.2km/s.The GSE z component of the HT velocity is substantially different from that computed for the C1 traversal; a possible explanation will be mentioned later.The normal motion of the magnetopause is negative, i.e. earthward, as required, although Haaland et al. (2004) have shown from Minimum Faraday Residue (MFR) analysis that the inward magnetopause speed was only some 43 km/s.The difference between this number and 56.2 km/s indicates the presence of an inward flow of plasma across the magnetopause.The magnetic field map reconstructed for this C3 magnetopause crossing, using the data from 06:22:00 to 06:25:21 UT, is shown in Fig. 12.The selected optimal invariant axis is z=(0.6997,0.3727, −0.6096) (GSE), which is tilted from the invariant axis used in Fig. 9 by 9.7 • .A significant amount of field lines that connect the magnetosheath and magnetospheric sides of the magnetopause is seen in the map.A prominent X point in the transverse field at (x, y)≈ (4000, 3000) km looks more like a Y point.The HT frame is no longer anchored in the magnetosheath plasma, i.e. the flow vectors have substantial field-aligned components in the magnetosheath.This behavior is suggestive of ongoing reconnection.Note that the plasma is flowing across the magnetopause in the direction parallel to the magnetic field in the open-field channel between the X point and the center of a bulge in the current layer, located at (x, y)≈ (7500, 0) km, indicating that the magnetosheath plasma enters the magnetosphere along the reconnected field lines.The flow vectors have significant downward and rightward components at the bulge center, implying that in the reconstruction frame the reconnected flux tubes were moving in this direction.Notice that the spatial dimension of the map in the x direction is smaller than in Fig. 9, in spite of the longer data analysis interval (see Fig. 
8).This is due to a smaller HT frame speed along the x-axis for the C3 traversal, caused by the frame motion being better anchored in the reconnected field lines than for the C1 traversal. The Walén plot for the C3 crossing is shown in Fig. 13.The flow speed in the HT frame is almost 100% of the Alfvén speed, in excellent agreement with the expectation from a one-dimensional RD.For earthward plasma flow across the magnetopause, the positive slope of the regression line implies that the normal magnetic field also points inward.This is consistent with the field map and with reconnection occurring tailward of the spacecraft.As in the 30 June event, the reconnection site is moving relative to Earth with a tailward velocity component. Comparison of the two magnetic field maps for this event (Figs. 9 and 12) shows that there was dramatic evolution of the configuration during the 30-s time interval between the traversals by C1 and C3.At the moment when C1 crossed the current layer, there was incipient reconnection, as suggested by the corresponding Walén plot (Fig. 10).On the other hand, it is clear that when C3 crossed the magnetopause, the reconnection was fully developed and had resulted in the formation of a wide channel of interconnected field lines.The full-blown reconnection caused a localized thickening of the magnetopause current layer in the region traversed by C3 and an associated longer duration of this crossing (see Fig. 8).The crossing by C2 also had a long duration, which may, however, have been the result, at least in part, of a smaller magnetopause speed (Haaland et al., 2004).Such changes in speed are not accommodated by the map, which is based on a constant HT velocity. Figure 14 shows the same type of correlation plot as Fig. 11, except that the predicted values are based on the map shown in Fig. 12.The correlation (cc=0.975)suggests that the technique predicts conditions at the other three spacecraft locations fairly well.This result may be surprising since the map is derived under the assumption that inertia forces are small, which is not the case near the bulge where the streamlines have strong curvature and where the flow speed in the HT frame is comparable to the Alfvén speed (Fig. 13).The assumption that the time dependence of the structures is negligible is also not valid for this event, as is evident from a comparison of the maps in Figs. 9 and 12.Nevertheless, our results indicate that the maps recovered from C1 and C3 are at least qualitatively correct. Orientation of invariant axis In Fig. 15 the dependence of the correlation between the measured and predicted field components on the choice of the invariant (z)-axis for the 5 July 2001, event is shown.In these coordinates, the intermediate variance direction from MVABC is oriented at (θ , φ)=(0, 0) for the C1 crossing, while it is at (θ , φ)≈(2, −1) for the C3 crossing.We also see in this event that the optimal invariant axis is not far from the intermediate variance direction for both reconstructions. As in the previous event, the reconstruction from the C1 data produces a correlation coefficient that depends strongly on φ but only weakly on θ .This behavior may be understood in the following way.The three reconstructions in Figs. 
3, 5, and 9 exhibit a magnetopause current layer that is modestly tilted with respect to the x-axis in the reconstruction plane.Hence, rotations around the minimum variance axis (changes in θ ) change the B y component in the reconstruction plane only weakly, and do not have a significant influence on average profiles of A calculated along the spacecraft trajectory.It follows that the behavior of the function P t (A) and hence, the reconstruction result, have only a modest dependence on θ .On the other hand, rotations around the maximum variance axis, i.e. changes in φ, cause significant changes in B y and therefore, a strong dependence on φ. In contrast, we find the correlation coefficient to be sensitive to variations in both θ and φ for the C3 reconstruction on 5 July (see Fig. 12).This behavior may be related to features that were not seen for the other three cases: The magnetopause crossed by C3 was tilted more steeply, relative to the x-axis, it was of the RD-type, and it had a fairly large-scale 2-D structure, namely the reconnection-associated bulge.A study of more cases is required to determine which of these factors affect the sensitivity of the correlation to variations of the orientation of the z-axis. Summary of 5 July 2001 event Substantial differences in the two recovered maps indicate that the magnetic field configuration evolved dramatically in the ≈30-s interval between the magnetopause crossings by C1 and C3.At the time of the crossing by C1, the magnetopause was basically a TD-type current layer but with a small amount of interconnected field lines embedded.The boundary crossed by C3 had a much thicker current layer of RD-type.The presence of a single dominant X point and an associated reconnection layer is evident in the field map recovered for the C3 crossing, indicating that reconnection had been developing locally in a time period less than 30 s.Although the model assumptions of time invariance and of negligible inertia forces are violated in the event, the bulge in the magnetopause, containing reconnected field lines in the C3 map, was found to be a persistent feature in our various reconstruction attempts.For this reason, we believe the C3 map to be at least qualitatively correct. Summary and discussion In this paper, we have applied the technique for recovering 2-D magnetohydrostatic structures from single-spacecraft data to two magnetopause crossings by the four Cluster spacecraft, occurring when they were separated by about two thousand km from each other.In summary, the following results have been obtained. 1.An optimal invariant (z)-axis can be found in such a way that the correlation between the magnetic field components predicted from the reconstruction map for one spacecraft and the corresponding components measured by the other three is at, or near, a maximum, with the proviso that the measured velocity vectors, transformed into the co-moving (HT) frame, become nearly field-aligned in the field map (see Appendix A).The orientation of the invariant axis thus selected is relatively close to the intermediate variance direction determined by MVABC.The invariant axis is generally well determined with respect to rotations around the maximum variance axis but less well with respect to rotations around the minimum variance axis. 2. 
Two complete magnetopause crossings, occurring on 30 June 2001 and 5 July 2001, have been examined.For each of the two events, two reconstruction maps have been produced, one based on the data from C1 and a second based on the data from C3.For an optimally selected invariant (z)-axis and HT frame velocity, the correlation coefficient between the predicted and measured field components exceeds 0.97 in all four cases.The result demonstrates that the reconstruction technique is capable of predicting field behavior at distances up to a few thousand km away from the spacecraft used for the reconstruction. 3. The reconstruction method incorporating the slidingwindow HT analysis that takes into account time-varying motions of the HT frame, as described by Hu and Sonnerup (2003), was successfully applied to the 30 June 2001 event.This result suggests that, over a spatial scale of a few thousand km, the entire portion of the magnetopause shown in a map was approximately time-stationary but was moving in a time-dependent way.Localized motions of the magnetopause were small.4. Intercomparison of the two field maps obtained for the 30 June 2001 event shows that the overall magnetopause structures were similar in the two maps, having a current layer of TD-type.It appears that the assumptions of local two dimensionality and time coherence were well satisfied for the magnetopause encountered on this day.The reconstructed field structures show a current layer significantly bent on spatial scales of a few thousand km, demonstrating that the magnetopause cannot always be treated as a planar structure during a Cluster encounter.Haaland et al. (2004) have shown that even modest deviations from the planar geometry can lead to difficulties with various multi-spacecraft techniques for predicting the magnetopause velocity. 5. In the 5 July 2001, event, time evolution is clear from comparison of two field maps recovered individually from C1 and C3, which crossed the magnetopause at different moments.Evidence consistent with reconnection developing locally in the magnetopause current layer over a time interval of 30 s or less has been found.The map recovered for C3 shows a rather thick current layer with a dominant X point and interconnected flux tubes embedded, allowing for an efficient access of the magnetosheath plasma into the magnetosphere, while the map for C1, which spacecraft crossed the boundary ∼30 s earlier than C3 did, shows a thin TD-type current sheet within which a much smaller amount of interconnected field lines is present. 6. Density ramps at the magnetopause occurred in the earthmost half of the current layer in both events (see Figs. 
1 and 8).This behavior is consistent with the recovered field maps which show a dominant X point and associated flux tubes that connect the outer and inner parts of the magnetopause transition layer.The interconnection permits effective transport of magnetosheath plasma into most of the current layer, via field-aligned flow.The ramps were located in the inner half of the current sheet, for the C1 traversal in the 30 June 2001 event and for the C3 traversal in the 5 July 2001 event, whereas they were closer to the center of the current sheet, for the C3 traversal in the June 30 event, and for the C1 traversal in the 5 July event.This can be explained by the temporal evolutions seen in the maps: for both events, the layer consisting of the interconnected field lines had been thickened during the interval between the C1 and C3 traversals.In neither event is there any evidence of a low-latitude boundary layer, containing magnetosheath-like plasma, earthward of, but adjoining, the magnetopause. 7. Our experiments have shown that the optimal invariant (z)-axis is not far away from the intermediate variance direction for the cases examined, but also that a modest rotation of the trial z-axis around the maximum variance direction is critical for optimization of the map.This could be related to the fact that the ratio of intermediate to minimum eigenvalues is often not very large, resulting in significant uncertainties in the determination of both minimum and intermediate variance directions (Sonnerup and Scheible, 1998).Proximity of the invariant axis to the intermediate variance direction suggests that MVABC can provide a rough estimation of the orientation of the axis of two-dimensional structures and hence, of X lines, etc. 8. The orientation of the optimal invariant axis is not very different for the two events, which were at positions nottoo-distant from one another: The angle between the invariant axes for the C1 crossings is ≈25 • .It is also noted that the orientation of the magnetic field outside of the magnetopause was relatively similar among the two events: The angle between the magnetosheath field directions for the 30 June 2001 and 5 July 2001 events is 15 • .This result suggests that, at a chosen location on the magnetopause surface, the orientation of the reconnection lines is similar for similar IMF directions.This topic and also the question of how the orientation of the X lines depends on the solar wind conditions are important subjects to be pursued in future work by applying the reconstruction method to more events.9.Both events occurred on the tail flank magnetopause, on the dawn side.The signatures of the RD-type current layer, found for the 5 July 2001 event in both the reconstruction result and the Walén test for C3, suggest that reconnection can occur at the dawn tail magnetopause, consistent with the conclusion reached by Phan et al. (2001).But the local magnetic shear for the 5 July 2001, event was not very high (101 • ), in contrast with the reconnection events reported by Gosling et al. (1986) and Phan et al. (2001).Those events also occurred at the tail flank magnetopause but under almost antiparallel field conditions.The present event is consistent with the finding that occurrence of reconnection on the tail surface is not rare even for relatively modest magnetic shears (Hirahara et al., 1997;Hasegawa, 2002;Hasegawa et al., 2004).It is noted in Fig. 
12 that a clear X point and significant out-of-plane magnetic field components are found within the reconstructed domain, demonstrating that component merging was occurring.Our results for the 5 July 2001 event also indicate that the reconnection site was not stationary relative to Earth but was moving both downstream and toward higher latitudes. 10.Although a qualitatively consistent field map was obtained for the C3 crossing on 5 July 2001, the fact that the Walén slope was close to one (Fig. 13) indicates that inertia forces must have played an important role in the tangential stress balance in the reconnection layer.Incorporation of inertia effects into the reconstruction technique is not simple but is necessary for accurate modeling of magnetopause structures during significant reconnection activity, as on 5 July 2001.If such effects could be accurately taken into account, the recovered field map might show significant quantitative deviations from the one shown in Fig. 12, at least near the reconnection site.Even so, we expect the map in Fig. 12 to be qualitatively correct.Development and testing of a technique that incorporates inertia effects will be addressed in a future study. 11.The present work has made it clear that the past onedimensional (1-D) local view of the magnetopause is not adequate.The constrained normal, n, from MVABC appears to represent the average magnetopause orientation relatively well, but the reconstructed maps show that the local orientations can deviate from the average.Mesoscale 2-D structures seen in the magnetopause current layer and their dependence on parameters on the two sides will provide insights into how reconnection operates and how the mesoscale phenomena are controlled by the plasma parameter regime.These problems will be dealt with in a future statistical study. of the spatial scale of the reconstruction domain in the x direction and for computation of the magnetic vector potential, A, is determined.Under the assumption of time independence of the structures, acceleration or/and rotation of the HT frame is allowed, but as a first step we simply use a constant HT frame velocity obtained from C1 for the interval 18:12:00-18:12:49 UT. 2. We define L, M, and N axes as the maximum, intermediate, and minimum variance directions, respectively, which are determined from MVABC (Appendix B) and are ordered as a right-handed orthogonal coordinate system with N pointing outward.An optimal invariant axis is searched for by rotating the trial z-axis by trial and error, starting from the intermediate variance (M) direction, such that the correlation coefficient in Fig. 4b, between the measured and predicted field components, reaches a higher value.First the initial invariant axis (the M axis) is rotated in the plane perpendicular to N by an angle θ from the M direction.New axes M and L are determined after this rotation.The trial invariant (M ) axis thus obtained is then rotated by φ in the plane perpendicular to L , resulting in a new invariant axis M , which is used for a trial reconstruction.The positive signs of θ and φ are defined according to the right-hand rule.A certain number of candidate orientations, M , for which the correlation coefficient is sufficiently high, are chosen by surveying the two angles, θ and φ.In principle, any coordinate system may be used for this survey process.In this study, we use the coordinate system based on the results from MVABC. 3. As shown in Fig. 
7, the angular domain in which the correlation coefficient exceeds a certain value is belt-like and there is an uncertainty in the determination of an optimal θ value.Therefore, two further criteria are used to select the best invariant axis from the candidate orientations, one based on the functional behavior of P t (A), p(A), and B z (A), the other based on the alignment between the remaining velocity vectors in the HT frame and magnetic field lines, when visually inspected in the recovered map.For some of the candidates, the quantities P t , p, or B z have two or more significantly different values for certain A values near the center of the current sheet, i.e. near the maximum of P t and A in the P t versus A plot (see Fig. 2), meaning that they vary substantially on the same field line and thus, that the model assumptions are violated.For other cases, the velocity vectors in the HT frame of C1, measured by the spacecraft not used for the reconstruction (C3 and C4), have non negligible components in the direction perpendicular to the reconstructed magnetic field lines.This feature suggests that the recovered field may not be reasonable.The best orientation of the invariance (z) is determined by considering these features. 4. In a second cycle of trial and error, the reconstruction is tested by incorporating the sliding-window HT technique (Hu and Sonnerup, 2003), which allows for the acceleration of the HT frame.This step is taken unless a very nearly timeindependent HT frame is found, that is unless the correlation coefficient between components of −V ×B and −V H T ×B for the analysis interval is extremely good.Note that the ap-plication of this method assumes that the entire structure encountered by the four spacecraft moves together with a timevarying HT velocity.Inertia effects associated with the acceleration of the HT frame are assumed to be negligible.The optimal invariant axis can then be selected in the same way as in steps 2 and 3.If a better correlation is obtained than for the constant HT velocity case, the result obtained by using the time-varying HT velocity is adopted as the optimal one.Otherwise, the result with the constant HT velocity is selected. 5. If a less than satisfactory correlation is obtained for both the constant and the time-varying HT velocity cases, a modified method to compute the HT velocity that results in larger acceleration of the HT frame, described in Sect.2.3, can be tested to improve the result. 6. Our experience indicates that the correlation coefficient depends relatively strongly on the choice of both the invariant axis and the HT frame velocity, but only weakly on the behavior of the extrapolating exponential functions in the P t versus A plot.Therefore, these exponential functions are adjusted after the above steps are finished.The above behavior is reasonable, since the extrapolating functions only modify magnetic field values in regions far from the current sheet, but have no effect on the shape of the current sheet.The correlation coefficient seems most sensitive to how well the timing of the magnetopause crossings is predicted. 
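Steps 2 and 3 above describe a trial-and-error survey of candidate invariant axes built from the MVABC eigenvector basis. A minimal way to organize that survey as a grid scan over the two rotation angles is sketched below; the angle ranges, the function predict_field (a placeholder standing in for the full reconstruction-and-prediction step), and the array b_measured of field components measured by the other spacecraft are our assumptions, not part of the published procedure.

```python
import numpy as np

def rotate_about(v, axis, angle_deg):
    """Rotate vector v about the unit vector 'axis' by angle_deg
    (right-hand rule), using Rodrigues' rotation formula."""
    a = np.radians(angle_deg)
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(a) + np.cross(axis, v) * np.sin(a)
            + axis * np.dot(axis, v) * (1.0 - np.cos(a)))

def trial_invariant_axis(L, M, N, theta_deg, phi_deg):
    """Candidate invariant axis built from the MVABC basis (L, M, N):
    rotate M by theta about N, then by phi about the similarly rotated L."""
    M1 = rotate_about(M, N, theta_deg)
    L1 = rotate_about(L, N, theta_deg)
    M2 = rotate_about(M1, L1, phi_deg)
    return M2 / np.linalg.norm(M2)

def scan_axes(L, M, N, predict_field, b_measured,
              thetas=range(-30, 31, 2), phis=range(-20, 21, 2)):
    """Grid scan over (theta, phi); predict_field(z_axis) must return field
    components predicted at the other spacecraft, with the same shape as
    b_measured, for the trial invariant axis z_axis."""
    best_axis, best_cc = None, -np.inf
    for th in thetas:
        for ph in phis:
            z = trial_invariant_axis(L, M, N, th, ph)
            cc = np.corrcoef(predict_field(z).ravel(),
                             b_measured.ravel())[0, 1]
            if cc > best_cc:
                best_axis, best_cc = (th, ph, z), cc
    return best_axis, best_cc
```

The additional screening in step 3, against unphysical P_t(A), p(A), or B_z(A) behavior and against flow vectors that cross the recovered field lines, would then be applied to the surviving high-correlation candidates by inspection.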
Appendix B Intermediate and maximum variance axes with the constraint B_n = 0

Methods for determining the vector normal to the magnetopause with the constraint B_n = 0 were given by Sonnerup and Scheible (1998). Here we describe a method to determine the intermediate and maximum variance directions under this constraint. By using a constraint of the form n · ê = 0, where ê is a known unit vector (here chosen as the normal vector from MVABC, n_Bn=0), the eigenvalue problem can be written as

(P · M^B · P) · n = λ n,   (B1)

where M^B is the magnetic variance matrix, M^B_ij ≡ ⟨B_i B_j⟩ − ⟨B_i⟩⟨B_j⟩, and P is the matrix describing the projection of a vector onto the plane perpendicular to ê, i.e. P_ij = δ_ij − e_i e_j. By putting n = ê in the eigenvalue Eq. (B1), it is seen that ê is an eigenvector corresponding to λ = 0. The other two eigenvalues are denoted by λ_min and λ_max. The eigenvectors corresponding to λ_min and λ_max represent the minimum and maximum variance directions, respectively, in the plane perpendicular to ê. Thus, for ê = n_Bn=0, the eigenvectors for λ_min and λ_max represent the intermediate and maximum variance directions, respectively, under the constraint B_n = 0. If one puts ê = ⟨B⟩/|⟨B⟩| instead, the eigenvector corresponding to λ_min is the normal vector from MVABC (Sonnerup and Scheible, 1998).

Fig. 1. Time series of Cluster measurements around a magnetopause crossing event occurring at (−7.89, −17.11, 3.25) R_E in GSE on 30 June 2001. The panels, from top to bottom, show ion number density, ion temperature, intensity and three components of the magnetic field in GSE coordinates, and ion bulk speed, respectively (black: spacecraft 1 (C1), red: C2, green: C3, blue: C4). The interval enclosed by the two black vertical lines is used in the reconstruction based on C1 data, while that enclosed by the green lines is used in the reconstruction based on C3 data.

Fig. 2. Plot of transverse pressure P_t versus computed vector potential A and the fitting curves for the C1 magnetopause crossing on 30 June 2001. The circles and stars are data used in producing the magnetosheath (black curve) and magnetospheric (gray curve) branches, respectively. Extrapolated parts for each branch are shown; these are outside the measurements but are required for the reconstruction.

Fig. 3. Magnetic transect (top) and plasma pressure distribution (bottom) obtained by using a time-varying HT frame velocity for the C1 magnetopause crossing on 30 June 2001. Contours describe the transverse magnetic field lines. In this reconstruction plane, the spacecraft generally moved from left to right: the magnetosheath, where B_x < 0 and B_y < 0, is on the upper left side and the magnetosphere (B_x > 0; B_y > 0) on the lower right side. In the top panel, B_z is expressed in color as indicated by the color bar; the four-spacecraft tetrahedron configuration is shown by white lines; the measured magnetic field vectors are projected as white arrows along the spacecraft trajectories; the normal vectors, N1-N4, computed from MVABC, are projected as red arrows. Line segments in the upper left corner are projections of GSE unit vectors, X (red), Y (green), and Z (yellow), onto the x-y plane. In the bottom panel, the plasma pressure is shown in color; the ion bulk velocity vectors from CIS/HIA (C1 and C3) or CIS/CODIF (C4), transformed into the accelerating HT frame, are projected as white arrows.

Fig. 4.
(a) Time series of intensity and three components of the measured (solid) and predicted (dashed) magnetic field data (nT) in the reconstruction coordinates. The predicted data are based on the field map recovered from the C1 data (Fig. 3). (b) Correlation between the measured and predicted magnetic field components. The x, y, and z components in the reconstruction coordinates are represented as a plus, a cross, and a circle, respectively. The curves and points are color-coded as in Fig. 1.

Fig. 5. Magnetic transect (top) and plasma pressure distribution (bottom) recovered by using a time-varying HT frame velocity for the C3 magnetopause crossing on 30 June 2001. The format and the spatial scale size are the same as in Fig. 3.

Fig. 6. (a) Time series of the measured (solid) and predicted (dashed) magnetic field data. The predicted data are based on the field map reconstructed from C3 data (Fig. 5). (b) Correlation between the measured and predicted magnetic field data. The format is the same as in Fig. 4.

Fig. 7. Dependence of the correlation coefficient between the measured and predicted field data on the choice of the invariant (z)-axis for the reconstructions based on the C1 (upper panel) and C3 (lower panel) data; see text for definition of the angles θ and φ. (θ, φ) = (0, 0) corresponds to the intermediate variance direction determined by MVABC for the C1 magnetopause crossing. The orientations of the invariant axes used in producing Figs. 3 and 5 are shown as thick crosses. Note that the vertical and horizontal scales are different.

Fig. 8. Time plots of Cluster measurements around a magnetopause crossing event occurring at (−6.78, −14.97, 6.24) R_E in GSE on 5 July 2001. The format is the same as in Fig. 1.

Fig. 9. Magnetic field map reconstructed by using a constant HT velocity for the C1 magnetopause crossing on 5 July 2001. The format is the same as in the upper panel of Fig. 3, except that for C1, C3, and C4, the flow vectors in the HT frame are projected as white arrows, and for C2, the spacecraft trajectory is shown by a white curve. In this plane the magnetotail (B_x > 0; B_y < 0) is on the lower left side, whereas the magnetosheath (B_x < 0; B_y > 0) is on the upper right side. The yellow arrows show the projections of the normal vectors determined from MVABC.

Fig. 11. Correlation between the measured and predicted magnetic field data. The predicted data are from the field map recovered for the C1 traversal on 5 July 2001 (Fig. 9).

Fig. 12. Magnetic field map reconstructed for the C3 magnetopause crossing on 5 July 2001. The format is the same as in Fig. 9, except that the normal vectors are shown as red arrows.
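The constrained variance analysis of Appendix B reduces, in practice, to diagonalizing the projected matrix P · M^B · P. The sketch below assumes b is an (N, 3) array of magnetic field samples and e_hat is the constraint direction (the MVABC normal for the B_n = 0 case); it is a generic implementation under those assumptions, not the authors' code.

```python
import numpy as np

def constrained_mvab(b, e_hat):
    """Variance analysis of the magnetic field b (array of shape (N, 3))
    subject to the constraint n . e_hat = 0 (Appendix B).  Returns the
    eigenvalues and eigenvectors of P M^B P, where P projects onto the plane
    perpendicular to e_hat; the (near-)zero eigenvalue belongs to e_hat."""
    e_hat = e_hat / np.linalg.norm(e_hat)
    mean_b = b.mean(axis=0)
    # M^B_ij = <B_i B_j> - <B_i><B_j>
    M = np.einsum('ni,nj->ij', b, b) / len(b) - np.outer(mean_b, mean_b)
    # P_ij = delta_ij - e_i e_j
    P = np.eye(3) - np.outer(e_hat, e_hat)
    vals, vecs = np.linalg.eigh(P @ M @ P)   # symmetric matrix, so eigh is safe
    order = np.argsort(vals)                 # ascending: ~0, lambda_min, lambda_max
    return vals[order], vecs[:, order]
```

The eigenvector belonging to the numerically near-zero eigenvalue reproduces e_hat itself, while the other two give the constrained intermediate and maximum variance directions that serve as starting points for the invariant-axis search described in Appendix A.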
Effect of the Inclusion of Fruit Peel on Consumer Acceptability, Dietary Fiber and Chemical Properties of Jam Products Jam products were prepared from mango and pineapple fruits separately. The pulp and peel of each fruit were used. The effect of the inclusion of fruit peel on consumer acceptability, dietary fiber and other chemical properties of jam products were investigated. Dietary fiber was significantly (P>0.05) higher in jam produced with pulp and peels together. Pineapple jam with pulp and peel contained the highest dietary fiber of 30%. There was significant (P> 0.05) variation in pH, titratable acidity and total soluble solid of the different jam products. The samples also varied significantly (p=0.05) in terms of the colour, taste and other sensory attributes. When a mixture of pulp and peel of fruit were used together in jam making, mango jam had the lowest acceptability score of 5.6 while pineapple had a score of 7.0. Qualitative descriptive analysis of the jam samples revealed that jam from the mixture of fruit peel and pulp requires improvement only in smoothness compared with that from the fruit pulp. The use of edible fruit peels in food product such as jam is therefore recommended. Introduction Jams are centuries old and have been recognized worldwide for their fragrance and rich fruit taste. Lawrence and Franklin [14] defined jam as thick, sweet spreads made by cooking crushed or chopped fruits with sugar, pectin and water. Broomfield in 1988, defined jam as a mixture of fruits and sweetening agent brought to a suitable gelled consistency with or without other permitted ingredients. Though the traditional understanding of jam is that it is a self preserved cooked mixture of fruits and sugar with many fruits including mango and pineapple are used for jam making because they are rich in natural sugar, acids and pectin. Jam is not eaten alone. It is usually for eating bread, cake and other baked products. It is spread on top of bread, like butter or margarine. It helps to boost the taste, and nutritional value of the main food. Some baking industries use jam (pure jelly) in place of caramel for flavoring and coloring of baked products, since it is naturally produced from fruits [15]. One of the health problems associated with jam production is that of sugar management. Diabetics and other related diseases such as obesity, heart attack, and high cholesterol level are all elicited by blood high sugar level. The high level of sugar used during the manufacture of jam for gel formation affects the health of people especially those prone to insulin failure or who are having insufficient insulin in their body to regulate or control the sugar level in the body. At present effort are being made by food scientists, biochemists and some health practitioners to produce jam suitable for diabetics; where sugar is no longer added during the jam production. It is believed that the fruits contain compound that does not affect sugar level drastically, since the insulin does not require fructose or sorbitol, it is recommended by United Kingdom Food Regulation andLaw 1978 and1981, that the use of sucrose (sugar) in jam production for diabetics is prohibited, instead fructose or sorbitol. Any fructose or sorbitol-based products used must be clearly specified on the label. In addition to the restriction of sucrose in diabetic jam we are of the opinion that the other factors, which affect blood sugar level such as glycemic index of foods, should also be considered. 
Reports have it that some of these factors including pH, and phytochemical (dietary fiber, antioxidants) are abundant in the peel of most fruits. Incidentally fruit peels are not normally used during jam making. Inclusion of fruit peel (where the peel is not toxic and of course many are not) during jam making helps to increase the dietary fiber content of the finished product. The aroma and other biochemical content may also get improved. The health benefits of dietary fiber and other phytochemical in the peel of fruits are abound [7,8,10,11]. No wonder health workers have advised that we should each our fruits with their peels. A diet rich in dietary fiber again serves as a barrier against a range of diseases, while a diet low in dietary fiber is a causative factor in the etiology of diseases. Soluble dietary fiber has been shown to lower selectively serum cholesterol and improve glucose metabolism and insulin response [10]. Jam rich in dietary fiber may be beneficial in weight reduction and in the control of diseases such as hyperlipidemia and diabetics [2,3]. One of the ways to increase the dietary fiber content of jam product and its health benefits will be to include the peels of the fruit used in the jam making. We are anticipating that the acceptance of such products will be difficult initially, the health benefits notwithstanding. Hence the objectives of this research work were: 1. To improve the nutritional qualities of jam products through the inclusion of fruit peels. 2. To access the acceptance of jam made from whole fruit (pulp and peel together). Materials and Methods Ripped whole fruits of mango (Mangifera indica) and pineapple used for the project work were obtained from Ochanja Relief Market Onitsha, Anambra State, and Owerri Market. Other materials used which included sucrose (sugar), lime, citric acid, Sodium benzoate, NaOH, indicator, HCL, H 2 S04, were collected from Food Science and Technology and Crop Production Laboratories at Federal University of Technology Owerri. Preparation of Samples Partially ripped mango and pineapple fruits were washed and cleaned with hard brush. For the mango fruit half of the fruit were peeled and sliced into pieces of 0.5cm 3 while the other half was sliced together with the peels. Each of samples was ground with an electric blender into a fine mixture. The sample that was ground together with the peel was designated as "MPP" while that with only pulp was designated as "MPO". Samples were collected from the MPP lot and diluted with water to reduce the viscosity to equal that of sample MPO. [Sample MPP was originally very viscous] This sample with the adjusted viscosity was designated as "MPV". Again sugar (20g) was added to a portion of sample MPV in order to bring up to have the same sugar level as sample MPO. No jam was carried out with MPO as it is produced to ascertain the level of sugar in mango pulp only. This fourth sample was designated as "MPS". For the pineapple fruits only two products were made. One sample "PPD" was made with pineapple together. Altogether six samples were prepared; only five was used and they are MPP (mango pulp and peel), PPP (pineapple pulp and peel), PPD (pineapple pulp only), MPS (mango pulp and sugar) and PPS (pineapple pulp and sugar) are used. Lime fruit was washed, cut and the juice extracted. The juice was deseeded and stored in a clean closed container. Jam Preparation Thirty grams of sugar and 10mls of lime juice were used for each of the six samples during jam preparation. 
The sugar was added to 40 ml of water and brought to the boil. The fruit samples were then introduced with continuous stirring, and the temperature was kept under control throughout the cooking. Lime juice was added with continuous stirring at a temperature of 105°C. When the mixture had set, heating was stopped. The jam was poured into bottles that had already been sterilized and kept in an oven at about 100°C. The bottles and their contents were cooled, sealed with paraffin wax, closed with the lids, labeled and stored. The flow chart for the jam-making process is given in Figure 1. Protein Determination The Kjeldahl digestion method described by A.O.A.C. [4] was used. Two grams of the sample were weighed into a 100 ml Kjeldahl flask and 30 ml of concentrated sulphuric acid (H2SO4) was added; the mixture boiled gently at first on the Kjeldahl digestor and heating was continued until the solution cleared. The flask was allowed to cool and the neck was rinsed down with distilled water. Further heating was applied until the specks disappeared. The content was then transferred, with several washings, into a 250 ml volumetric flask and made up to the mark after cooling. Steam was allowed to pass through the Markham micro-Kjeldahl distillation apparatus for about 10 minutes. Five millilitres of boric acid indicator was placed under the condenser such that the condenser tip touched the surface of the liquid. Five millilitres of the diluted digest was measured into the apparatus, the cup was closed with the rod, and 60% NaOH was added carefully to prevent ammonia escaping. Distillation was allowed to proceed for about 5 minutes and the distillate was titrated against HCl, the end point being the colour change from green to purple. The nitrogen content was calculated as % N = ((Ts − Tb) × 100) / (W × 5), where Ts = titre value for the sample, Tb = titre value for the blank, and W = weight of the sample. Moisture Content Determination (MC) Moisture content was determined according to A.O.A.C. [4]. A clean, dried moisture can was weighed on an electronic top-loading Sartorius balance. An aliquot of the jam was placed in the moisture can and the weight of the sample was taken. The sample was dried in an electric oven at 105°C for about 6 hours, removed, cooled in a desiccator and reweighed. The process was repeated at about 2-hour intervals until a constant dry weight was reached. The percentage moisture content was calculated as MC% = ((W2 − W3) × 100) / W1, where W1 = mass of the sample, W2 = mass of sample + dish, and W3 = mass of sample + dish after drying. Ash Content Determination Total ash content was determined according to the standard analytical method in A.O.A.C. [4]. One gram of jam sample was weighed into a previously ignited, cooled and weighed crucible. The crucible and its content were transferred into a muffle furnace at 550°C and left until a light grey ash resulted after ignition. The crucible and the residue (i.e. ash) were taken from the furnace, cooled in a desiccator and reweighed. The ash was calculated as a percentage of the original sample as % Ash = ((W3 − W1) × 100) / (W2 − W1), where W1 = weight of the empty crucible, W2 = weight of the crucible + sample before ashing, and W3 = weight of the crucible + ash. Determination of Total Fat Three to four grams of the sample were boiled with 50 ml of 4 M hydrochloric acid. The fat was diluted and extracted with light petroleum or n-hexane using the "wash-bottle" technique. The extracted fat was collected in a weighed flask and the solvent was removed by evaporation.
The extracted fat was dried and weighed. The hydrolysed mass was poured onto a filter paper and washed with hot water; the filter paper containing the residue was dried, rolled and inserted into an extraction thimble, and the extraction was completed by the Soxhlet technique [4]. Determination of pH and Total Titratable Acidity (TTA) The method described by A.O.A.C. [5] was used. The pH meter was standardized with a pH 7 buffer solution, and the pH of each sample was then measured at a temperature of about 25°C. Determination of titratable acidity was carried out according to the method of A.O.A.C. [4]. One gram of the sample was weighed into a 250 ml conical flask, 50 ml of distilled water was added, and the mixture was allowed to stand for 40 minutes at room temperature with occasional stirring. The sample was filtered, 3 drops of phenolphthalein indicator were added, and the filtrate was titrated with 0.02 N NaOH to a permanent pink end point. To calculate TTA: let the titre minus the blank be x; 50 ml of the sample extract then contained 0.02x mg of acid, so 1 g of sample contained 0.02x mg of acid and % TTA = 0.02x × (100/1000) = 0.002x. Determination of Total Soluble Sugar (TSS) The anthrone method of A.O.A.C. [4] was used to determine the total soluble sugar. One gram of the sample was weighed into a test tube, 2 ml of concentrated sulphuric acid was added, and the mixture was digested over a water bath for about 30 minutes. It was then diluted with about 20 ml of distilled water and allowed to cool. The anthrone reagent (80 ml) was added and the optical density was read using the "UV Unicon" spectrophotometer; a series of standards was prepared and the optical density taken at the same wavelength. Calculation: let the parts per million (ppm) of pulp be x; 1 ml contains 50 ng, hence 50 ml contained 50 × 50x per 1 g of the sample. Determination of Dietary Fiber Jam formation (gelatinization): the water and samples contained in each 5 ml beaker were stirred thoroughly to obtain a homogeneous mixture. Water was poured into the bath and brought to the boil, and the samples were gelatinized (gelled) at 100°C with the steam from the water bath [4]. Total dietary fiber (TDF), insoluble dietary fiber (IDF) and soluble dietary fiber (SDF) were determined as described by A.O.A.C. [4]. Gelatinization of Starch This procedure was developed by AOAC [4]. Ten millilitres of distilled water, measured with a measuring cylinder for each sample, was poured into a 50 ml beaker containing 1 g of the raw sample (i.e. plant material). The water and the raw sample were homogenized. Water was poured into the water bath and brought to the boil, and the starch was gelatinized at 100°C with the help of steam from the water bath. Termamyl Incubation Following gelatinization of the starch, the samples were ready for Termamyl incubation. This was done at pH 6.0; the reagent used for the pH adjustment was acetic acid, which brought the pH of the samples down to 6.0, although some of the food samples had a pH of 6.0 without adjustment. The pH was read with a pH meter. After pH adjustment, 3 ml of Termamyl was added to the gelatinized samples and stirring was continued. The Termamyl incubation was carried out for 30 minutes at a temperature of 100°C. The degradation of the starch could be accomplished in this short time because Termamyl is a powerful enzyme that degrades the bulk of the starch rapidly at 100°C. Neutrase Incubation Following Termamyl incubation, the pH of the samples contained in each beaker was readjusted to pH 7.5.
The adjustment of pH to 7.5 was carried by the use of 0.2M of sodium hydroxide (NaOH) and temperature reduced to 60°C and read off by the use of thermometer. About 3mls of Neutrase enzyme was added to the sample each. Neutrase incubation at temperature of 60°C during incubation period was done using Water bath. Amloglucosides (AMG) Incubation Following termamyl incubation, the pH of each samples contained in the beaker were further readjusted to pH 4.5. The adjustment of this pH to 4.5 was carried out by the use of acetic acid and read off with pH meter. The temperature of the contents in each beaker was maintained at 60°C and 3mls amyloglucosidase (AMG) was added and stirred continuously. This AMG incubation was carried out at pH of 4.5 for 30 minutes at 60°C. During this incubation, there was continuous stirring to ensure "complete" hydrolysis of starch. Total Dietary Fiber (TDF) Determination Following amyloglucosidase incubation, the content in each beaker was precipitated with 4 volumes of ethanol measured with measuring cylinder. After filtration of sample, each sample was washed with ethanol and acetone. Drying sample following washing of sample, the sample on the filter paper were then dried over the hot oven. Determination of Soluble Dietary Fibre (SDF) Following amloglucosidase (AMG) incubation above, the contents in each sample were filtered with the use of filter paper. There were residue and filtrate and the filtrate is not discarded. The residue obtained after filtration was then washed with ethanol and acetone. Drying was carried out with the use of hot oven at 60-65°C. After drying of each sample what remained was the soluble dietary fiber (SDF), which was weighed with weighing balance. Sensory Evaluation Sensory evaluation of the six samples was conducted using 20-member panel randomly selected from the University community. The samples were packaged in a transportation jam bottles and presented in a coded manner. The sensory quality attributes of the sample evaluation were colour, taste, texture aroma and acceptability. In the questionnaire presented to the panelists, they were requested to observe and taste each sample as coded and grade them based on a 9-point hedonic scale where 9 = like extremely and 1 = dislike extremely [12]. Qualitative descriptive analysis (QDA) was used to further test the product so as to reveal areas and intensities of improvement on the products. Judges were asked to evaluate the intensity of the perceived attributes using a 15 point unstructured scale. Statistical Analysis The results obtained from the sensory evaluation were computed into means and the analysis of variance (ANOVA) was carried out based on all the sensory attributes. The null hypothesis and the alternative hypothesis were used to assess the validity of the results. The least significant difference (LSD) was used to determine which of the sample that is significantly different. Chemical Properties of Jam Made from the Pulp and Peel of Mango and Pineapple Fruit. The result of chemical properties of the jam samples is shown in Table 1. There was significant (P > 0.05) variation on the pH, percentage total titrable acidity (TTA) and Total soluble solids (TSS) of the samples. The pH of the jam products was on the acidic side (3.52 -3.51) with the mango pulp peel sample (MPP) having the lowest pH of 3.52 while sample PPD had the highest pH value 3.91. The pH of the jam products was within the conventional pH value of 3.1 -3.52 (NAFDAC, 2000). 
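The significance statements in the Results rest on the one-way ANOVA and least-significant-difference (LSD) procedure described under Statistical Analysis above. A minimal sketch of that calculation is given below; the panel scores are hypothetical, since the individual panelist data are not reported, and the 5% significance level is an assumption on our part.

```python
import numpy as np
from scipy import stats

# Hypothetical 9-point hedonic scores from 20 panelists for three jam samples;
# the real panel data are not reproduced in the paper.
scores = {
    "MPP": np.array([5, 6, 4, 5, 6, 5, 4, 6, 5, 5, 6, 4, 5, 6, 5, 4, 6, 5, 6, 5]),
    "PPP": np.array([7, 6, 7, 8, 7, 6, 7, 7, 8, 6, 7, 7, 6, 8, 7, 7, 6, 7, 8, 7]),
    "PPS": np.array([8, 7, 8, 8, 7, 8, 9, 7, 8, 8, 7, 8, 8, 9, 7, 8, 8, 7, 8, 8]),
}

groups = list(scores.values())
f_stat, p_value = stats.f_oneway(*groups)        # one-way ANOVA across samples

# Least significant difference at the 5% level, for equal group sizes n:
n = len(groups[0])
k = len(groups)
df_error = sum(len(g) for g in groups) - k
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_error
lsd = stats.t.ppf(0.975, df_error) * np.sqrt(2 * mse / n)

print(f"F = {f_stat:.2f}, p = {p_value:.4f}, LSD(0.05) = {lsd:.2f}")
# Two sample means differing by more than the LSD are declared significantly
# different at the 5% level, which is how the sample separations are reported.
```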
Percentage TTA of the samples ranged from 0.049 to 0.082. The sample PPP had lowest TTA% of 0.082. In both mango and pineapple, % TTA values increased with the inclusion of fruit peel. This could be because of the fact that there is usually higher concentration of the annual (Organic) acids in the peels of fruit than in the pulp [12]. Jam product with a low pH and a high %TTA is considered adequate on health ground and for storage purposes. Total soluble solid (TSS) was 50% in sample PPP, which was significantly higher than the rest samples. Generally, pineapple jams had higher TSS than mango jams. Again jam made with the peel and pulp of either of the fruits had higher %TSS than their counter-part made with the pulp only [1]. Moisture content of the jam products ranged from 51.00 % to 72.77% moisture. Moisture was generally higher in mango jams (67.21 -72.77) than in pineapple jams (50 -64.97). The higher moisture content of the jam products could be because of the high humectants (sugar) content of the products. This is further explained by the fact that product with the peel had lower moisture content than those without the peels. However, the moisture contents are quiet high and need to be reduced in subsequent experiment. The percentage of protein content was generally low having a range of 0.84% to 1.37%. The result was expected since the fruits used are not known as protein sources [12]. Nevertheless product with the peels had a significantly (p>0.05) higher protein content than those with the peels. The percentage of ash was higher in pineapple jams than in mango products. Again the inclusion of fruit peels increased significantly the ash content of finished products. Total Dietary Fiber Content of Jam Made from the Pulp and Peel of Mango and Pineapple Fruits. Pineapple jams were found to contain a higher (p=0.05) total dietary fiber compared with mango jams (Figure 3). The total dietary fiber content of the pineapple samples was PPP 30%, PPS 28% and PPD 20%. The mango jams had 20% and 22% for MPS and MPP respectively. Inclusion of fruit periderrm increased the total dietary fiber content of the jam products. Fruit peels are known to contain higher amount of fibers especially the insoluble fibers than fruit pulp. The higher total dietary fiber observed in pineapple jam could be due to a higher quantity of peels and fibrous materials in pineapple than in mango. Sensory Evaluation of Jam Made from the Pulp and Peel of Mango and Pineapple Fruit The mean of the sensory evaluation of jam products from the peel and pulp of mango and pineapple fruits is shown in Table 2. In terms of texture, the samples scored 6.2 to 6.9 while a value range of 5.4 to 7.4 was recorded for taste. The scores for texture of the sample were not significantly (p<0.05) different but the scores for the rest attributes were significantly (p.>0.05) different, Sample PPD surprisingly scored highest in terms of taste followed by PPS and PPP. This implies that all the pineapple jams scored higher than the mango jams in terms of taste. The colour of sample MPP was almost rejected; it had the lowest (p>0.05) scored of 4.1, which was below average. The colour of MPP was actually not applying. The awful colour of MPP could be because of the colour of the mango peel used, which was green instead of the usual yellow colour. Green coloured mango was used because the yellow colored mangos were off-season as at the time of this experiment. 
Nevertheless, the color of MPP can be improved upon by addition of food grade synthetic colors. The most appreciated color was that of PPS (7.8). Again sample MPP scored lowest while sample PPS scored highest in aroma and overall acceptability. Sample PPS scored highest in all the attributes except in texture and taste. No wonder it recorded the highest in overall acceptability indicating that judges appreciated it more than other jam product [6,16]. The result of qualitative descriptive analysis of pineapple jams is shown in Figure 3. The analysis revealed that all the jam samples would need some improvement in smoothness. PPD had the most appreciated smoothness while PPP had the worst. The poor smoothness of the jams could be because of poor filtering process and for the fact that peel are bound to be rougher than the pulp. In terms of after-taste, it was discovered that sample PPS had a fruity taste whereas PPP had less fruity after-taste. This implies that the fruity aroma of sample PPP as well as sample PPD needs to be improved. For sweetness, there was no significant difference in the intensity of sweetness of the samples, though sample PPD had a lower intensity. This could be because no sugar was added to this sample. There were significant difference (p>0.05) in the perceived thickness (viscosity) of the pineapple jams. The viscosity of sample PPP was most appreciated followed by that of PPS and finally PPD. Inclusion of peel improved the viscosity of the samples. The diabetic jam, PPD was observed to have the highest value for ease of spreading. In other words it was much easier to spread sample PPD than the other two samples. Now a balance had to be stroke between viscosity and ease of spreading. It seems the two parameters have inverse relationship. Increase in sugar increased viscosity and reduced ability to spread (spreadability). Conclusion The dietary fiber of the jam product made from a mixture of fruit pulp and peel was relatively high (20% -30%). This implies that the supplementing of dietary fiber through the use of edible fruit peels could be one of the ways of improving intake of dietary fiber among jam eaters. From the result of sensory evaluation and qualitative descriptive analysis, jam obtained from the combination of pulp and peel of fruits was considered second to best in acceptability and taste. Nevertheless such jam products require improvement in colour unless overripe (yellow coloured) fruit are used.
CTG triplet repeat in mouse growth inhibitory factor/metallothionein III gene promoter represses the transcriptional activity of the heterologous promoters. Growth inhibitory factor/metallothionein III (GIF/MT-III) is expressed specifically in brain, and neither mRNA nor protein is detected in other organs. This tissuespecific expression might be regulated by negative elements as well as positive elements, such as tissue-specific enhancers. To investigate the repression mechanisms of this gene in organs other than the brain, transfection experiments were performed by using various deletion mutants. Interestingly, a 25 CTG repeat in the promoter region seemed to contribute to the repression activity. Moreover, the repression activity of this 25 CTG repeat was also observed in various promoters and in a direction and position independent manner, indicating that this element could act as a silencer. However, no binding protein was detected by gel-shift and footprint analyses. These results strongly suggest that the CTG repeat functions as a negative element, and that this effect is caused by unknown mechanisms, rather than by interactions between specific cis-elements and specific trans-acting factors as reported previously. It is also possible that the CTG repeat functions as a general silencer in many genes. Gene expression is regulated mainly at the transcription level. Many trans-acting factors, such as enhancer-binding proteins, participate in this regulation (see Refs. 1 and 2 for review). For tissue-specific expression, tissue-specific enhancers are known to be involved. Recently, it was proposed that, as well as enhancer elements and enhancer-binding proteins, negative regulators are also involved. Mechanisms include competition, quenching, direct inhibition, and squelching (see Refs. 3 and 4 for review). However, compared with enhancer functions, the mechanisms of negative regulation still remain to be investigated. Growth inhibitory factor (GIF) 1 has been purified from human brain (5). Both amino acid sequence analysis and cDNA cloning have revealed that GIF is a small protein, 68 amino acids long, and quite similar to metallothionein (MT) with insertions of 1 amino acid and 6 amino acids in the N-terminal and C-terminal portions, respectively (5)(6)(7)(8). Therefore, GIF is also termed MT-III, since MT-I and MT-II have been reported (6). In addition, a new MT isoform, MT-IV was reported recently (9). MTs are small, cysteine-rich, and metal-binding proteins, and MT-I and MT-II are thought contribute to detoxication of heavy metals, such as mercury and cadmium, and homeostasis of essential trace elements, such as zinc and copper (see Refs. 10 and 11 for review). Whereas MT-I and MT-II are expressed widely in almost all organs and are strongly induced by heavy metals, GIF/MT-III is expressed strictly in the brain and not induced by heavy metals (6). GIF/MT-III is also known to be deficient in the brain of Alzheimer's disease (AD) patients (5). Therefore, in addition to ␤-amyloid precursor protein and , GIF/MT-III seems to have an important role in Alzheimer's disease (5,(12)(13)(14). In this report, we characterized the promoter region of the mouse GIF/MT-III gene and found that a CTG repeat in this region functions as a silencer-like element and contributes to negative regulation. EXPERIMENTAL PROCEDURES Plasmid Constructions-The mGIF/MT-III gene was a kind gift from Dr. R. Palmiter (6). 
The fragment from base pairs −807 to +57 in the promoter region of the GIF/MT-III gene was made by polymerase chain reaction (PCR) techniques (15) and ligated to the multicloning site in the promoter-less luciferase vector, PGV-P (Nippon Gene), according to standard protocol (16). Various deletion mutants were constructed from this construct by deletion at the 5′ end with exonuclease III and mung bean nuclease digestions. Internal deletions and the 25 × CTG fragment for insertion within various promoters were constructed by PCR. All constructs used here were checked by sequencing with the dideoxy method using denatured plasmid templates (17). Cell Culture and DNA Transfection-HepG2 cells, a human hepatoma cell line, were cultured in Dulbecco's modified Eagle's medium containing 10% fetal bovine serum. Cells were transfected by the calcium phosphate co-precipitation technique described by Chen and Okayama (18). All the transfection experiments were performed at least three times using two or three different preparations of DNA, and the mean values are shown under "Results." Transfection efficiency was checked by co-transfection with pRSVGAL, a eukaryotic expression vector containing the Escherichia coli β-galactosidase structural gene controlled by a Rous sarcoma virus long terminal repeat, as an internal or an external standard. Luciferase activity was assayed with Pikka Gene (Toyo Ink Mfg. Co., Ltd.) and a lumiphotometer, and β-galactosidase activity was determined as described (19). Repression Activity of CTG Repeat in GIF/MT-III Gene Promoter-The GIF/MT-III gene is not expressed in any organs except the brain. To investigate the repression mechanisms of this gene, various deletion mutants with the luciferase gene as a reporter were transfected into HepG2 cells. As shown in Fig. 1, the −257GIF construct revealed the highest activity among the constructs tested. All other constructs showed lower activity. This result indicates that there is a repressive element between −477 and −257. The nucleotide sequence of this region is shown in Fig. 2. A data base search for trans-acting factors revealed some typical cis-elements for LBP-1, LF-A1, and ADR-1 (data not shown). More interestingly, we found that CTG is repeated 25 times in this region. To elucidate the effect of this 25 × CTG repeat on trans-activation activity, we next transfected the internal deletion mutant lacking the CTG repeat. Compared with the native −807GIF construct, the construct with deleted CTG repeats revealed higher luciferase activity (Fig. 3A). When 25 × CTG repeats were joined to the CTG-less −257GIF construct, lower activity was observed (Fig. 3B). These results indicate that the CTG repeat has repressive activity on the GIF/MT-III gene promoter. Effect of CTG Repeat on Transcriptional Activity of Various Promoters-Since the CTG repeat has repressive activity on the GIF/MT-III gene promoter, we next examined whether this effect would occur in various gene promoters. When the CTG repeat was joined to their upstream regions, this repeat showed repressive activity in all promoters tested here (Fig. 4A).
Moreover, when the CTG repeat was inserted in the opposite orientation, the same results were obtained (data not shown). Even when the CTG repeat was joined downstream of the promoters of three genes, a similar tendency was observed (Fig. 4B). When transfected into 3Y1 cells, a rat fibroblast cell line, and NIH3T3 cells, a mouse embryo cell line, similar results were obtained (data not shown). These results strongly suggest that the CTG repeat acts as a silencer. Binding Analysis of Nuclear Proteins to CTG Repeat-We next performed gel-shift analysis using 25 × CTG repeats as a probe and nuclear extracts from HepG2 cells and rat liver. However, no binding protein was detected, although several other trans-acting factors, including C/EBPα and Nuclear Factor 1, were detected by each specific probe using the same extract (data not shown). No protected region was detected by DNase footprint analysis using the GIF promoter as a probe (data not shown). It seems that specific proteins do not bind to the CTG repeat, or that their binding affinity is too weak to detect. DISCUSSION In this report, we characterized the promoter region of the mouse GIF/MT-III gene and identified its CTG repeat as a silencer. Since this gene is not expressed in organs other than the brain (5, 6), this function is quite important for negative regulation in the mouse. Recently, we cloned the human GIF/MT-III gene, including the promoter region up to −2 kilobases, using the coding region as a probe (kind gift from Dr. R. D. Palmiter). Unfortunately, we have not yet identified a CTG repeat in this region by Southern blotting. Therefore, it is possible that the CTG repeat in the GIF/MT-III gene is species-specific. Indeed, only a 200-base pair region from the cap site showed high similarity between the mouse and human genes (6). It is unclear whether the CTG repeat contributes to brain-specific transcription of the GIF/MT-III gene, since suitable brain-derived cell lines are not available at present. Transfection experiments using primary cultures and/or in vitro transcription assays may resolve this question in the near future. Amplification of CTG or CAG repeats is known in several inherited neurodegenerative diseases, including Huntington's disease, spinal and bulbar muscular atrophy, spinocerebellar ataxia, dentatorubral-pallidoluysian atrophy, and myotonic dystrophy (20, 21). In the first four diseases, CAG repeats in the N-terminal portions of their related genes encode polyglutamine residues, and the expansion of a glutamine-rich segment is related to the diseases. Although there is no evidence that an expanded CAG repeat has negative activity on transcription, except that the presence of the polyglutamine tract is inhibitory to trans-activation on the androgen receptor gene (22), it is possible that the CAG repeat functions as a silencer under normal conditions. In the case of the myotonic dystrophy gene, in which the CTG repeat is found in the 3′-untranslated region, decreased expression is observed (23). However, an opposite effect has also been reported (24). In Fragile X syndrome, hypermethylation of the CGG triplet repeat in the 5′-untranslated region represses the expression of the Fragile X mental retardation-1 gene (25). GIF/MT-III is the first example of a CTG triplet repeat located in the 5′ promoter region that has repression activity. We have tried to detect whether nuclear proteins bind to the CTG repeat.
However, we could not observe binding, although the conditions used here could detect other transcription factors. These results strongly suggest that a specific binding protein is not necessary for this function. First, we thought that a conformational change caused by the CTG triplet repeat might have an inhibitory effect on protein-protein interactions between the promoter and the basal machinery. However, using the method for the detection of DNA bending by native electrophoresis (26) in the absence of protein, we failed to detect a conformational change. Moreover, even when the CTG repeat was placed in the downstream region of the luciferase gene, that is, very far from the promoter, the negative effect was also observed. Therefore, we prefer to consider the following mechanism: while a specific DNA-binding protein would not bind to the CTG repeat, the CTG repeat itself could interact weakly with some proteins in the complexes of basal and specific transcription factors. If this is the case, it is also possible that the CTG repeat may function as an activator. As far as we have tested, however, we observed only a negative effect. We also failed to detect CTG repeat-binding activity in brain nuclear extract, although we could detect binding activity for other regions in the promoter. Therefore, it seems that the CTG repeat functions as a general silencer which is active in all organs including the brain, and some activator proteins overcome the silencer function in the brain. Further studies are required to clarify the effect of the CTG triplet repeat on gene expression. FIG. 4. Effect of CTG repeat on transcriptional activity of various gene promoters. A, effect of the CTG repeat on various promoters, mGIF, C/EBPδ, SV40, glutathione S-transferase (GST) P, and human metallothionein II A (hMT-IIA), in which the CTG repeat was joined to the upstream region of the promoter. B, effect of the CTG repeat on various promoters, mGIF, C/EBPδ, and SV40, in which the CTG repeat was joined to the downstream region of the promoter. Values are represented as relative luciferase activity compared with each CTG-less construct.
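For readers who want to see how the reporter values summarized in Figs. 1, 3 and 4 are typically derived, the following is a minimal sketch, not taken from the paper, of normalizing luciferase readings to the co-transfected β-galactosidase internal standard and expressing each construct relative to its CTG-less control. All numbers and construct labels are hypothetical placeholders.

from statistics import mean

# Minimal sketch (not from the paper): replicate (luciferase, beta-galactosidase)
# readings per construct; every value and label here is a hypothetical placeholder.
readings = {
    "-257GIF (CTG-less control)": [(9500, 1.00), (10200, 1.10), (9800, 0.95)],
    "-257GIF + 25xCTG": [(3100, 1.05), (2900, 0.90), (3300, 1.15)],
}

def normalized_activity(replicates):
    """Mean luciferase / beta-galactosidase ratio across replicate transfections."""
    return mean(luc / bgal for luc, bgal in replicates)

control = normalized_activity(readings["-257GIF (CTG-less control)"])
for construct, replicates in readings.items():
    relative = normalized_activity(replicates) / control
    print(f"{construct}: relative luciferase activity = {relative:.2f}")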
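As a further illustration, this is a minimal sketch, also not part of the original study, of how one might locate uninterrupted CTG tracts such as the 25 × CTG repeat reported between −477 and −257 by scanning a promoter sequence with a regular expression. The sequence, function name, and minimum-length threshold are invented for the example and are not the actual GIF/MT-III promoter.

import re

# Minimal sketch (not from the paper): the sequence below is an invented placeholder.
promoter = "GCTAGGATCC" + "CTG" * 25 + "TTGACCAGTA"

def find_ctg_repeats(seq, min_units=5):
    """Return (start, number_of_CTG_units) for each run of at least min_units CTG triplets."""
    pattern = re.compile(r"(?:CTG){%d,}" % min_units)
    return [(m.start(), len(m.group()) // 3) for m in pattern.finditer(seq.upper())]

for start, units in find_ctg_repeats(promoter):
    print(f"(CTG){units} tract starting at position {start}")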
v3-fos-license
2018-12-27T22:29:45.578Z
2018-01-12T00:00:00.000
53499373
{ "extfieldsofstudy": [ "Political Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11196-017-9538-5.pdf", "pdf_hash": "824289179df67181a7529038655790203e2b0b83", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:189", "s2fieldsofstudy": [ "Law", "History" ], "sha1": "7b78543263197685d774db5f3ef400683f8d4dd5", "year": 2018 }
pes2o/s2orc
The Distorted Jurisprudential Discourse of Nazi Law: Uncovering the ‘Rupture Thesis’ in the Anglo-American Legal Academy It has been remarked that the ‘rupture thesis’ prevails within the Anglo-American legal academy in its understanding of the legal system in Nazi Germany. This article explores the existence and origins of this idea—that ‘Nazi law’ represented an aberration from normal legal-historical development with a point of rupture persisting between it and the ‘normal’ or central concept of law—within jurisprudential discourse in order to illustrate the prevalence of a distorted (mis)representation of Nazi law and how this distortion is manifested within the discourse today. An analysis of the treatment of Nazi law in two major 50th anniversary publications about the 1958 Hart–Fuller debate, and a review of representations of the Third Reich within literature from the current discourse, demonstrates that the rupture thesis continues to be reproduced within jurisprudence. An examination of the role of Nazi law in the Hart–Fuller debate itself shows that it can be traced back to the debate, where it was constructed through a combination of conceptual determinism and historical omission. It concludes that the historical Nazi law has great significance for the concept of law, but neither positivism nor natural law has properly theorised the nature of the real Nazi legal system. Introduction It has been remarked that a 'rupture thesis' prevails within the Anglo-American legal academy in its understanding of the legal system in Nazi Germany. This term refers to the idea that 'the Nazi state is said to be a state so brutal, so criminal, so perverted, that it constitutes a radical, atavistic rupture in the otherwise largely benign historical process of law and politics, at least in the West' [8: 179]. This article explores the existence and origins of this idea-that Nazi 'law' represented an aberration from normal legal-historical development with a point of rupture persisting between it and the 'normal' or central concept of law-within jurisprudential discourse. It does so through an analysis of the treatment of Nazi law in the discourse in and related to the 1958 Hart-Fuller debate, a foundational moment in modern Anglo-American jurisprudential discourse. This analysis encompasses the initial Harvard Law Review contributions of the debate's two protagonists [17,20], two major collections of reflections on the ongoing significance of the debate that emerged from events to mark its 50th anniversary in 2008: the New York University Law Review special issue 'Fifty Years Later' (henceforth 'NYULR symposium') [53] and Peter Cane's edited volume The Hart-Fuller Debate in the Twenty-First Century [3], and a consideration of how Nazi law is represented within the discourse today. The representation of Nazi Germany as an unrecognisable 'other', because of its perceived barbaric, lawless nature, which contributes to the rupture thesis has occasionally been asserted [9,14], but is rarely explored in terms of its concrete manifestations within the legal academy, the specific place of Nazi Germany within academic legal discourse it entails, or its relationship with its historical antecedents; the reasons it persists today as it does. 
Beyond their link to the Hart-Fuller debate itself, the works considered in this article are connected by the enduring way the Nazi legal system is represented within jurisprudential discourse; its entrenched role as a source of eye-catching examples in elaborate debates between proponents of positivism and natural law, rather than a manifestation of law with anything of substantive note to contribute to our theoretical understanding of the general concept of law. This article is concerned with the creation and reproduction of a distorted representation of Nazi law within Anglo-American jurisprudence over almost six decades since the Hart-Fuller debate, and its consequent failure to engage with the historical reality of Nazi law and assess its implications for the central concept of law, on which jurisprudence focuses much of its attention. The Hart-Fuller debate represents a foundational moment in Anglo-American jurisprudence because of its framing of the debate about the concept of law around new strands of Hartian positivism and Fullerian natural law and around the central issues of the conditions of validity for law (the validity question) and the relationship between law and morality (the separability question), as well as for its unparalleled, enduring influence on jurisprudential discourse, which continues to this day. The influence of the parameters set by the debate on jurisprudential discourse should not be underestimated: 'The fact is that the exchange between Hart and Fuller really did set the agenda for modern jurisprudence: the separation of law and morality, the place of values in interpretation, and the relation between the concept of law and the values associated with the rule of law ' [52: 996]. Nicola Lacey has argued that their work 'of course, continues to shape contemporary jurisprudence to a quite remarkable degree' [26: 41], and 'it is worth reflecting on the remarkable fact that it still speaks to us so powerfully today' [25: 1059]. This article will focus on the impact that the debate has had on the representation of Nazi law within Anglo-American jurisprudence, both as a concrete example of the rupture thesis persisting within the legal academy and to demonstrate how the marginalisation of Nazi law in the Hart-Fuller debate continues to dictate how it is treated within jurisprudential discourse today. It will make the argument that Nazi law has been too hastily overlooked as having nothing fundamental to say for the central concept of law in Anglo-American jurisprudence in the period since the Hart-Fuller debate, and this is primarily as a consequence of how it was treated in the debate's initial exchanges. Nazi law represented little more than a springboard into the philosophical issues that really concerned Hart and Fuller, and the enduring preoccupation with these issues within the jurisprudential community has established the role of Nazism within the discourse as a piece of extreme historical marginalia, a paradigm of wicked law with no substantive significance for the concept of law. It is used within jurisprudence primarily as a source of examples and hypothetical scenarios in intricate theoretical debates between advocates of various strands of positivism and natural law, substantively unconnected to the philosophical arguments being advanced. 
This interpretation of the relevance of Nazi law for the concept of law is flawed because this representation of Nazi law is largely erroneous, based on very limited historical evidence, and the actual Nazi legal system is much more interesting, complex and worthy of legal theoretical consideration than is understood within this discourse. Crucially, the limitations of positivism and natural law in their representation and use of the Third Reich may mean that neither paradigm is able to properly explain Nazi law or facilitate a productive jurisprudential debate about its true nature and significance. The ongoing significance of the Hart-Fuller debate is also apparent in the series of reflections on the debate on its 50th anniversary in 2008. One of the issues that received very little attention in these two publications was the earth-shattering era that brought about the very case that prompted Hart's initial reflections on the concept of law in the debate: the legal system of the Third Reich. The case of the grudge informer in the Federal Republic of Germany raised the question of whether a post-war German court should treat Nazi laws as valid or not, and this led Hart to consider some of the issues he felt are at the centre of the concept of law: the conditions of validity for law and the nature of the connection between law and morality. The general failure to re-evaluate the trigger for these ruminations in the 50th anniversary literature and beyond should not surprise anyone who has taken note of the way Nazi law was represented in the initial Harvard Law Review exchange, particularly by Hart: as a piece of historical ephemera employed as a hook from which to hang broader conceptual arguments, but with little substantive contribution to make to them. This is the approach mirrored by its treatment in most of the commemorative contributions, which are understandably concerned with revisiting or extrapolating the issues that most concerned Hart and Fuller. These did not ultimately include the historical nature of Nazi law or its substantive implications for the concept of law. The current jurisprudential discourse about positivism and natural law, the validity question and the separability question, also follows and reproduces this approach, although often lacking even the limited degree of historical appreciation of the Third Reich shown by the Hart-Fuller debate. In order, therefore, to illustrate how Nazi law is represented within jurisprudential discourse, and demonstrate the connections between current jurisprudential treatment of Nazi law and the Hart-Fuller debate, this paper will first interrogate the representation and significance of Nazi law in the contributions to the two 50th anniversary collections. These publications are particularly useful as a snapshot of the Anglo-American jurisprudential community, incorporating papers from some of the field's most prominent legal philosophers, as well as being an excellent indication of which aspects of the Hart-Fuller debate are thought worthy of reflection and consideration today. Interestingly, they also include some perspectives on the debate that are external to the field of jurisprudence (narrowly defined), which gives some sense of the extent to which the dominance of the rupture thesis in relation to Nazi law is confined to jurisprudential discourse.
It should be noted that more attention in this article is devoted to the Cane collection than the NYULR symposium precisely because the former is more challenging to the thesis advanced in this article than the latter. The NYULR symposium contributions almost exclusively follow the theoretical lead of the Hart-Fuller debate-reconsidering questions and issues that have occupied the discourse around positivism and natural law in the decades since 1958-while omitting its historical aspect. Consequently, that Nazi law is barely mentioned in these articles means that the most notable point for the purposes of this paper is the fact of its neglect. By contrast, while many of the Cane chapters also do not mention Nazi law, its relative prominence in some contributions, and particularly those that come from perspectives outside of traditional jurisprudence, requires some unpacking and analysis. This article will then review and evaluate the treatment of Nazi law in the initial contributions to the Hart-Fuller debate itself, to demonstrate how irrelevant the historical specificity of Nazi law was to the substantive issues in the debate and how peripheral its significance was as a matter at issue between Hart and Fuller. Following this, the article will bring the analysis of the discourse back up to the present day, by considering how Nazi law is represented in illustrative examples from jurisprudence since 2008. Finally, it will briefly highlight some of the problems caused by viewing Nazi law through the twin competing theoretical paradigms of positivism and natural law and without historical context and evidence, to begin to show why in fact Nazi law may be more of a central, complex and problematic case of the concept of law than is generally thought. Prior to this, it is important to say a few things about the approach adopted here. First, where this paper uses the terms 'jurisprudence' and 'Anglo-American jurisprudential discourse', it refers to the strand of primarily analytical, Anglo-American, English-language jurisprudential discourse within the legal academy, which comprises the tradition of text-based, doctrinal and conceptual legal theory, which is most concerned with issues such as the validity of law, the connection between law and morality and judicial interpretation, and much of which represents the direct or indirect legacy of the Hart-Fuller debate. Second, this paper does not offer a thorough historical analysis of the nature of the legal system in the Third Reich, but is focused instead on emphasising aspects of Nazi law to further the critique of the role and representation of Nazi law within jurisprudence presented herein. While some historiographical studies about Nazi law are cited, therefore, it does not make use of primary sources of Nazi law for this purpose. It is not possible within the scope of this article to provide a thorough account of the Nazi legal system, although some recent legal historical studies have contributed to developing such an account, including placing it within a broader temporal frame [43][44][45][46][47]. It is also not possible within the scope of this article to provide a comprehensive analysis of the representation of Nazi law within Anglo-American jurisprudential discourse. Instead, the commemorative collections and examples from the subsequent discourse are used as imperfect but suitably illustrative exemplars for how the field characterises and uses Nazi law. 
Finally, the arguments presented here are in no way an endorsement of Nazi law or legality, nor do they amount to a claim that Nazi law is necessarily paradigmatic of the central case of law or is the same in all respects as other instances of law. 2 The 50th Anniversary Literature: A Lacuna Where the Abyss Ought to Be It has been noted that the Anglo-American legal academy propagates the idea that Nazi Germany represented a gross departure from normal historical and legal development - an aberration - such that there are no substantive points of continuity between now and then worthy of examination. At the forefront of this commentary has been Frederick DeCoste, who argued in 1999 that 'the English-speaking academy has been especially resistant to exploring the moral, ethical, and political significance of events in Europe between 1933 and 1945' [7: 800]. On that occasion DeCoste made a connection between this claim and the Hart-Fuller debate: Despite a few, half-hearted and misdirected concessions immediately after the War - I am thinking particularly of the unaccountably influential Hart/Fuller debate … it is fair to say that, until very recently, the English-language legal academy has proved itself completely immune to the defining experience of this century. This attitude is all the more bizarre given the centrality of law and lawyers to European fascism generally and to the Holocaust in particular [7: 792-793]. He also added more recently that 'academic lawyers … have with very rare exception stood mute since the Hart-Fuller debate of the late 1950s' [9: 4]. This association, asserted but not explored, of the rupture thesis with the Hart-Fuller debate is interesting because the debate might otherwise be thought of as the forum that brought Nazi Germany to the attention of the legal academy and jurisprudence in particular, and attempted to tackle some of the thorny - not to say horrifying - issues it raised for law. On that basis, if parts of the legal academy do manifest a rupture thesis in their representation of Nazi Germany, this could hardly be related to the otherwise esteemed and enduring contributions of Professors Hart and Fuller. However, an omission at the heart of the debate's 50th anniversary literature belies the notion that the debate really tackled Nazi law, or that the treatment of the Third Reich therein is unconnected to what followed in terms of Nazism's representation in jurisprudential discourse today. This is the absence of Nazi law and the Third Reich generally, as having anything meaningful to contribute to reflections on, analyses about, or reinterpretations of the issues considered central to the debate. If the darkest corner of Nazi Germany, the Holocaust, represents the abyss, then there is in the anniversary literature a lacuna where the abyss ought to be. Of all of the researchers who returned to the Hart-Fuller debate in order to revisit its fundamental components and reconsider its enduring significance five decades later, very few had anything to say about Nazi Germany or its legal system. The two anniversary collections each adopt a different approach to the Hart-Fuller debate.
Whereas the NYULR symposium was focused on re-evaluating the terms and context of the debate and so examined its internal world, the Cane collection aimed to reinterpret issues in the debate and 'rethink them in light of social, political and intellectual developments in the past 50 years, and of changed ways of understanding law and other normative systems' [3: v], so adopted a more external and outward-looking jurisprudential perspective on the debate. This difference in approach has potential significance for the representation of Nazi law one is likely to find in each collection. An internal re-examination of the debate might be expected to reflect the role and importance of the Third Reich in the original debate, whereas an attempt to critique, or move outside of the terms of the debate and its key issues might include an alternative or deeper evaluation of something like Nazi law, which was part of the debate but increased reflection on which has not been one of its major legacies. The point has already been made that the internal perspective adopted by the NYULR symposium resulted in a collection that paid little attention to Nazi law. However, even while in some ways the contributions to the Cane volume do take the themes of the Hart-Fuller debate in new directions, more relevant to wider contemporary legal issues today, contributions to both volumes exhibit a number of similar characteristics, which are both representative of jurisprudential discourse generally and can be connected to the Hart-Fuller debate. First, they marginalise Nazi law, failing to treat it as an important feature of the debate or as substantively relevant to the main issues at stake. Second, where they use the case of Nazi Germany, they often do so in the service of theoretical debates between positivism and natural law rather than as an object of inquiry itself. Third, they replicate the rupture thesis, overwhelmingly (with one exception) assuming that the version of Nazi law adopted by the debate is the real, historical version of Nazi law. Nazi Law in The Hart-Fuller Debate in the Twenty-First Century The Cane collection is characterised by matching pairs of essays on different legal subjects and themes as they relate to the issues raised by the Hart-Fuller debate. These include human rights law [4,22], international criminal law [24,29], legal pluralism [6,49], law as a means [19,42], the commensurability of the debate's competing discourses [28,32], the relationship between norms and normativity [30,37], and legal reasoning [2,38]. Within this, references to Nazi law are not completely absent. It is important first to engage in some detail with the one exceptional, albeit brief, analysis of the use of Nazi law in the Hart-Fuller debate, offered by Desmond Manderson [28]. Manderson's main point is that the discourses employed by Hart and Fuller are incommensurable, 'mutually contradictory, and equally necessary' [28: 200], but a secondary aspect of his argument is that Hart and Fuller both misrepresented the nature of Nazi law. For Hart, he says: the appearance of law is all that matters. Politics and history are irrelevant to our inquiry. We rely instead on the simple surface and clear meaning of words -with the result in this case [of the 'grudge informer'] that we are seriously misled as to what those words actually meant to the people who were sentenced to death or … sent to the Eastern front. 
As we read Hart's account of the case, it surely seems plausible that the Nazi regime in fact depended on a kind of blindness to anything but the formal semblance of legality in order to gain legitimacy for its actions. Hart himself refuses to look behind the court's statements and treats as legally sufficient the mere 'tinsel of legal form' [28: 204-205]. This critique centres on the misrepresentation of Nazi law engendered by Hart's refusal to look beneath the surface of the relevant law (in fact he hardly discusses actual Nazi laws at all), and instead only to rely on its formal characteristics to draw his conclusions. The absence of political and historical context actually distorts the meaning of the legal language employed, something to which Hart's analysis seems oblivious. Manderson's critique of Fuller is encapsulated in the following: Fuller does not acknowledge Nazism did not merely corrupt a legal system. It realised a vision of it informed by the anti-positivist ideologies of German Romanticism up to and including Heidegger and Schmitt. Neither does he acknowledge that the problem is not that law-makers might develop an 'immoral morality' or 'a more perfect realization of iniquity', but rather that we disagree about what goodness is in law or in laws. By assuming a core of goodness and a core of evil, which can never be confused, he simplifies the problem which confronts many societies, those who lived during the Third Reich not least [28: 212-213]. This critique centres on Fuller's claim that the Nazi regime simply undermined the pre-existing German Rechtsstaat, and the crude assumption this implies, that we can draw a clear distinction between the good of law and the evil that corrupts law. Again, this results in a distortion of the way the Nazi legal system operated, omitting its role in attempting to realise an alternative vision of legality. The argument Manderson offers with respect to the discourses of the debate is that 'we need both positions to make sense of law, but it is impossible to acknowledge them both at once' [28: 200], which has resulted in a jurisprudential discourse that is both polarised and irreconcilable. This is an important point, and the polarised nature of a jurisprudential discourse centred on the twin paradigms of positivism and natural law as competing conceptions of law in the wake of the Hart-Fuller debate will be discussed further in the context of the current discourse later on in this article. Despite his own observations about the representation of Nazi law in the Hart-Fuller debate, however, three characteristics are apparent in Manderson's own piece, which are repeated in even starker form in many of the other anniversary collection contributions. The first is the absence of any historical evidence or sources for the claims made about the Nazi past. In this case the representation of Nazi law assumed by Manderson's critique has some merit, but no sources are presented to support this view. The analysis of Nazi law is, consequently, inevitably quite brief and superficial, and there can be no deeper or more detailed discussion of the nature of Nazi legality and its relevance for the concept of law.
In other cases within jurisprudence, claims are made about the nature of Nazi law that are less supportable and are used to buttress theoretical arguments to which they have little relation and offer scant evidence, which reinforces the misrepresentation of Nazi law and impacts on our understanding of law both as a normative system and as a social discourse. The second is that Manderson appears to assume that there is no other way of interpreting Nazi law, external to the debate's paradigms, which might pose new or different questions for the concept of law. Despite rejecting both Hart and Fuller's representation of Nazi law, his conclusion is that both are necessary and it is the foil that they each offer one another that energises legal interpretation and moves it forward. While the issues in the Hart-Fuller debate have founded many energetic philosophical debates within jurisprudence in the subsequent period, Manderson's endorsement of this dialogic framework is potentially problematic. The more natural response to the jurisprudential misinterpretation of Nazi law would be to question the ability of positivism and natural law as paradigmatic constructions of the concept of law to come to terms with the real historical case of Nazi law, whereas Manderson comes close to the rationale adopted by the Hart-Fuller debate and jurisprudence, that the key to understanding law lies in the dialectical opposition of positivism and natural law. This is perhaps a product of Manderson's own superficial treatment of Nazi law. A deeper examination of Nazi legal history might lead his argument in the direction hinted at by his critique, that perhaps the real Nazi law says something fundamental about the concept of law that is not unique to the Third Reich and should not be easily dismissed. Ultimately, while his critique is useful, Manderson's contribution is primarily about the relationship between the discourses employed by Hart and Fuller and not about Nazi law and its role and importance in the Hart-Fuller debate. Consequently, he is pulled back into the paradigms employed in the original debate. The final characteristic is that Nazi law is not the main subject of discussion in Manderson's paper. This is best illustrated with reference to the contribution paired with Manderson's, Ngaire Naffine's assertion of the absolute commensurability of the discourses of Hart and Fuller. Naffine's theoretical approach is also quite promising for re-evaluating the role Nazi law plays in jurisprudential discourse, because it claims that in fact a shared understanding of morality underpins the discourses of Hart and Fuller, one based on constructing an opposition between an ideal type of good law and an archetypal evil law, of which Nazi law is the paradigmatic case. This analysis suggests that an alternative understanding of morality might be required to offer a different perspective on the concept of law and, consequently, properly engage with a case like Nazi law. Naffine, however, never really tackles the Nazi case, which is not the main object of her analysis. This is, it is argued, because of its peripheral status and minimal representation in the original debate, which gives the strong impression that Nazi law, despite its apparent import, is not particularly significant for the central concept of law under discussion. 
Within the Cane volume, Manderson's piece, even to the limited extent that it addresses Nazi law, is somewhat isolated, primarily a counterpoint to the themes occupying the other chapters rather than a companion; the only piece that comes close to elucidating anything of an alternative version of Nazi law. That its brief critique of the representation of Nazi law in the Hart-Fuller debate is a marginal issue even within its own dialogue with Naffine's paper, is indicative of the lack of seriousness and depth with which Nazi law is treated within Anglo-American jurisprudence generally; such that even if Hart and Fuller did not quite get it right about the Third Reich, this is not of critical importance when we seek to understand the nature of law and legality, nor should we upset the paradigmatic bases of the discourse or the central issues at play. Other contributions in the Cane collection that at least refer to Nazi law or the Third Reich are Karen Knop's consideration of human rights [22] and Martin Krygier's of transitional societies [23], but in these cases similar characteristics persist. A lack of historical depth and evidence, a marginal role in the argument, and a willingness to accept the framing terms of the Hart-Fuller debate. Knop's chapter is part of a dialogue with Hilary Charlesworth, and analyses international human rights law debate from the perspective of conflicts between two different systems, law and morality, an approach then used to explore conflicts between other different sorts of legal systems [22: 73]. This is applied to Nazi law as a system in conflict with systems that comply with Fuller's conception of legality. While the question of the implications of Hart and Fuller's positions on the conditions of validity for law for different types of legal systems is an important and interesting one, Knop accepts the premise of the Hart-Fuller debate in relation to Nazi law, which is that it is evil law against which a system of just law can be opposed. The case of Nazi law is represented variously as substantively unjust law in the Radbruchian mode, illegality according to Fuller, or Hartian law as a social fact. The Nazi legal system is not analysed in more depth to question these accepted possible interpretations, and reveal some of the legal and moral complexities of the system that may challenge such orthodoxies. Krygier's contribution is about how to achieve the rule of law in transitional societies. He offers something of a critique of Hart's use of Nazi law, similar in its themes to Manderson's in raising Hart's failure to dig beneath the formal surface to analyse the specific nature of Nazi laws, and his exclusion of most substantively significant aspects of the Third Reich to outside of the realm of law: Now one thing distinctive of Hart's approach is that he has almost nothing to say about the context in which the laws he discussed operated. He also says nothing about the character of Nazi laws, the way they were applied, or the specific characteristics and interrelations of the institutions applying them. He does not appear to think it necessary to examine the particular, peculiar, nature of the Nazi legal order or even the particular Nazi laws he discusses, other than to observe that the latter appear to have been formally legitimate, and they were nasty in content. … It's not that what else went on is of no account to him morally, but that he thinks it counts for nothing legally, and he is talking about law. 
Nazis had laws, and they were immoral; not a happy story but a simple one [23: 111]. Krygier's theoretical approach might again be promising for conducting a deeper legal historical examination of the Nazi legal system, to the extent it advocates for the need to find the right balance in a transitional context between adopting a universal approach and treating each case as unique, as well as emphasising the importance of understanding and incorporating the context of the law. However, he does not attempt to analyse Nazi law further himself because, again, that is not central to his essay. Krygier instead tends to assimilate a Fullerian natural law interpretation of Nazi law for his own critique of Hart, including with references to how 'pre-transitional despotisms' instrumentalise law and violate its internal morality [23: 120]. The tendency to use Nazi law to discredit the position of the other side in the positivism-natural law debate, in order to make room for one's own arguments in return, rather than as an authentic comment on the nature of Nazi law, is a common theme within jurisprudential discourse that will be discussed further in Sect. 4 of this article. In these other two contributions, which together with Manderson's comprise the extent to which Nazi law is really debated at all in the Cane collection, the same characteristics are in evidence. There is no historical evidence to back up claims made about Nazi law, and instead the representations of Hart or Fuller in the debate itself are accepted. Even while an external theoretical perspective on the debate is sometimes adopted, often the paradigms of Hart and Fuller, as they represent Nazi law, are taken as the appropriate lens through which to view Nazi law. And Nazi law itself is not the main subject of discussion, so is never subject to the sort of close scrutiny that might challenge some of the claims made in the Hart-Fuller debate about it on a more fundamental level. References to Nazism in other contributions in the collection do no more than indicate Hart and Fuller's own use of the Third Reich, and accept Hart and/or Fuller's representation of Nazi law as accurate even if they scrutinise aspects of the theories they advance in other ways [for example, 38, 42]. Nazi Germany is considered an intriguing and noteworthy background to the debate, but not a relevant issue requiring dedicated re-examination. These criticisms may appear unfair: the book's contributions are focused on re-interpreting the Hart-Fuller debate, and it is natural that they would focus their attention on the theoretical paradigms and key issues of the debate and, to the extent Nazi law is referenced, consider it as it is represented by Hart and Fuller. The failure to return to the core historical example that underpins the debate, nevertheless, merely reinforces the role and representation of Nazi law within the debate, and within jurisprudential discourse since the debate. This role is so marginal and its representation so distorted that it is necessary to question the theories and arguments that are overlaid on top of it and which it is used to support. Instead, even in cases where the theoretical approach adopted in a contribution would be commensurate with a re-examination of the nature of Nazi law and its significance for the concept of law, the representation of the Third Reich within jurisprudence is so entrenched that this does not begin to be realised.
This possibility is not even on the agenda, and that is the omission, the lacuna, at the heart of the anniversary literature. Representing Nazi Law in the NYULR Symposium This omission is even more apparent in the NYULR symposium contributions, which are focused on what happened in the debate itself and revisiting the issues it raised. Its sophisticated re-evaluations of the terms of the debate are (and are intended to be) an internal critique operating within the debate's discursive parameters. One contribution in this collection does focus on the Nazi-related example on which the debate is based; David Dyzenhaus' revisitation of the case of the grudge informer [12]. Dyzenhaus has written extensively on the nature of wicked law and emergency situations [see 10,11,13], but this piece, too, best serves to illustrate how remote Nazi law actually was from the Hart-Fuller debate. The grudge informer cases, of which the case discussed by Hart and Fuller is one example, took place in the post-Nazi context and were about Germans who informed on their partner to the Nazi authorities for violating draconian legal provisions, in order to get rid of them, 1 rather than about the nature of Nazi law itself. Therefore, even while the debate is generally understood as dealing directly with Nazi Germany, this is viewed through the prism of an entirely separate postwar legal case, and has very little to say about the nature of the Nazi legal system in important areas central to its systemic functioning such as constitutional law or the use of law for the persecution of Jews and other groups. The grudge informer case is centrally concerned with, while also at one remove from, a very specific type of law in the Third Reich, that under which the informant's victims were prosecuted, rather than the nature of the Nazi legal system as a whole, and exploration of it is more specifically limited to what a post-war court should do about such cases. It is telling that Dyzenhaus recalls the grudge informer case for its drama rather than for its substance, in order to look again at how positivism and natural law respond to the case, and goes no further into Nazi law than this. This reflects the tenor of the other contributions to the NYULR symposium. Most of the articles focus on the central issues raised in the debate and refer to Nazi law only as the background rather than as an independent subject of inquiry. Their fidelity to the conflict between positivism and natural law at the heart of the debate is reflected in a number of the papers. These include the defences of Hart's approach offered by Benjamin Zipursky [51] and Leslie Green [18], and Frederick Schauer's account of the influence of the 'vehicles in the park' aspect of the debate [41]. Where the debate comes under criticism in the NYULR symposium, the focus is on areas such as the fundamental misunderstanding between its protagonists, and its analytical bias [25: 1062-1063]. Where subsequent jurisprudential discourse comes under scrutiny in light of the undue influence of the debate, this is for its misunderstanding of Fuller's natural law theory [12], or its preoccupation with the issues raised by the debate, to its own detriment [48]. Some of these criticisms are certainly well founded, but their weddedness to one or other of the positions advocated by Hart and Fuller typically results in a version of a re-run of the debate. 
It is this jurisprudential preoccupation with the conflict between natural law and positivism, the validity question and the separability question that has resulted in the enduring misrepresentation of Nazi law within the discourse. For positivism, it is the archetypal wicked legal system that demonstrates law can be both valid and used for evil ends, but says nothing more about law because all of its other features fall outside the realm of law. Meanwhile, positivist analysis hardly touches on the legal provisions themselves and certainly does not explore underneath their surface. For natural law, it is the archetypal wicked legal system that demonstrates law can be so unjust as to be invalid, non-law; but as an extreme case of non-law it need not trouble the concept of law any further. Both theoretical paradigms rely on a strikingly similar understanding of the Nazi legal system, one that is historiographically problematic and finds its jurisprudential roots in the Hart-Fuller debate. The NYULR Symposium and the Cane collection both in important ways reproduce fundamental features of the way the Hart-Fuller debate represents Nazi law, and also reflect jurisprudential discourse in these aspects. Nazi law is marginalised from relevance to the concept of law, its representation is frequently adopted from the debate without question or reference to historical sources, and it is not considered an aspect of the debate worthy of re-examination. Consequently, the potential of the real case of Nazi law to usurp its marginal position within jurisprudential discourse, and challenge the role it is given in support of both positivism and natural law, is perpetually unrealised. There is an absence within the literature where the historical case of the Nazi legal system-which has increasingly come to light in more recent historical studies-might otherwise be, and it is a lacuna founded in the Hart-Fuller debate. Hart-Fuller and the Representation of Nazi Law It has been noted that the anniversary literature reinforces a number of characteristics of the Hart-Fuller debate's representation of Nazi law. Nazi law was not discussed in great depth, particularly by Hart; where it was, its nature and characteristics were often misrepresented and their implications consequently misinterpreted; it was marginal to the main issues in the debate, which was demonstrably not about Nazi law; and the way it was used rendered it substantively irrelevant to the debate's key questions-the conditions of validity for law and the connection between law and morality-as well as the protagonists' key conclusions about which theoretical paradigm is to be preferred. It was an intriguing, somewhat alien, backdrop against which to set a dispute about whether positivism or natural law best explains the concept of law, but not a key player in the debate itself. The context and key terms of the Hart-Fuller debate are well known and do not need repeating, so this section will instead focus on the role and representation of Nazi law within the debate. Through this evaluation, it is argued that the debate's profound influence on jurisprudence extends to the trivial role played by Nazi law within the discourse, as an example used to support one or other strand of argument within the paradigms of positivism and natural law, and the reproduction of its historically uninformed misrepresentation. The presence of these elements within the current discourse will then be considered, in the next section. 
It is, however, helpful to outline in the briefest terms the high-level arguments of Hart and Fuller in order to assess their relationship to Nazi law. Hart claimed law is a social fact validated by its promulgation according to procedures laid down by higher legal rules within the legal system (note 2), and is thus amenable to abstract, conceptual analysis. It contains within it a stable core of purely legal interpretation not determined by reference to external, non-legal factors. This means that law is separable from morality and is susceptible to use for both moral and immoral ends. It consequently does not rely on morality as a condition for its validity, which is instead determined by other, formal and procedural features of the legal system. This is presented as the most accurate description of the concept of law, and also the most desirable: it is best to separate law from morality, as the latter can then be discretely adjudicated by agents within the legal system, and applied as an external standard of criticism to the law. As a system it is dependent only on very minimal and contingent principles of morality for its functional existence, relating to a prohibition on violence and a minimum protection of property rights, and the principles of objectivity and neutrality in the administration of the law - the idea of treating like cases alike [20: 623-624]. By contrast, Fuller argued that the higher systemic rules that allow laws to exist and encourage people to honour them can be considered moral rules. These procedural rules work within law and manifest an intrinsic connection between law and morality. Compliance with them is also a condition of validity for a legal system, and therefore it is possible for law to be so unjust as to be considered invalid as law, or non-law. Fuller also observed an implicit connection between coherence and morality, such that legal rules that are more coherent, in line with the principles of the 'inner morality of law', are also more likely to be moral rules and less likely to be susceptible to manipulation for nefarious purposes. The real danger for Fuller arises when law and morality are disassociated because it makes it possible to resort to mere formalism in order to justify the validity and application of wicked laws (note 3). For Fuller, this theoretical paradigm represents both the most accurate and desirable way of conceptualising law, especially in the wake of the Third Reich. (Note 2: Hart did not describe in detail his concept of primary and secondary rules, and particularly rules of recognition, in his Harvard Law Review article; see [21]. Note 3: Scholars such as Vivian Grosswald Curran have convincingly challenged the claim that the influence of formalism on jurists in the Nazi state caused the acquiescence of the legal profession to its barbarism. See [5].) The case of the grudge informer referred to by Hart and Fuller involved a woman denouncing her husband to the authorities under the Nazi regime for private remarks he had made about Hitler. The woman reported his remarks apparently with the intention of getting rid of him. He was convicted and sentenced to death for undermining the regime, according to Nazi statute law [12: 1004]. After the war, the woman was convicted by a provincial court of appeal in the Federal Republic of Germany (FRG) in 1949, for the unlawful deprivation of her husband's liberty [34: 263].
In the initial debate, Hart's representation of the reasoning of the German court was inadvertently incorrect due to an erroneous case report. He understood the court in question to have invalidated the relevant Nazi law on an essentially Radbruchian basis because of its degree of substantive injustice. In fact, the Nazi statutes were upheld by the FRG court, but the woman was convicted because of her personal motivation for denouncing and her awareness of the serious consequences of her actions [34]. The conclusions of Hart and Fuller's deliberations about the nature of the concept of law are focused on what would be the most appropriate way for the court to respond to a situation where an undesirable law was used in the past to achieve unjust ends. However, the case really only represents a framing of the arguments involved, and does not offer a transparent window on the nature of the Nazi legal system. The fact that the misreading of the grudge informer case has no impact on their ultimate arguments exemplifies its lack of direct relevance to the arguments made. The secondary background case, that of Nazi law itself, is of even less significance to the main points of contention in the debate. This is evident from both the relative lack of attention given to the Nazi legal system, particularly by Hart, and the fact that the actual characteristics of Nazi law, over which Hart and Fuller do not disagree, make no difference to their arguments. Their use of Nazi law is determined by their desire to advance a claim about the nature of the concept of law based on respectively positivism and natural law. Hart claimed to be interested in Nazi Germany because it 'prompts inquiry into why emphasis on the slogan ''law is law,'' and the distinction between law and morals, acquired a sinister character in Germany, but elsewhere … went along with the most enlightened liberal attitudes' [20: 618]. However, the first misinterpretation of Nazi law is already apparent here. Despite some claims to the contrary, there was not a clear distinction between law and morality in Nazi Germany, and so law was not actually 'law' in the sense implied [35,36,45]. This is an example of Hart basing his analysis of Nazi law on unexamined claims about its nature. His assertions that 'law was law' in the Third Reich, that the Nazi legal system conformed to his formal notion of legal validity, and that the system as a whole could be encapsulated by the case of the grudge informer, are not referenced to historical or historiographical sources and do not fit well with the prevailing historical evidence. Fuller raised this in his criticisms, that Hart 'assume[d] that something must have persisted that still deserved the name of law' without inquiring further into the Nazi legal system [17: 633]. He considered it 'seriously mistaken' to make the assumption 'that the only difference between Nazi law and, say, English law is that the Nazis used their laws to achieve ends that are odious to an Englishman' [17: 650]. However, while Fuller was more interested in the content and application of Nazi law, at least the particular laws subject of the FRG court case, how Nazi law actually worked does not really impact on the arguments made within their respective theoretical paradigms. 
For Hart, whether or not Nazi Germany ultimately had a positivist, formalist approach to law and legal interpretation would not alter his position, because he was concerned in this contribution with abstract, conceptual analysis and not sociological, historical analysis. Equally, any moral content of Nazi law would not be considered relevant to its conceptual nature: law and morality are separable but not always separated. Fuller did delve a little further into some Nazi laws, and gave examples to illustrate principles considered to be utilised in the Nazi legal regime. These included retroactive legislation and secret laws, both of which were employed in the Third Reich, and Fuller argued that their pervasiveness in Nazi Germany erodes and compromises the validity of its legal system [17: 650-652]. He also cited the tendency to bypass law entirely and resort to street violence, and the willingness of the courts to ignore legislation 'if this suited their convenience or if they feared that a lawyer-like interpretation might incur displeasure "above"' [17: 652]. Consequently he appeared to recognise that it is not so simple to say that 'law is law' in Nazi Germany, because Nazi judges could upturn the literal meaning of the law and by-pass legal forms if necessary. Taken together, Fuller argued that these characteristics meant the denigration of the morality of law was so complete that the Nazi legal system could not be called law at all:
When a system calling itself law is predicated upon a general disregard by judges of the terms of the laws they purport to enforce, when this system habitually cures its legal irregularities, even the grossest, by retroactive statutes, when it has only to resort to forays of terror in the streets, which no one dares challenge, in order to escape even those scant restraints imposed by the pretence of legality, when all these things have become true of a dictatorship, it is not hard for me, at least, to deny to it the name of law [17: 660].
However, at the same time, there are important underlying similarities in the way Hart and Fuller characterise the Nazi legal system. Notwithstanding the above, Fuller insists that the regime clothed itself in legal form, and actually reiterates the assumption that 'law was law' in the Third Reich. He uses this to maintain that positivism contributes to the corruption of law while natural law acts as a barrier against it, without addressing in this context the challenge posed to this assertion by aspects of his own analysis: law is not simply law in a system where its meaning and forms could be readily by-passed. Meanwhile, Hart's methodological disinclination to engage with the history of Nazi Germany means he assumes it to be a formally valid legal system that, like any other legal system, largely inhabits the settled core of meaning within law, with only the usual minority of hard cases falling within the outlying penumbra. Fuller does not challenge Hart's claim that the Nazi legal system did exhibit the characteristics of a formally valid legal system; that laws were created in a certain way using certain procedures inscribed within the legal system. Hart, on the other hand, is not concerned by Fuller's claim that Nazi laws were substantively unjust, or that they had certain undesirable formal characteristics. Aside from this, there are some important ways in which Fuller inadvertently misrepresents the nature of Nazi law.
The first, already mentioned, is that Nazi law aimed to realise an alternative vision of law, infused and informed by ideology, rather than merely instrumentalising and undermining the rule of law for immoral and oppressive purposes [45]. The laws Fuller looks at mean that-while he acknowledges that their literal meaning was sometimes ignored, or their requirements by-passed, and that retroactivity and secrecy were employed within the legal system-he implies that these characteristics existed simply as a matter of corrupting the legal system, to retain power and repress dissent. He does not appear to consider that these characteristics were symptoms of the implementation of a Nazi ethic, including a legal ethic, that sought for a positive (as in proactive) change in the relationship between law and morality, and not simply of an effort to do whatever was necessary to retain power [see 1]. A more thorough examination of Nazi legality, beyond the laws that were pertinent to the FRG case, would have revealed a different picture to that painted by Fuller. The second misrepresentation is relevant to the association Fuller makes between coherence and morality. This is important for Fuller because the claim that procedure tends towards coherence which tends towards morality, links his procedural principles of legality to the conditions for fidelity to law. Fuller uses this to argue that the situation of a judge falling back on strict formalistic interpretation to resolve her fundamental moral aversion to a binding superior court ruling would be unlikely to arise in a nation 'where lawyers are still at least as interested in asking ''What is good law?'' as they are in asking ''What is law?''' [17: 648]. However, this defence against immorality is reliant on a clear distinction being drawn between morality and immorality, that legal actors are able to recognise 'good law', possibly because of its correlation with coherent law. The real, historical tension under the Nazi regime between those actively striving for a Nazi vision of 'good law' and those who felt that the denigration of liberal principles of legality already ruled out the possibility of good law, however, means that it was not always easy to recognise what was good law, because of the ideological conflict internal to the system. Yet many Nazi lawyers were very interested in asking about 'good law' and not very interested in what was simply 'law', even though some had a very different idea of 'good law' to that endorsed by Fuller. 4 This fundamental feature of the Nazi legal system is apparently not recognised by Fuller at all. The connection between coherence, morality and law asserted is not straightforwardly supported by the Third Reich, because Nazi law was infused with a different vision of natural law, not merely a corruption of the rule of law. For similar reasons it does not unproblematically substantiate the claim that formalism and positivism are more consistent with wicked law than natural law. Fuller's claim about coherence is under threat because his conception of how morality functions in a system that strives for 'good law' is not necessarily corroborated by the historical case from which he purports to draw his supporting evidence. Equally, Hart's acceptance of Nazi law as the cynical manipulation of the principles of legality for oppressive purposes by the Nazi elite is illustrated by his contention that 'under the Nazi regime men were sentenced by courts for criticism of the regime. 
There the choice of sentence might be guided exclusively by consideration of what was needed to maintain the state's effective tyranny' [20: 613]. The assertion that punishment for criticism of the regime might be guided exclusively by tyranny is also not corroborated by any historical evidence. It is not possible to draw a conclusion about the sentencing policy of the whole regime on the basis of one example, that of the 'grudge informer's' victim, from a particular time, and in any event an unsubstantiated one. The type of defendants, the harshness of sentencing and the motivations for sentencing changed over time based on the fluctuating circumstances of the regime, and cannot be said to be guided exclusively by the need to oppress and retain power, even while acknowledging that this would have been one element. One of the key problems with the way the Hart-Fuller debate represents Nazi law, which also comprises one of its enduring legacies for jurisprudential discourse, is in its reliance on a moral and/or legal discontinuity between Nazi law and the central concept of law. This assertion of discontinuity is at the heart of the rupture thesis, and results within jurisprudential discourse in the use of Nazi law as a paradigmatic wicked legal system, a limit-case set-out in order to demonstrate the veracity or alternatively the erroneousness of aspects of positivism and natural law. The jurisprudential combination of ahistorical methodology and commitment to particular paradigms of the concept of law and issues for consideration, mean that there is no attempt to question whether the case of Nazi law really can be explained by, or used to support, the positions advanced within these parameters. Fuller asserts a clear legal discontinuity, or rupture, between Nazi 'law' and the concept of law, in the straightforward sense that the latter has the characteristics of valid law whereas the former did not. This approach leads to a series of further denials about the nature and implications of Nazi Germany, chief among which is that law was in any way complicit with the Nazi regime. This carries with it a range of implications: that less wicked forms of law have nothing to do with Nazi law, and that law, good law, was able to return after the fall of the regime in 1945, to condemn the Nazi perpetrators and 'reconsecrate the temple of justice' [50], saving us from the wicked aims of a criminal regime [see 15,16]. Fuller's approach also helps to erect a sort of moral, and therefore legal, barrier between the rule of law legality that conforms to Fuller's inner morality, and underpins many modern legal systems, and Nazi law. The correlation between good law and the rule of law is matched by condemnation and crucially theoretical exclusion of the Nazi legal system as a whole. Hartian positivism is also disconnected from the Nazi legal system, notwithstanding its formal embrace of Nazi law as law and, counterintuitively, this rupture is the more apparent in the anniversary literature [e.g. 23]. Hart's justifiable moral aversion to Nazism works together with his analytical methodology to keep the realities of Nazi law at arm's length. This is exacerbated by the fact that Nazi law is so remote from Hart's arguments that any wicked legal system would suffice for his purposes, which disregards the specificities of Nazi law. 
Hart is interested in the fact that Nazi law is an example of such extreme evil and yet can still be a case of valid law, as a result of the separability of law and morality, but he does not test the nature of the relationship between law and morality within the Third Reich, nor even whether Nazi law actually conforms to the procedural or (minimal) substantive validity requirements of Hartian positivism. He is simply not interested as a matter of law in the moral/ideological aspects of the Nazi legal system, and so it is almost a premise of his argument that a regime like Nazi Germany can be by-passed without deeper examination. On closer inspection, however, there are good reasons for arguing that Nazi law does not comply, at least not straightforwardly, with the sociological aspects of Hart's version of positivism: the minimal content of natural law and the operation of the rule of recognition to validate the other laws within the system [45]. And such is the connection between law and morality (the Nazi ethic) that there is reason to believe Nazi law has something to add to the debate about the separability of law and morality too. Hart's legal and moral approach shuts down these avenues of inquiry before they have begun, which constructs a discontinuity between Nazi law and the concept of law that is the object of Hart's analysis. For Hart the continuities with Nazi law are limited to nothing more theoretically substantive than a semblance of legal form. For Fuller, continuities, whether legal or moral, do not appear to exist at all. Both Hart and Fuller in their standpoint on Nazi law appear to be rooted in paradigms of the concept of law founded upon, and primarily suited to evaluation of, liberal, democratic, rule of law-based legal systems. This is more readily apparent in Fuller's case because the morality of the 'good law' to which he opposes the Nazi system is closely related to liberal principles of legality and the eight principles of the inner morality of law he subsequently described are key tenets of the rule of law. In Hart's case, as Naffine has argued, he too is on the same moral page, concerned with opposing an ideal good law and an archetypal evil law [32], whether or not the latter is bestowed the formal status of law. The failure to take a broader perspective on the nature of unjust law and of law in the Third Reich in particular, that moves outside of the rule-of-law paradigm of law, is ostensible in many of the contributions to the anniversary literature as well as in subsequent jurisprudential discourse. The insertion of points of legal rupture around Nazi Germany prevents us as a matter of legal theory from examining the Nazi regime further to see whether there is anything in its use of law and legality that ought to concern us about the concept of law generally. Its cursory treatment in the Hart-Fuller debate and consequent marginalisation, which is demonstrated to have endured by the anniversary literature and will be assessed in subsequent jurisprudential discourse below, means it is not taken seriously as a matter of law [27]. This leads on to the use of Nazi Germany as a limit-case, not to test the concept of law, but to prove one or other concept of law. If Nazi Germany is seen only as a limit-case-the absolute paradigm of an archetypal wicked legal system-then it can also be treated as, at most, a peripheral example of law. 
Once it has been subjected to scrutiny once and found to be at the outer limits of law, then it does not really merit further jurisprudential consideration. However, the Hart-Fuller debate, it is argued, does not properly consider the Nazi legal system in order to establish that it is a limit-case, and nor do jurisprudential scholars today. Such is the notoriety of the Nazi regime, that the starting point is that it is a case at the margins of both morality and legality, and this is never really questioned. The only question that relates to Nazi law is whether it can possibly be law, and this has been answered and is rarely opened up for further examination. What is left for jurisprudential discourse today is a lack of historical rigour in references to the Third Reich, and the outrageous moral potential of Nazi-related examples. Representing Nazi Law in Current Jurisprudential Discourse Throughout this paper, certain key characteristics of the representation of Nazi law in the Hart-Fuller debate and in the 50th anniversary literature are highlighted as also being present in general jurisprudential discourse which, it has been claimed, represents Nazi law as a limit-case, the absolute paradigm of an archetypal wicked legal system. These characteristics include representation of the Third Reich without reference to supporting historical or historiographical evidence; resort to Nazi law generally to discredit an opposing claim or further one's own argument about an element of positivism or natural law when it has no real connection to the theoretical arguments made; its use primarily as a source of evil examples or hypothetical scenarios; and its marginalisation from substantive relevance to questions about the concept of law. It has also been claimed that jurisprudential discourse is both polarised around the twin competing theoretical paradigms of positivism and natural law and continues to be primary concerned with the validity question and the separability question. These characteristics will now be briefly explored for their presence in jurisprudential discourse generally in recent years, in order to demonstrate the ongoing reproduction of key features of the Hart-Fuller debate within the discourse, primarily in terms of its treatment of Nazi law, but also secondarily in terms of how the discourse is structured and operates to reproduce this treatment. The scope of this article does not allow for a comprehensive analysis of representations of the Third Reich within current jurisprudential discourse, or a detailed examination of the representation of Nazi Germany within a particular example of scholarship. Nevertheless, a review of how Nazi Germany is treated in jurisprudential scholarship in recent years enables the selective elucidation of certain characteristics and highlights points that are illustrative of the ongoing marginalisation of Nazi law. 5 It is evident that validity and separability continue to remain key jurisprudential questions within the literature and positivism and natural law, in various forms and versions but with Hartian positivism and Fullerian natural law still prominent examples, are the overarching paradigms within which much of the debate is held. 
What is more, the combination of the focus on these issues and the dominance of these paradigms has resulted in the proliferation of a large number of sub-questions and issues, each of which is intensely debated between proponents of the two theories of the concept of law, and which feed into the broader struggle for supremacy between the two paradigms. Within these discursive parameters, the case of Nazi Germany continues to have a strong presence, being referred to fairly often, if only typically in passing. Interestingly, however, much as with the anniversary literature, there are very few references to the Nazi legal system or Nazi law specifically, and those that there are tend to subsume Nazi law within the broader category of wicked legal systems, albeit as the archetypical example. The fact that Nazi Germany in general is represented much more commonly than its legal system specifically, begins to explain the extent to which Nazi law is treated as irrelevant to the substantive theoretical issues under discussion in most cases where the Third Reich is referenced. When jurisprudential texts refer to Nazi Germany, they generally do so because of its status as an especially evil historical subject, and as such it is a rich source of extreme examples which, because of the moral weight it carries, can be used simply and effectively to support a particular theoretical position or, more often, discredit an opposing argument. Indeed, it is common for representations using terms such as 'Nazism' or 'Hitler' to be referring to hypothetical scenarios rather than actual examples, with certain words employed to convey moral weight without the historical reality of the Third Reich being engaged at all. In this context, and with respect to the analytical methodology prevalent within the discourse, it is not surprising to find that historical and historiographical sourcing for claims about Nazi Germany and its legal system are virtually nonexistent within the discourse. The various references to, representations of, and examples extracted from Nazi Germany are as substantively divorced from the historical situation as they are from the philosophical issues under discussion. Indeed, it is not generally the point of such references to capture the reality of the Third Reich or to investigate Nazi law in order to say something profound about the concept of law. Their purpose is much more instrumental, and treats any relevant implications of the actual case of Nazi law as at best a marginal issue. This is possible, it is argued, because of the way the Hart-Fuller debate directed the current discourse at its inception over 50 years ago. The Hart-Fuller debate resolved the question of the relevance of Nazi law for modern jurisprudence with the answer 'not very', by settling its status as a marginal case of wicked law with little of relevance to contribute to the central concept of law. It also set the pattern for the treatment of Nazi law in its relative lack of historical engagement with examples of law from the Third Reich and their inability to concretely influence the central arguments being advanced by its protagonists. 
This position within the debate itself is apparent in the absence of reflection on Nazi law within the anniversary literature examined above, and now understanding that Nazi law is not to be taken seriously as a jurisprudential matter [27], current jurisprudence has reproduced and exacerbated its marginalisation by both unquestioningly accepting this status and being intrinsically uncommitted to revisiting it. This section has begun to show the extent to which the representation of Nazi law in the Hart-Fuller debate continues to be reproduced in jurisprudential discourse today. The treatment of Nazi law, when it is considered at all, is as the paradigmatic archetypal wicked legal system, with the Third Reich generally used as a source of extreme examples and evil hypotheticals to support intricate theoretical arguments over various sub-strands of the validity question and the separability question, in the service of the overarching paradigms of positivism and natural law. Such representations are rarely historically or historiographically sourced, often unconnected with the reality of the Third Reich, especially in the case of hypothetical scenarios, and not substantively or directly connected to the theoretical issues being discussed. The reproduction of key tenets of the way Nazi law was dealt with in the Hart-Fuller debate leads to its ongoing exclusion from the realm of the concept of law within Anglo-American jurisprudence, and has contributed to the general prevalence of the rupture thesis within the legal academy's dealing with Nazi Germany. Conclusion: Nazi Law, Positivism and Natural Law It is important to uncover and challenge the prevalence of the rupture thesis within jurisprudential discourse, and the enduring influence of the Hart-Fuller debate on the representation of Nazi law, for three broad reasons. Firstly, from a historical perspective, because legal theory can help us to better understand the nature and operation of the Nazi legal system and the role it played in structuring society and implementing the worst excesses of the regime. Secondly, from a legal-theoretical perspective, because historical cases of law, and specifically the Nazi legal system, do have something relevant to contribute to our theoretical understanding of the concept of law. Thirdly, with reference to legal discourse, because Nazi Germany was used, and continues to be used, in the service of philosophical arguments about the nature of law that it simply does not support, both by positivists and natural lawyers, whereas it could instead be used as evidence to disrupt the dialogic dominance these paradigms currently enjoy. The historical case of Nazi law that has increasingly become apparent through historical and legal historical research into the Third Reich in recent decades, has implications for both philosophical paradigms. For natural law, one of these is that the very prominent ideological, not to say ethical, content of Nazi laws challenges some of its central features, notably that there is a recognisable morality associated with law that can be used to resist wicked regimes, and, with regard to Fuller in particular, that coherence, morality and law have a correlative connection. 
For positivism, the Nazi legal regime does not straightforwardly adhere to Hart's minimal content of natural law or his procedural mechanisms for ensuring validity, and normative ideology becomes such an important part of Nazi law that it is questionable what it means for law and morality to be separable in this specific context. It may be argued that Nazi law is not a helpful case for either Hartian positivism or many versions of natural law, because it defies many of the tenets that underlie both paradigms and the merits of a discourse structurally based around the opposition between the two. A discourse that is founded on and rooted within a liberal, democratic, rule-or-law understanding of the concept of law is arguably inherently unsuited to evaluating the nature of extreme unjust law that departs from this paradigm. This is particularly apparent when the case of unjust law in question is manifested in a previously democratic legal system that was itself a strong exponent of the Rechtsstaat. The discourse perhaps does not intend to connect with unjust law-or legal systems based on different paradigms of good law-in the rest of the world but it encounters a potentially impassable obstacle in such a grotesque wicked legal system having persisted right within its intended sphere of explanatory influence. In that sense Nazi law is a limit-case for Anglo-American jurisprudence, because it challenges the assumptions of its discourse at every turn, and disrupts the established role it has been assigned within it. There is a strong connection between the role and representation of Nazi law in the Hart-Fuller debate, and that in the anniversary literature and subsequent jurisprudential literature. Given the enduring significance of the debate, it is no coincidence that the contributions in the two anniversary collections analysed herein do not focus much attention on Nazi law, attempt to re-evaluate its significance for the concept of law, or make much reference to historical sources to support their critique of Hart's and Fuller's treatment of Nazi law on the few occasions such a critique arises. While the historical Nazi case is asserted at the heart of the Hart-Fuller debate, the actual weight it is given in the arguments supporting the two paradigmatic theories of the concept of law advanced in the debate is minimal, and its systemic legal nature is never properly examined. This is, in fact, not primarily a criticism of Hart and Fuller because at the time of the debate a lot less was known about the Nazi regime than is now known, and the idea of the Third Reich being a lawless and criminal state was prevalent in the post-war period [see 15,16,33,39]. However, it is a criticism of the discourse because the role and representation of Nazi law in the Hart-Fuller debate has become entrenched within jurisprudence to such an extent that even a series of papers analysing all manner of aspects of the debate do not have much to add about the Third Reich and its legal system. It is not of contemporary relevance because its potential for jurisprudence was considered exhaustively examined in the original debate itself. To conclude, we can return to two aspects of DeCoste's comments about Nazi Germany and the legal academy noted earlier. These are first the claim that the Hart-Fuller debate was little more than a 'misdirected concession' in terms of its engagement with Nazi Germany and is 'unaccountably influential' as such. 
Second is that the Third Reich has relevance for jurisprudence beyond the question of whether its legal system was or was not 'law', because of the 'centrality of law and lawyers' in Nazi Germany. In the case of jurisprudence, these two points are inextricably entwined. The Hart-Fuller debate has proved to be of such enduring significance within jurisprudence that its 'misdirected' quality in respect of the Nazi past set the discourse off on a path that viewed Nazi Germany as substantively irrelevant to the key theoretical questions about the concept of law that have come to dominate the discourse. This enduring impairment is important beyond the mere fact that it tends to exclude consideration of National Socialism from any more than a peripheral role in jurisprudence. The historical reality of the functioning of law in the operation of the Nazi state does upset attempts to label the whole system either simply 'law' or 'non-law' and in either case of no further substantive interest for the central concept of law. It also challenges the discourse at a deeper level in exposing contradictions in the debate between natural law and positivism and calling into question whether these are the most effective theoretical paradigms for constructing and evaluating the concept of law in wicked legal systems. In addition to the treatment of Nazi law, others have drawn attention to the Hart-Fuller debate's significance in relation to Nazi Germany. Thomas Mertens has said: In the Anglo-Saxon world, the discussion on the 'legality' of Nazi Germany or the lack thereof took place primarily within the confines of the Hart/Fuller debate for a very long time. This meant that it could safely be isolated and that legal theory could restrict itself to the rule of law as something primarily 'good' [31: 539]. The isolation of Nazi law from the substantive theoretical issues that occupy the concept of law, and the accompanying ability of both positivism and natural law to distance it from unjust law and maintain the connection between law, the rule of law and positive morality, is one of the main points elucidated in this article. Equally, Kristen Rundle has expressed concern over 'the extent to which debates on the question of the separability thesis have focused on the example of the ''wicked legal system'' when mapping the territory that divides the respective philosophical camps' [40: 433]. According to Rundle's critique, the positivist formal embrace of morally questionable regimes such as Nazi Germany means they have been incorporated into a discourse fixated on the separability question and dictated by the positivist standpoint, leading to 'a lowest common denominator level of debate' [40: 433]. This has resulted in other philosophical positions, specifically Fullerian natural law, being side-lined. Rundle's concern is that Nazi Germany, as the paradigmatic wicked legal regime, has played too great a role in jurisprudential discourse and been overemphasised in mapping the differences between the two jurisprudential camps. While Rundle's general criticism of the disproportionate influence of the Hart-Fuller debate is well founded, her concerns about this appear somewhat misdirected. 
The debate is often of the lowest common denominator because it relies on a superficial (mis)representation of the Third Reich as a wicked legal system, not because, as Rundle appears to suggest, the Nazi legal system is overused within the discourse and exists at the margins of law and so is not the best case from which to explore the issues at play. It is not, per se, that there has been too much of Nazi Germany in the discourse, skewing it towards simplistic versions of how a wicked legal system impacts on key debates by virtue of its inherent wickedness. Rather it is, as this article has sought to demonstrate, that there has been too much acquiescence to a particular misrepresentation of Nazi law, and not enough rigorous engagement with the law and history of National Socialism. The Hart-Fuller debate's profound legacy for jurisprudence is apparent in its enduring fascination, manifested by the extensive 50th anniversary literature and ostensible in the way the key issues of the validity question and the separability question and the opposition of positivism and natural law have continued to permeate the discourse. However, its profound legacy for the jurisprudential representation of Nazi law is also evident in the same places. The Holocaust, the culmination of Nazi ideology, policy and law, is considered an abyss because of its earth-shattering and perceived incomprehensible nature. This abyss too often leads us to posit a breach between the Third Reich and 'normal' law and history, and should not result in a lacuna in jurisprudential discourse, where the rupture thesis predominates the representation of Nazi law and few serious attempts are made to understand the relationship between law, legality and morality in the Nazi regime, and its implications for the central concept of law.
Particle actions and brane tensions from double and exceptional geometry
Massless particles in n + 1 dimensions lead to massive particles in n dimensions on Kaluza-Klein reduction. In string theory, wrapped branes lead to multiplets of massive particles in n dimensions, in representations of a duality group G. By encoding the masses of these particles in auxiliary worldline scalars, also transforming under G, we write an action which resembles that for a massless particle on an extended spacetime. We associate this extended spacetime with that appearing in double field theory and exceptional field theory, and formulate a version of the action which is invariant under the generalised diffeomorphism symmetry of these theories. This provides a higher-dimensional perspective on the origin of mass and tension in string theory and M-theory. Finally, we consider the reduction of exceptional field theory on a twisted torus, which is known to give the massive IIA theory of Romans. In this case, our particle action leads naturally to the action for a D0 brane in massive IIA. Here an extra vector field is present on the worldline, whose origin in exceptional field theory is a vector field introduced to ensure invariance under generalised diffeomorphisms.
Introduction
Massless particles in n + 1 dimensions give rise on Kaluza-Klein reduction to massive particles in n dimensions. Consider the action for an n + 1 dimensional massless particle, writing the n + 1 dimensional metric in terms of an n-dimensional metric, g µν , an n-dimensional vector, A µ , and an n-dimensional scalar φ. We reduce assuming none of these fields depend on Y , so that it is a cyclic coordinate. We can eliminate it from the action by defining the Routhian, or partial Hamiltonian, where the momentum conjugate to Y is P_Y = λφ(Ẏ + A_µ Ẋ^µ). If we write the action as S = ∫ dτ (Ẏ P_Y − H_Y), then the Y equation of motion is Ṗ_Y = 0. Writing p for the constant value of P_Y, and then integrating out the Lagrange multiplier λ, we obtain a massive particle in n dimensions, whose mass is given by the asymptotic value of φ^{-1} p².
Let us run this argument backwards. Given an action of the form (1.3) for a massive particle in n dimensions, one can encode the mass in terms of an auxiliary worldline variable, Y , using an action of the form (1.1). Then this action can be given a higher-dimensional interpretation. In string theory, the above thinking is used to give the D0 brane an M-theory origin as arising from 11-dimensional momentum modes. Further reduction leads to more massive particle states arising from strings and branes wrapping compact cycles. On toroidal reductions, these particles will form multiplets of a duality group, G. In this paper, we will seek to understand a Kaluza-Klein-esque oxidation of these particles, where the higher-dimensional theory will appear to exist in more than 11 dimensions. The masses of the particles, or equivalently the tensions of the branes from which they arise, are encoded very simply in the radii of the extra dimensions.
These ideas have antecedents going back many years. A Kaluza-Klein origin for string and brane tensions was investigated in [1][2][3]. The idea followed is to replace the tension of a brane with a (1 + p)-dimensional worldvolume with a dynamical p-form field living on the worldvolume. In the case p = 0, for particles, there is a natural interpretation of this extra field as a higher-dimensional coordinate.
This interpretation is not so clear for p ≥ 1. However, this approach leads to some nice results. For instance, in the IIB theory, for p = 1, the resulting tension 1-form can be combined with the worldvolume gauge field living on the D1 brane worldvolume to provide an SL(2) invariant description of the F1 and D1 [4,5]. This approach can be generalised to SL(2) invariant actions for particles in 9 dimensions [6] and hence for more general SL(2) invariant brane actions in type IIB [7]. Indeed, the starting point for the investigations described in this paper was to use the results of [6] for SL (2) invariant particles in 9 dimensions to guess the form of a general action for particles in n dimensions invariant not under SL(2) but under some larger duality group G. This action is: (1.4) JHEP10(2017)004 Let us explain what appears. We have a multiplet of particles transforming in a representation R 1 of G. The vector field A µ M is also in the same representation, and we have introduced charges -or generalised momenta -p M , transforming in the representation conjugate to R 1 . Instead of the single Kaluza-Klein scalar φ appearing in (1.3), we have a set of scalars encoded in a generalised metric, M M N , which is constrained to parametrise a coset G/H, where H is the maximal compact subgroup of the group G. We will check in section 2 that this action reproduces the particle actions obtained by dimensional reduction of various brane actions exactly as expected. To give this action a higher-dimensional interpretation, we will encode the charges p M in terms of auxiliary worldline scalars, Y M . This can be done using the action where λ is a Lagrange multiplier. We can treat the Y M as cyclic coordinates in a manner identical to that used above. The conjugate momenta are We calculate the Routhian given by Legendre transforming the Lagrangian L with respect to Y M but not X µ , and then trivially rewrite the action as S = dτ (−Y MṖ M − H Y ). Now Y M appears only as a Lagrange multiplier enforcing the fact that P M is constant. We therefore replace P M = p M , with p M constant, so that which after integrating out λ corresponds to (1.4). The form of the action (1.5) suggests an interpretation in terms of a larger space with coordinates (X µ , Y M ), with a metric apparently defined by (g µν , M M N , A µ M ). It would be surprising if there was a conventional higher-dimensional description, as the number of coordinates involved will be greater than 11. Instead, we will argue for an interpretation in terms of the structures appearing in double field theory/exceptional field theory. These theories are reformulations of supergravity involving the set of G-covariant coordinates (X µ , Y M ), with the underlying symmetries including "generalised diffeomorphisms" which realise local G transformations. Recall that global G is the duality group on reduction to n dimensions on a D-torus. In double or exceptional field theory in general, one should really not call it a "duality group" -duality is a statement about symmetries in certain backgrounds, such as those corresponds to toroidal reductions -but perhaps one can refer to it here as the generalised diffeomorphism group. It plays the same role as GL(D) in general relativity. A key property of generalised diffeomorphisms is that they do not form a consistent algebra unless the dependence of fields and gauge parameters on the extra coordinates Y M is restricted. 
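The display equations referred to in this introduction are omitted here. A minimal reconstruction of their likely form, with normalisations assumed so as to match the inline formula P_Y = λφ(Ẏ + A_µ Ẋ^µ) quoted above (the precise factors are an assumption, not taken from the original), is:

```latex
% (1.1)-(1.2): massless particle in n+1 dimensions and Kaluza-Klein split of the metric
S = \int d\tau\,\tfrac{\lambda}{2}\,\hat{g}_{\hat\mu\hat\nu}\dot{X}^{\hat\mu}\dot{X}^{\hat\nu}\,,\qquad
\hat{g}_{\hat\mu\hat\nu}dX^{\hat\mu}dX^{\hat\nu} = g_{\mu\nu}dX^\mu dX^\nu + \phi\,(dY + A_\mu dX^\mu)^2\,,
% so that P_Y = \lambda\phi(\dot{Y} + A_\mu\dot{X}^\mu) and the Routhian is
H_Y = P_Y\dot{Y} - L = \frac{P_Y^2}{2\lambda\phi} - P_Y A_\mu\dot{X}^\mu
      - \tfrac{\lambda}{2}g_{\mu\nu}\dot{X}^\mu\dot{X}^\nu\,.
% (1.3): setting P_Y = p and integrating out lambda gives the massive particle action
S = -\int d\tau\,\sqrt{\phi^{-1}p^2}\,\sqrt{-g_{\mu\nu}\dot{X}^\mu\dot{X}^\nu}
    + \int d\tau\, p\,A_\mu\dot{X}^\mu\,.
% (1.4)-(1.5): duality covariant generalisations, with conjugate momenta P_M
S = -\int d\tau\,\sqrt{\mathcal{M}^{MN}p_M p_N}\,\sqrt{-g_{\mu\nu}\dot{X}^\mu\dot{X}^\nu}
    + \int d\tau\, p_M A_\mu{}^M\dot{X}^\mu\,,
S = \int d\tau\,\tfrac{\lambda}{2}\Big(g_{\mu\nu}\dot{X}^\mu\dot{X}^\nu
    + \mathcal{M}_{MN}\big(\dot{Y}^M + A_\mu{}^M\dot{X}^\mu\big)\big(\dot{Y}^N + A_\nu{}^N\dot{X}^\nu\big)\Big)\,,\qquad
P_M = \lambda\,\mathcal{M}_{MN}\big(\dot{Y}^N + A_\mu{}^N\dot{X}^\mu\big)\,.
```

The last two expressions presumably correspond to the actions labelled (2.2) and (2.1) when they are repeated at the start of section 2.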
The simplest restriction is to impose the so-called "section condition", which forces one to choose a subset Y i of the Y M as the "physical" coordinates on which the fields of the theory can depend. In double field theory (DFT) [8][9][10][11][12][13] the group G is O(D, D). The coordinates Y M are in the fundamental of O(D, D), and correspond to a doubling of a subset of (or all of) the dimensions of the original spacetime theory. In exceptional field theory, the group G is E D,D (where E D,D , a split real form of the exceptional groups E D , is originally found as the U-duality group obtained on reducing 11-dimensional supergravity on a D-torus). This sequence of groups, and the R 1 representation of the coordinates Y M , is listed in table 1. The development of EFT originally focused just on the subsector containing these coordinates alone [14][15][16][17], truncating the field content and the dependence on the coordinates X µ , but the full reformulation of the bosonic sector of 11-dimensional supergravity has now been carried out for every group in table 1, from SL(2) × R + to E 8 , in [18][19][20][21][22][23][24]. The supersymmetric versions for the E 6 and E 7 theories have also been obtained [25,26]. We will begin our interpretation of the action (1.5) in terms of these theories in section 2, where we essentially only consider a higher-dimensional space which is an extended torus. In section 3 however we will really allow all the fields to depend on the new coordinates Y M . Doing so requires the introduction of an extra worldline vector transforming in the R 1 representation under global G (but subject to some restrictions, as we will see). This extra vector field appears to gauge the redundancy introduced by including extra coordinates, an idea that has been used in [27,28] in reducing a doubled string worldsheet model to the usual string theory (similar also to the gauging procedure of [29]). It can also be seen as due to the fact that the naive "line element" for the extended space does not transform covariantly under the local symmetries of DFT/EFT, as was realised for DFT in [30,31]. So this extra vector is a consequence of the fact that our local symmetries are generalised diffeomorphisms, and is fundamentally tied to the fact that this symmetry constrains the coordinate dependence of the theory through the section condition. Integrating out the extra coordinates and gauge fields will reduce us to particle actions in 11, 10 and n dimensions. JHEP10(2017)004 One could perhaps think of these dual directions as being somewhat similar to "special isometry" directions, such as occur in a Kaluza-Klein monopole background. The worldvolume action for such a brane involves an extra worldvolume vector field which gauges this isometry [32] and is used to eliminate what would otherwise be an extra degree of freedom corresponding to the special isometry coordinate. In section 4, we will point out an example where the gauge field actually survives in the reduction to 10 dimensions. This is the massive IIA supergravity of Romans [33]. This is a deformation of the 10-dimensional type IIA theory which does not have a conventional 11-dimensional description. However, it can be described within DFT and EFT in an interesting manner. In DFT one introduces a deformation by allowing the RR sector to depend linearly on a dual coordinate [34]. 
In EFT, the Romans deformation can be described as a deformation of the generalised diffeomorphism symmetry [35], which can be viewed as deriving from a generalised Scherk-Schwarz reduction of EFT [36][37][38][39][40] in which the twist matrices depend again on dual coordinates. The Romans supergravity can also be described in generalised geometry -which realises O(D, D) or E D,D symmetries on a generalised tangent bundle [41][42][43][44][45] -using similar deformations of the generalised Lie derivative [46]. Using the Scherk-Schwarz reduction procedure, our particle action gives rise to the action of a D0 brane in massive IIA, on which an extra vector field appears [47]. Our derivation of this fact will take a detour to highlight the fact that the EFT picture also includes the 11-dimensional non-covariant uplift of Romans supergravity described in [47]. Our work hopefully sheds some light on the possible description within exceptional field theory of some parts of the brane spectrum of string theory and M-theory. The search for "duality covariant" brane actions has a long history, including many papers especially relevant to the development of DFT and EFT [8,9,27,28,[48][49][50][51]. It has not been entirely clear how one might describe branes within EFT, where G transformations relate branes of different worldvolume dimension (some other difficulties are described in [52]). One attempt is [53]. The papers [54,55] study a superparticle model in which the section condition of EFT appears. In a sense, we are restricting ourselves to describing some aspects of the branes whose spatial worldvolumes completely wrap the internal space (and so appear as particles if we reduce to n dimensions). These are the set of states that appear as waves -i.e. massless particle excitations -in the extended space, as studied as solutions of DFT/EFT in [56][57][58] (see also [59][60][61] for the confirmation that these carry the appropriate notion of generalised momentum). The philosophy here is to think of DFT/EFT as a theory containing only massless objects, which appear as usual (massive) branes or particles on restricting to the physical spacetime. 1 JHEP10(2017)004 2 Duality covariant particle actions in n dimensions The actions We repeat the two actions we wrote down in the introduction: first, the higher-dimensional form which was equivalent to We may think of these as worldline actions for particles in an n-dimensional spacetime. Let us repeat our description of the fields appearing. On the worldline we have scalar fields X µ and Y M . The former can be viewed as standard n-dimensional spacetime coordinates, while the latter will lie, as we have said, in the representation R 1 of the group G, either given by table 1 or by G = O(D, D) with R 1 = 2D. We have an n-dimensional metric, g µν , and a symmetric matrix, M M N , which parametrises a coset G/H, and which we refer to as the generalised metric. The vector field, A µ M also transforms in the R 1 representation of G. For the moment, we only allow our fields to depend on the coordinates X µ . To check that this action indeed corresponds to the reduction of various brane states, we should specify n and G. First, let us check whether the above action corresponds to the reduction of the action for point particle states to n dimensions. 
We begin with the action for a massless particle in 10 or 11 dimensions, with metricĝμν and coordinates Xμ: We split Xμ = (X µ , Y i ) and Kaluza-Klein reduce supposing the metric is independent of Y i , using the decomposition We include a conformal factor Ω. This can be specified in order to make g µν either an Einstein frame metric (this is appropriate for reductions exhibiting the U-duality groups of table 1) or a string frame metric (appropriate for reductions exhibiting the T-duality group O(D, D)). In the latter case we have Ω = 1 ifĝμν is a 10-dimensional string frame metric. In the former case, ifĝμν is 10-or 11-dimensional Einstein frame metric then Ω = | det φ| −1/(n−2) , while ifĝμν is the 10-dimensional string frame metric we have Ω = | det φ| −1/(n−2) e 4Φ/(n−2) . We can eliminate the coordinates Y i in a fashion identical to the above. The momenta conjugate to Y i is JHEP10(2017)004 and the action can be written after setting P i = p i constant and dropping the total derivative termẎ i p i . Now, returning to (2.2), we find that it matches the reduction (2.6) if One can check that this agrees with the explicit form of the matrix components M ij in all cases. 2 Let us now check that the action (2.2) corresponds to reductions of wrapped branes, and in doing so begin to comment on the relationship to double field theory and exceptional field theory. in the fundamental. There is also a dilaton, which will not appear, completing the NSNS sector fields (we will not need the RR fields). In the double field theory [8][9][10][11][12][13] based on this O(D, D), all fields depend on the coordinates (X µ , Y M ) and transform under local O(D, D) generalised diffeomorphisms (note that this corresponds to the formulation in [62], which is most similar to the setup of exceptional field theory, with not all directions doubled). For consistency, one can impose the section condition, ∂ i ⊗∂ i = 0. The canonical solution∂ i = 0 identifies the coordinates Y i as physical so that (X µ , Y i ) are the genuine 10 dimensional coordinates, and the theory can be identified with (the NSNS sector of) 10-dimensional supergravity. We can construct a dictionary between the O(D, D) covariant multiplets and the original fieldsĝμν andBμν in 10 dimensions. We decompose the latter as (for the metric, this is the Ω = 1 case of (2.4)): Then the appropriate field multiplets for O(D, D) are: (2.10) JHEP10(2017)004 Particles: fundamental string. Start with the Nambu-Goto form of the string action where a, b = (τ, σ) are worldsheet indices, the induced worldsheet metric is γ ab = ∂ a Xμ∂ b Xνĝμν, andB 2 denotes the pullback of the B-field. We split the 10-dimensional coordinates Xμ into n + D coordinates (X µ , Y i ) and decompose the spacetime fields as above, assuming the fields only depend on X µ . On the worldsheet, we will carry out a generalised double dimensional reduction, setting The action is then We now calculate the momentum conjugate to Y i , finding (2.14) Routhian. It is We then use the Y i equation of motion in the action S = dτẎ i P i − H Y to set P i = p i to be constant. Then it is easy to see that the reduced action takes exactly the form (2.2), with the generalised metric and one-form defined in (2.10), and the momenta For a toroidal reduction, with torus radii R (i) , Then this momenta is where we have introduced the T-dual radiiR (i) = l 2 s /R (i) . We note that the momenta appearing look exactly like Kaluza-Klein momenta on a doubled torus with radii (R (i) ,R (i) ). 
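The explicit form of the generalised metric (2.10) is not reproduced above. As a small self-contained check of the statements just made, the sketch below assumes the standard O(D,D) parametrisation of the generalised metric in terms of g_ij and B_ij (sign conventions may differ from the paper's), verifies numerically that it is an O(D,D) element, and confirms that on a rectangular torus with B = 0 the combination M^{MN} p_M p_N reproduces the familiar momentum plus winding contributions to the mass squared, i.e. Kaluza-Klein momenta on a doubled torus with radii (R_(i), l_s²/R_(i)).

```python
import numpy as np

def generalised_metric(g, B):
    # standard O(D,D) generalised metric M_MN built from g_ij and B_ij
    # (an assumed parametrisation; overall sign conventions for B may differ)
    ginv = np.linalg.inv(g)
    return np.block([[g - B @ ginv @ B, B @ ginv],
                     [-ginv @ B,        ginv    ]])

D = 3
rng = np.random.default_rng(0)
A = rng.normal(size=(D, D))
g = A @ A.T + D * np.eye(D)                 # a random positive-definite metric
B = rng.normal(size=(D, D)); B = B - B.T    # a random antisymmetric B-field

M = generalised_metric(g, B)
eta = np.block([[np.zeros((D, D)), np.eye(D)],
                [np.eye(D),        np.zeros((D, D))]])
assert np.allclose(M @ eta @ M, eta)        # M is an element of O(D,D)

# Rectangular torus with B = 0, in units l_s = 1: g_ij = diag(R_i^2)
R = np.array([2.0, 3.0, 5.0])
M_torus = generalised_metric(np.diag(R**2), np.zeros((D, D)))
M_inv = np.linalg.inv(M_torus)              # = diag(1/R^2, R^2): radii R_i and dual radii 1/R_i

# integer momentum and winding charges p_M = (k_i, w^i)
k = np.array([1, 0, 2])
w = np.array([0, 1, 1])
p = np.concatenate([k, w])
mass_sq = p @ M_inv @ p
assert np.isclose(mass_sq, np.sum((k / R) ** 2) + np.sum((w * R) ** 2))
print("mass^2 =", mass_sq)   # momentum modes see the radius R_i, winding modes the dual radius 1/R_i
```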
We will discuss this higher-dimensional interpretation further in section 2.4. JHEP10(2017)004 We must however notice that the momentum (2.14) obeys w i P i = 0, or m i k i = 0, restricting us to have only either momenta or winding in each direction. This is a manifestation of the level-matching condition of the string. and additional form fields which do not enter the discussion at present. We can construct an exceptional field theory invariant under local SL(2)×R + involving the full set of 9 + 3 coordinates (X µ , Y M ), as detailed in [18]. The section condition for this theory is [63] ∂ α ⊗ ∂ s = 0. The solution ∂ s = 0 corresponds to IIB supergravity, so we call Y s the IIB coordinate, while ∂ α = 0 corresponds to 11-dimensional supergravity. In our conventions, reduction on Y 1 leads to IIA supergravity in 10 dimensions, so we call Y 1 the M-theory direction and Y 2 the IIA direction. (2.23) This is easily seen to be the double dimensional reduction of the Nambu-Goto action for a fundamental string. The choice of charge vector corresponds to momentum in the IIB direction, Y s , as expected. In this setup, the string is wrapped on the Y 2 direction with radius R 2 , with the T-dual IIB radius R s = l 2 s /R 2 . We need to identify which again exactly resembles a Kaluza-Klein momenta coming from the higherdimensional action (2.1), as we will further discuss in section 2.4. Note that the choice of sign of p s corresponds to the orientation of the wound string. IIA particles: D0 brane. Let us take p M = (p 1 , p 2 , 0). The action (2.2) is (2.25) This is the dimensional reduction of a D0 brane carrying momentum in the direction on which we have reduced. To see this, consider the D0 action and reduce using the above decomposition. We let Z ≡ X 9 be the direction on which we will reduce. The action is independent of Z so the momentum in the Z direction is conserved. This momentum is where g ≡ φ −1/7 e 4Φ/7 g µνẊ µẊ ν . After Legendre transforming, the action can be written as (2.28) We solve the Z equation of motion by letting P Z be constant. If the Z direction has radius R, then let P Z = p/R. Substituting back in and dropping theŻ term which is now a total derivative, we find the action (2.25) with the identifications Here, we see that standard identification of the D0 tension with Kaluza-Klein momentum on the M-theory circle is entirely consistent with a higher-dimensional interpretation of our particle action (2.1) as describing a particle moving in the extended spacetime. JHEP10(2017)004 IIA particles: the pp-wave. Finally, we take p M = (0, p 2 , 0) so that This is the action for a momentum mode (compare the discussion in section 2.1). It is written in terms of the lower-dimensional Einstein frame metric. Note the string frame metric in 9 dimensions would beḡ µν = φ −1/7 e 4Φ/7 g µν . It is trivial to identity p 2 = p/R with p ∈ Z. IIB decomposition. The bosonic fields of 10-dimensional type IIB supergravity are the Einstein frame metric,ĝ Ê µν , the B-field,Bμν, the dilaton ϕ, the RR 0-form C 0 , 2-form,Ĉμν and self-dual 4-formĈμνρσ. We split the coordinates as Xμ = (X µ , Y s ). We decompose the metric as in (2.4) with Ω = φ −1/7 . 
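Before the IIB decomposition is completed below, the IIA identifications just described (wrapped fundamental string, D0 brane and pp-wave) can be cross-checked against the textbook tension formulae. The numerical values of R_2, g_s and l_s below are illustrative, and the formulae m_F1 = R_2/l_s², m_D0 = 1/(g_s l_s), m_pp = 1/R_2 are the standard ones, assumed here rather than read off from the missing displays; each is a Kaluza-Klein momentum on an appropriate effective radius, as discussed further in section 2.4.

```python
import math

l_s = 1.0      # string length
g_s = 0.3      # IIA string coupling (illustrative value)
R2 = 2.5       # radius of the IIA circle Y^2 (illustrative value)

# standard IIA masses of the three 9d particle states discussed above
m_F1_wrapped = R2 / l_s**2          # F1 wound on Y^2: tension 1/(2 pi l_s^2) times circumference 2 pi R2
m_D0 = 1.0 / (g_s * l_s)            # D0 brane
m_pp_wave = 1.0 / R2                # momentum mode on Y^2

# each mass is a Kaluza-Klein momentum 1/R_eff on an "effective" radius
R_eff = {
    "Y^s (IIB direction, T-dual radius l_s^2/R2)": l_s**2 / R2,
    "Y^1 (M-theory direction, radius g_s l_s)":    g_s * l_s,
    "Y^2 (IIA direction, radius R2)":              R2,
}
assert math.isclose(m_F1_wrapped, 1.0 / R_eff["Y^s (IIB direction, T-dual radius l_s^2/R2)"])
assert math.isclose(m_D0,         1.0 / R_eff["Y^1 (M-theory direction, radius g_s l_s)"])
assert math.isclose(m_pp_wave,    1.0 / R_eff["Y^2 (IIA direction, radius R2)"])
print({name: 1.0 / radius for name, radius in R_eff.items()})
```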
In the convention that α = 1 is an RR field index and α = 2 is an NSNS field index (this is the opposite to what is stated explicitly in [18] but seems to correspond to the explicit parametrisations used there), the unit determinant part of the generalised metric can be written as (2.31) We have Finally, the one-form components are IIB particles: the pq string. Take p M = (q α , 0), then This matches the action for the dimensional reduction of a pq string, equation (2.15) of [6] (excluding the Scherk-Schwarz term). We discuss the quantisation of the charges below. IIB particles: pp wave. Take p M = (0, p), then which is a pp wave for the same reasons as above. Interpretation from double and exceptional field theory We have seen that the action (2.2) describes n-dimensional particles obtained by reducing particle, string and brane actions from 10 or 11 dimensions. The masses of these particles are encoded in terms of the constants p M , which we saw should be taken to be quantised in units of inverse radii -with the radii appearing being both the physical radii that we have JHEP10(2017)004 reduced on and also dual radii. In this section, we will encode these radii in the generalised metric M M N . Of course, this is all in accordance with standard duality relationships. We want to emphasise in this section how this emerges from the geometry of double and exceptional field theory, given the action (2.1), so we will take the time to spell things out quite explicitly. The action (2.1) involves what looks like the pull-back to the worldline of a "generalised line element" The Lagrange multiplier λ then suggests to think of this action as describing massless particle-like states in an extended geometry. Let us focus on the particular case where the directions Y M parametrise a torus. We where the dimensionful quantity l can be taken as either the string length or the 11-dimensional Planck length. We denote the radius of the Y M direction by R (M ) . As usual, momenta in these directions should be quantised as Such momentum states will have mass, as measured using the metric g µν , equal to δ M N P M P N . 3 Let us note one can really see these standard results by applying simple particle quantum mecahnics to the action (2.1). The Hamiltonian is (setting A µ M = 0 for simplicity In this set-up, picking a solution to the section condition means selecting which D of the Y M to consider as the physical coordinates. Momenta in dual directions gives rise to particles in n dimensions which we would interpret ordinarily as arising from branes wrapped on the physical torus. In the action (2.1), we describe all such states as particles on the extended torus. These particles are all massless in double or exceptional field theory, as is implied by the Lagrange multiplier λ in the action (2.1). This is consistent with the point of view of [56][57][58], which argued that the supergravity solutions corresponding to such totally wrapped branes appear as waves in the extended space. We emphasise that our appproach in this paper is to take the generalised line element (3.16) to be only relevant as a part of a worldline (or worldvolume) action like (2.1). We will not think of it as corresponding to a genuine line element on the extended space (though see the paper [64] which defines a metric on doubled space of DFT using an extra gauge field which can be integrated out using a path integral approach. We will meet this gauge field in the next section). 
Yet because it appears in the worldline action we can use it as proxy for inferring how point particle -or fully wrapped brane -states perceive the background of the doubled or exceptional geometry. Let us confirm the generalised momenta coming from the double field theory generalised line element are what we expect. On a doubled torus we have (writing only the part of (3.16) JHEP10(2017)004 corresponding solely to the Y M directions) so we see that this involves both the physical radii R (i) for the directions Y i and the dual radiiR (i) = l 2 s /R (i) for the directionsỸ i . The momenta appearing in the action (2.2) are then . This is exactly what we saw in section 2.2 (where to be fully consistent we should there write φ ij = δ ij while absorbing the radii into the definition of the coordinates Y M ∈ [0, 2πR (M ) ]). Now let us turn to exceptional field theory. There is a subtlety related to the fact that a conformal factor Ω appears in the dictionary relating the EFT fields to the decomposition of the 10 or 11 dimensional metric (2.4), with g µν = Ω −1ĝ µν +. . . . We mentioned already that the inverse generalised metric has components M ij = Ωφ ij ; similarly one will generically have that M ij = Ω −1 φ ij + . . . . This means that on an extended torus one has whereR (M ) are the radii that would be seen using the 11/10 dimensional metric. These differ from the radii R (M ) that seem to be encoded in the generalised metric, which are those seen by the metric g µν . In fact, one has in general that, picking some subset Y i as the physical coordinates, = Ω −1 ĝμνdXμdXν + . . . , (2.39) where the dots denote extra terms involving both the Y i and dual coordinates. We see here the appearance of the 10/11-dimensional metricĝμν. The masses measured using the metric g µν would be δ M N P M P N with P M = k M /R (M ) as before. We can define momentaP M = Ω −1/2 P M instead: the mass δ M NP MPN then corresponds to what would be measured usingĝμν. This can be viewed as a choice of redefinition of the Lagrange multiplier λ. The freedom to redefine λ is equivalent to rescaling both g µν and M M N by a conformal factor. On choosing a parametrisation of M M N corresponding to a particular 10/11 dimensional theory, one can choose this conformal factor so that whatever radii appear correspond to those seen by the 10 or 11 dimensional metricĝμν. In particular, we would define a new Lagrange multiplierλ = λΩ −1 . Note that as the generalised line element is meant to only carry meaning on the worldline action, the generalised momenta defined from the action are actually unchanged: (2.40) Settingλ = 1 in the action (2.1) corresponds to the standard results for the masses as seen in the usual 10/11 dimensional theory. This also leads to the momenta that we wrote down JHEP10(2017)004 in section (2.3). Ultimately, this is only really a matter of convention: we are choosing to express the masses not in terms of the n-dimensional metric g µν but in a more familiar way. We will now show how to use this to extract all the expected masses for particles in 9d from the SL(2) × R + EFT. The results will of course be consistent with the standard duality relationships between the branes of M-theory, IIA and IIB. In the below we drop the external metric, and write "ds 2 " = M M N dY M dY N only. On choosing a section, we explicitly extract the prefactor Ω −1 which will cancel against theλΩ in (2.40). For IIA, we write showing the prefactor Ω −1 = φ 1/7 e −4Φ/7 . 
The quantity inside the large brackets then provides what we call the "effective radii". We suppose that φ = (R 2 /l s ) 2 , and the dilaton is constant and equal to the IIA string coupling, e Φ = g A s . Then we have The momenta p M = k M /R (M ) gives exactly the tensions/masses for the fundamental string wrapped on Y 2 , the D0 brane and the pp-wave with momentum in the Y 2 direction. For IIB, we have Note that here φ = g E ss for the Einstein frame metric. We therefore have a few extra steps to obtain results for the momenta that correspond to the masses that would be measured in the IIB string frame (we do this simply because the string frame expressions are more familiar). Letting φ = (R E s /l s ) 2 and e ϕ = g B s , we have (2.45) We have the relationshipĝ Ê µν = e −ϕ/2ĝμν for the 10-dimensional string frameĝμν. Thus, In terms of string frame quantities, we therefore have 4 JHEP10(2017)004 The "effective radii" areR The momenta p M = k M /R (M ) gives exactly the tensions/masses for the pp-wave with momentum in the Y 2 direction, the D1 brane wrapped on Y s and the fundamental string wrapped on Y s . Generalised diffeomorphism covariant particle action in extended dimensions We have already seen how the background (g µν , A µ M , M M N ) and coordinates (X µ , Y M ) appearing in the action (2.1) can be interpreted in terms of the fields and coordinates of double or exceptional field theory. So far we just considered the dictionary relating these fields to the (toroidal) reductions of brane actions to n dimensions. In this section, we want to really interpret the action (2.1) in the full DFT/EFT framework. Local symmetries of double and exceptional field theory The generalised Lie derivative. The local symmetry transformations of these theories include "external diffeomorphisms", parametrised by vectors ξ µ (X, Y ), and "generalised diffeomorphisms", parametrised by generalised vectors, Putting DFT or EFT on a torus, global transformations of the group G become the standard duality group of n-dimensional supergravity. The definition of generalised diffeomorphisms δ Λ (equivalently, of the generalised Lie derivative L Λ ) acting on a generalised vector V M is [17,44]: Here λ V denotes the weight of the vector V , while we also have a sort of inherent weight ω. In DFT, ω = 0, while in EFT we have ω = − 1 n−2 . The tensor Y M N P Q is constructed using invariants of the group G, and its presence ensures that the generalised Lie derivative preserves these invariants. For this to happen, the form of the Y -tensor is restricted and can be worked out group by group [17]. For G = O(D, D), for instance, it is Y M N P Q = η M N η P Q (note that in general it does not factorise in this way), while for SL(2)×R + , where the index M = (α, s), the non-vanishing components are Y αs βs = δ α β and those related by symmetry (it is symmetric on upper and lower indices except for the case of E 7 ). The gauge parameters themselves are taken to have weight λ Λ = −ω. The closure of the algebra of such transformations, is not guaranteed. Consistency conditions must be imposed. The simplest such condition is the section condition: 3) JHEP10(2017)004 whose solutions reduce the coordinate dependence of DFT to at most 10 dimensions and that of EFT to at most 11 or 10 dimensions (there are distinct solutions giving maximal supergravity in 11 and type IIB in 10 dimensions [22,65]). The section condition effectively kills all dependence on the dual coordinates. 
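Since these definitions are used repeatedly in what follows, it may help to record the form they most commonly take in the DFT/EFT literature. This is a sketch only: the precise weight assignments and signs here are our assumptions and may differ from the conventions of [17, 44].
\[
L_\Lambda V^M = \Lambda^N \partial_N V^M - V^N \partial_N \Lambda^M + Y^{MN}{}_{PQ}\,\partial_N \Lambda^P\, V^Q + (\lambda_V + \omega)\,\partial_N \Lambda^N\, V^M\,,
\]
\[
Y^{MN}{}_{PQ}\,\partial_M \otimes \partial_N = 0\,.
\]
For G = O(D,D) the latter is the familiar condition η^{MN} ∂_M ⊗ ∂_N = 0, while for SL(2) × R^+, with M = (α, s), it reduces to ∂_α ⊗ ∂_s + ∂_s ⊗ ∂_α = 0.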
Alternatively, by requiring all fields factorise in a Scherk-Schwarz (twisted) ansatz, one can find weaker conditions in which some dependence on the dual coordinates gives rise to interesting gaugings of supergravity. The fields (g µν , A µ M , M M N ) that appear in our wordline action transform as follows under generalised diffeomorphisms. The external metric g µν is a scalar of weight −2ω. The generalised metric M M N is a tensor of zero weight. The vector field A µ M actually can be thought of as a gauge field for these transformations. Its transformation is given by We take A µ M to have weight −ω. The derivative D µ = ∂ µ −L Aµ is a covariantisation of the partial derivative ∂ µ with respect to generalised diffeomorphisms. It is used in writing the action and in defining external diffeomorphisms: these are given by the usual Lie derivative with respect to parameters ξ µ , but with ∂ µ replaced by D µ . The field strength for A µ M is defined as follows: in which a new two-form gauge field B µν appears. This field transforms in a representation of G which we denote by R 2 . (Recall that generalised vectors, and the gauge field A µ M transform in what we call R 1 .) The derivative∂ : R 2 → R 1 is a nilpotent operator [63,66], constructed using group invariants and the derivatives ∂ M , which maps from R 2 to R 1 . The representation R 2 is contained in the symmetric part of the tensor product R 1 ⊗ R 1 and generally we can take The gauge field B µν (M N ) has gauge transformations parametrised by one-forms λ µ (P Q) , One can go on to construct a field strength for B µν , which necessitates the introduction of a further form field C µνρ , and so on leading to a "tensor hierarchy" (note that not all the fields that appear in this hierarchy are actually needed in the action: the point at which this occurs depends on the duality group -in E 7 and E 6 the 3-form is not used). We will not need these intricate details. Local symmetries including twists. In order to be as general as possible in specifying a particle action invariant under generalised diffeomorphisms, let us also include deformations. This partially pre-empts some of section 4. There, we will describe how to write down a generalised Scherk-Schwarz ansatz of DFT or EFT. Such an ansatz involves a factorisation of the fields in terms of Y M -dependent twists, which appear in the transformation rules of the fields only in certain combinations. We call these combinations Θ M N P and JHEP10(2017)004 θ M : they must obey various consistency constraints, the first of which is that they must be constant. These then amount to deformations of generalised diffeomorphisms. (The spacetime interpretation is that they provide gaugings turning supergravity into gauged supergravity -Θ is the embedding tensor, and θ is a trombone gauging.) The precise definitions in terms of twist matrices of the Scherk-Schwarz ansatz are (4.8) and (4.11). For now, we will simply specify how they end up appearing in the symmetry transformations of our fields. First, define a combination of these which appears naturally by The deformed generalised Lie derivative acting on a vector V M of weight λ V is: The additional gauge transformation of A µ M given in (3.7) can also be twisted, leading to (3.10) The action The result. 
We now want to use the above information to think about how to write down a worldline action for a particle state coupled to the background (g µν , A µ M , M M N ), which respects the invariance under generalised diffeomorphisms described above. To do so, we need to follow [27,28,31,67] and introduce an auxiliary worldline vector field A M , transforming in the R 1 representation of global G (subject to the restrictions which we will come to below). The action we find is where under generalised diffeomorphisms (3.9) including twists we will require 12) and also that the Lagrange multiplier λ transform as a scalar with weight +2ω. (This follows from the fact that the quantity in bracket naturally transforms with weight −2ω, as is clear from the fact g µν itself does. This transformation of the Lagrange multiplier seems reminiscent of, and is perhaps ultimately inherited from, the transformation of the worldvolume metric of the M2 under duality transformations as mentioned in [49]. Note that for G = O(D, D), ω = 0.) The reasons. It is convenient to phrase the discussion in terms of the generalised line element: JHEP10(2017)004 Again, we do not propose to treat this as a true metric on some extended spacetime transforming under generalised diffeomorphisms. We shall see that -as pointed out for double field theory in [30] -this quantity does not transform correctly under generalised diffeomorphisms. To remedy this, the additional field A M was then introduced in [31]. A second motivation for introducing this gauge field is the observation [30] that the section condition leads to an identification of coordinates: the points Y M and (where λ (P Q) lives in the R 2 representation) may be viewed as equivalent 5 and then the gauge field A M is introduced for this redundancy. This is akin to the gauging of [27,28], where a shift symmetry in dual directions is gauged, which is what is captured by the above equivalence. Our interpretation in this paper will be to treat the gauge field A M as an auxiliary worldline (or worldvolume) variable, which appears when writing particle (or brane) actions for DFT or EFT backgrounds. So we view the above "line element" as only having meaning on the worldline of a particle (or other brane). We mention again that one can make use of the introduction of A M to define a metric on the doubled space as in [64]. The field A M is restricted to obey [31] A which is preserved by the gauge shifts δ λ A M = (∂λ) M . This means after solving the section condition, it only has components in the dual directions. As nothing depends on these directions, they are a sort of "special isometry" direction. Any brane in the extended space could be thought of as having such directions in addition to its usual worldvolume, transverse and special isometry directions in the physical section. Then the appearance of this vector is similar to the introducing auxiliary worldvolume vectors to gauge special isometry directions for brane action. Possible further restrictions on A M will be discussed below. The details. We now come to the details leading to the result (3.12) for the transformation of A. We will consider the gauged generalised line element and ask how A M must transform for this to behave covariantly under generalised diffeomorphisms. 
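Concretely, the object we have in mind is built from the same combination that appears in the action. As a sketch (this is simply the abbreviation used in the calculation below, written out in advance; A^M is the extra worldline one-form and A_μ^M the DFT/EFT gauge field):
\[
DY^M \equiv dY^M + A^M + A_\mu{}^M\, dX^\mu\,, \qquad
g_{\mu\nu}\, dX^\mu dX^\nu + M_{MN}\, DY^M DY^N\,,
\]
with all quantities understood to be pulled back to the worldline.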
For convenience, we will continue to write everything in terms of differentials dY M with the understanding that we really only want to consider such quantities within a worldline (or worldvolume) action, where we will replace them with worldline derivatives, dY M →Ẏ M . 5 There is also an equivalence of generalised diffeomorphism parameters Λ M and Λ M + Y M N P Q∂N λ (P Q) , due to the section condition, which is a manifestation of the reducibility of p-form gauge transformations. The motivation for the coordinate identification is to consider some function f (Y M + (∂λ) M ) = f (Y M ) + (∂λ) M ∂M f (Y ) + · · · = f (Y M ) after Taylor expanding and using the section condition. JHEP10(2017)004 Suppose we start with transformed background fields and coordinates: where we always work to first order in Λ. In addition, allowing for the possibility of an extra gauge transformation which we will specify below, and also Note that we define the transformation under generalised diffeomorphisms by which differs by the transport term Λ N ∂ N T (Y ) from the total transformationδ Λ = T ′ (Y ′ ) − T (Y ). We would like, ideally, to show that the transformed expression (3.17) equals the unprimed one (3.16). Expanding (3.17) gives (3.23) Here we abbreviated DY M ≡ dY M + A M + dX µ A µ M . Now, let us specify the generalised Lie derivative. We use the general form, including twists, given in (3.9). Then, using λ M = 0, λ Λ = −ω, we have we can kill off the last two lines. We will absorb much of the remaining terms into our definition of the transformation δ Λ A M . However, before we do so let us note that there is an issue with the weights. Setting the Y -tensor, twists, A µ and A to zero, we should recover ordinary differential geometry. However, in this case the unwanted terms (3.26) do not all vanish: an anomalous +ω∂ P Λ P dY M term will still appear. This reflects the fact that the following quantity: where α is any non-zero number, is not an invariant line element. The issue is that the generalised Lie derivative is defined such that M M N carries an intrinsic weight, while the external metric g µν has weight −2ω. This means that we have to relax our requirement that the quantity be invariant under generalised diffeomorphisms. Instead, it transforms as a density, provided we take the transformation which on the worldline is (3.12). We note that term here involving the Y -tensor is consistent with the transformation given in [67] (note they specify the transformationδ and have used the condition A M ∂ M = 0 which we have kept only in the back of our heads throughout the above calculation). We also note that this means A M should also be taken to have the special weight −ω under generalised diffeomorphisms. If all we are interested in is the action (2.1), then the lack of invariance can be compensated for using the Lagrange multiplier λ, leading to the action (3.11). Reduction to massless particles in 10/11 dimensions We now study reductions of the action (3.11) corresponding to standard solutions of the section condition Y M N P Q ∂ P ⊗ ∂ Q = 0 (this means that the extra twists τ M N P and θ M can be set to zero for the remainder of this section of the paper -they will reappear naturally in the Scherk-Schwarz reduction of section 4). JHEP10(2017)004 In obtaining the action (2.2) from (2.1), we assumed that the fields were independent of all the extended directions Y M . 
Now that we have figured out how to allow for field dependence on all these coordinates, subject to the section condition, we can ask what happens if we allow the fields to depend on a physical subset Y^i? Then the remaining coordinates, let us call them Y^A, are cyclic and can easily be integrated out. The condition A^M ∂_M = 0 implies that only the components A^A can be non-zero, with A^i = 0. To do so, we rewrite (3.11) in a form adapted to this split, consider the momenta conjugate to Y^A, and use the same Routhian procedure as before. Another result from DFT and EFT is that the component A_μ^i is identified with the vector appearing in the decomposition (2.4) of the 10- or 11-dimensional metric ĝ_μν. As a result, with λ̃ = Ω^{-1} λ, we arrive at the action (3.32). Naively, we might then integrate out λ̃ to find the action for a particle in 10 or 11 dimensions whose "mass" is set by the constants p_A, which arise as the constant values of the momenta and appear to correspond to non-zero momenta in a dual direction, which one might attempt to interpret as arising from a brane winding. However, we have not made any assumptions about compact directions here, and furthermore we must not forget about the gauge field A^A. Its equation of motion sets p_A = 0. Then in fact the action (3.32) becomes just that of a massless particle in 10 or 11 dimensions. The redefinition of the Lagrange multiplier is crucial here in order to match with the usual 10- or 11-dimensional metric. This redefinition of course corresponds exactly to the discussion in section 3.2. We could also have integrated out A^A, or the combination Ẏ^A + A^A, directly, getting the same result. This is the procedure adopted in [31] for a doubled string action and in [67] for particles (where they actually start explicitly with a massive particle in the doubled space; we prefer to begin with a massless particle in order to obtain the particle and wrapped brane states of string theory).

Reduction to massive particles in n dimensions

We would also like to reobtain the n-dimensional action (2.2) from the generalised diffeomorphism invariant action (3.11). Assume our background is independent of all the extended coordinates Y^M, so that we can integrate these out entirely. The condition A^M ∂_M = 0 then does not restrict the worldline vector A^M at all. After integrating out we will obtain a term ∫dτ p_M A^M, and the equation of motion of A^M then implies that p_M = 0, so that we can only obtain in this way a massless particle in n dimensions. We would prefer to be able to use the action (2.2) with arbitrary p_M. However, we see that the role of A^M in n dimensions is to kill generalised momenta in the directions in which A^M has non-zero components. It is possible that there are some extra ingredients that allow us to avoid being led to p_M = 0. Firstly, we should note that we have not considered a supersymmetric form of the action (3.11). Secondly, we could consider restricting A^M in different ways, by formulating constraints on A^M, which may either replace, imply or live alongside the condition A^M ∂_M = 0. This includes the possibility that in certain backgrounds it may be consistent to choose A^M = 0, i.e. not introduce the gauge field at all. We note that in general different choices of which components of A^M are non-zero should correspond to which set of wrapped branes would exist in 10/11 dimensions, and so additional restrictions on A^M may contain information about what branes are present. This may pertain also to topological or global information about the extended spacetime.
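The mechanism at work in the two reductions above can be summarised schematically. Once a direction is cyclic its conjugate momentum is a constant p_M, and the reduced worldline theory contains the coupling (a sketch, suppressing the external part of the action)
\[
S \supset \int d\tau\; p_M\, A^M\,,
\]
so that the equation of motion of A^M sets p_M = 0 in every direction in which A^M is allowed a non-zero component. With fields depending on the Y^i and A^M ∂_M = 0, only the dual momenta p_A are removed; with no coordinate dependence at all, every p_M is removed. The possibilities just listed are different ways one might hope to evade the second, more drastic, conclusion.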
Let us now discuss these possibilities. Supersymmetry. The actions that we are studying have been solely bosonic. It is possible that the supersymmetric versions of (3.11) will include couplings of A M to fermions, so that the equation of motion of the A M would be modified to p M = 0. Something similar happens in the case of the D0 brane in massive IIA, for which the bosonic action includes an extra vector field (which in section 4 we will see is actually a component of A M ) whose equation of motion appears to set the Romans mass to zero. Including fermions is consistent with non-zero Romans mass [47]. Restrictions on A M . Let us discuss possible restrictions on A M in more detail. In [64,67], the gauge field A M does not just obey A M ∂ M = 0, but also is required to be null with respect to η M N , the O(D, D) structure: η M N A M A N = 0. The motivation is that A M is the gauge field for what [30] called the "coordinate gauge symmetry" Y M ∼ Y M + ∆ M with ∆ M = φ 1 η M N ∂ N φ 2 , and the gauge field is supposed to have the same behaviour as the gauge generator ∆ M which evidently satisfies η M N ∆ M ∆ N = 0 by the section condition. Suppose we imposed this in the action (3.11) by a Lagrange multiplier, ϕ, including a term JHEP10(2017)004 Integrating out firstẎ M leads to (3.37) and then the equation of motion for A M leads to The Lagrange multiplier ϕ now restricts p M to be null with respect to η. Recall that in section 2.2, we found that the generalised momenta arising from the direct dimensional reduction of the Nambu-Goto string action obeyed the condition η M N p M p N = 0 that we impose here. The particle action (2.1) was that for a massless or null particle in the doubled or extended space. If the generalised momenta are restricted to also obey the section condition, which in DFT is that they are null with respect to the O(D, D) structure, we find that our actions are in a sense "doubly null". This is interesting. Does it generalise to EFT? There, we have and it is not generally true that Y M N P Q ∆ P ∆ Q = 0. We note that in the case of DFT, the number of dual directions equals the number of physical directions. It is therefore something of an accident that one can have A M be null with respect to η M N and find this is compatible with enforcing the momenta also be null with respect to η M N . In EFT, the condition Y M N P Q A P A Q = 0 would impose that there are the same number of non-zero components of A M as ∂ M : but this number will be less than the number of dual coordinates on picking the section ∂ i = 0, and so be more restrictive than (and generally incompatible with) A M ∂ M = 0. The condition Y M N P Q A P A Q = 0 can be viewed as a "purity condition" on the R 1 valued tensor A M (we will explain below the reason for the terminology). (In the language of the generalised Cartan calculus [63,66] it is that the product A • A ∈ R 2 vanishes.) One can develop a general notion of pure G tensors to describe branes in DFT/EFT [68][69][70]: given A M restricted as above one can formulate a differential condition defining a brane whose spatial components are wholly wrapped in the physical section. It is possible that requiring such a condition on this A M , or on some other pure object with which A M must be appropriately compatible, relates to this idea. An approach which is similar in spirit is to use linear constraints to implement the condition A M ∂ M = 0. 
This is based on [17], which shows how to reformulate the section condition (a quadratic condition) as a linear condition using an auxiliary "pure" tensor. This auxiliary object Λ transforms in some representation of G and obeys a purity condition Λ ⊗ Λ| P = 0, where | P denotes the restriction to a particular representation (or set of representations) P of G. The section condition can be imposed via Λ ⊗ ∂| N = 0, where N is again some particular representation of G. In DFT, the section condition can be formulated in this way using a pure spinor Λ In appendix A, we show how to implement similar linear constraints for the EFT groups G = SL(2) × R + and G = SL (5). We note that the section condition on momenta is closely related to the BPS condition, and this may account for why it appears in this way. A particle in n dimensions with arbitrary momenta p M could not be thought of as arising from the reduction of a single (BPS) brane in higher dimensions -rather, it could have momenta corresponding to e.g. M2 winding and M5 winding simultaneously. This is one physical interpretation of the condition that the generalised momenta obey the section condition. Again, everything we are doing is bosonic and it would be interesting to construct the supersymmetric version of the particle action (3.11) to learn more about these ideas. Setting A M = 0. Finally, let us consider what it means in general to be able to choose A M = 0 (which is of course one solution to the above constraints). We are interested in backgrounds in which we can take ∂ M = 0. We can think of this as the most simple and extreme solution to the section condition. If so, following the general philosophy of solving the section condition means we should be applying ∂ M = 0 not only to our fields but also to our gauge parameters. Evidently, this is very restrictive. If the parameters of generalised diffeomorphisms are indeed restricted to be independent of the coordinates Y M , then the action (2. So we could argue there is no need to introduce A M at all. Let us also offer a thought about how to formalise this. Consider the map∂ : R 2 → R 1 . If B ∈ R 2 , then (∂B) M ∂ M = 0 by the section condition. We required A M ∂ M = 0. We can define a map from R 1 to the trivial representation ∂ : Evidently the image of∂ is the kernel of the latter. One could perhaps require A M to be trivial in the sense that A M = (∂B) M for some B ∈ R 2 . Then, when the section is ∂ i = 0, ∂ A = 0, we only have components A A as before. However, in the section ∂ M = 0 in fact A M is zero. The action (3.11) is then identical to (2.1). More generally, one could also conceive of restricting solely to A M which are (equivalent to) zero in this "cohomology". This may have something to do with the global or topological structure of the extended space. The gauge field A M was originally introduced in DFT in order to gauge the equivalence between Y M and Y M + φ 1 η M N ∂ N φ 2 due to the section condition. For ∂ i = 0, we have an equivalence (Y i ,Ỹ i + φ 1 ∂ i φ 2 ) for arbitrary functions φ 1,2 of the physical coordinates Y i . Then one can identify all points (Y i ,Ỹ i ) and (Y i ,Ỹ i + c i ) for arbitrary constant c i as belonging to the same gauge orbit. JHEP10(2017)004 This identification of coordinates is a lot more severe than what you would want to have some notion of a genuine doubled torus (the most acceptable version of a genuinely doubled space), for which we would require only the periodic identification (Y i ,Ỹ i ) ∼ (Y i ,Ỹ i + 2πR (i) ). 
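It may help to see this generator explicitly in the DFT case. As a sketch, in the section where fields depend only on the Y^i and with φ_1, φ_2 arbitrary functions of the physical coordinates,
\[
\Delta^M = \phi_1\, \eta^{MN} \partial_N \phi_2 = \big(0,\; \phi_1\, \partial_i \phi_2\big)\,, \qquad
\eta_{MN}\, \Delta^M \Delta^N = 2\,\Delta^i\, \tilde\Delta_i = 0\,,
\]
so the generator shifts only the dual coordinates Ỹ_i, and it is automatically null with respect to η by the section condition. This is precisely the property that the constraint η_{MN} A^M A^N = 0 on the gauge field is designed to mirror.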
One might suppose that introducing A M = (0,Ã i ) is what one does when one needs to gauge away entirely the dual coordinates, as perhaps would be the case when the physical spacetime is non-compact. To describe a flat doubled torus, which is a simple background in which ∂ M = 0, one does not introduce this gauge identification. However, to understand fully what is going on presumably requires a better understanding of the global properties of DFT/EFT. To illustrate the above points, consider the case of SL(2) × R + . We are interested in "reducing" the 9+3 dimensional extended space with coordinates (X µ , Y α , Y s ) to 11 or 10 dimensions. (The below discussion is somewhat similar to the situation suggested presciently in [71]. ) We claim that the section choice ∂ α = 0 corresponds to a "reduction" on R 2 × {0}. The gauge field component A s is non-zero and is used to gauge away the apparent dual coordinate for the (non-existent) Y s direction, equivalently, its equation of motion coming from the action (3.11) enforces that there is no momentum in this direction. Conversely, in the section choice ∂ s = 0, our extended spacetime is {0} 2 × R. The gauge field components A α are non-zero, and play the same role for the dual coordinates Y α . On the other, the choice ∂ M = 0 in which we depend on none of our coordinates can be associated to an extended space T 2 × S 1 (with the area of the [M-theory] torus related to the radius of the [IIB] circle). We now have A M = 0. Our particle action now captures momentum states in all directions of the extended space. There is no standard geometrical description, meaning that there is no decompactification limit in which all three directions become non-compact. In the limit where the area of the torus goes to zero, the radius of the circle becomes infinite. The states with momentum in the circle direction can be regarded as the momentum modes of the non-compact IIB direction, while those with momentum in the torus directions become infinitely massive. The converse statements apply when the radius of the circle becomes zero, which leads to an 11-dimensional theory. Romans supergravity as EFT on a twisted torus and the D0 brane action In this final section, we will consider the effects of relaxing the section condition in order to allow some (controlled) dependence on the dual coordinates. After crossing this Rubicon, we will arrive at the Romans supergravity [33]. This is a 10-dimensional deformation of type IIA supergravity, with deformation parameter m known as the Romans mass. This appears directly in the action as a sort of cosmological constant term: and appears in the gauge transformations of the form fields. Under a gauge-transformation of the B-field, δB 2 = dλ 1 , we have also massive gauge transformations δĈ 1 = −mλ 1 , JHEP10(2017)004 δĈ 3 = −mB 2 ∧λ 1 . The gauge invariant field strengths appearing in the action are modified due to this, withF 2 = dĈ 1 + mB 2 andF 4 = dĈ 3 −Ĥ 3 ∧Â 1 + m 2B 2 ∧B 2 . The Romans supergravity is interesting within string theory, as it appears not to have a standard 11-dimensional origin. One may view it as the low energy limit of a massive IIA theory which applies in the presence of D8 branes. The Romans mass is essentially the dual of the 10-form field strength of the 9-form RR gauge field coupling to the D8. 
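For orientation, the structures just described can be collected in one place. The following is a sketch in string frame, up to overall normalisation conventions which we have not fixed against those used here, and writing Ĉ_1 for the RR one-form:
\[
S_{\rm Romans} \supset -\frac{1}{2\kappa^2}\int d^{10}x\, \sqrt{-\hat g}\;\tfrac{1}{2} m^2\,, \qquad
\hat F_2 = d\hat C_1 + m\,\hat B_2\,, \qquad
\hat F_4 = d\hat C_3 - \hat H_3\wedge \hat C_1 + \tfrac{m}{2}\,\hat B_2\wedge \hat B_2\,,
\]
with δĈ_1 = −m λ_1 and δĈ_3 = −m B̂_2 ∧ λ_1 accompanying δB̂_2 = dλ_1, so that F̂_2 and F̂_4 are gauge invariant. Equivalently, the Romans mass can be viewed as a constant RR zero-form field strength, F_0 = m.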
One can formulate a notion of "massive T-duality" [72] to relate Romans supergravity on a circle to type IIB supergravity, while also one can think of it as being related via duality to a particular compactification of M-theory on a twisted torus [73]. In DFT, one can obtain the massive IIA by deforming the Ramond-Ramond sector [34], introducing a linear dependence on a dual coordinate. In EFT or generalised geometry, this deformation can be viewed as a deformation of the generalised Lie derivative [35,46], which in turn can be obtained as a Scherk-Schwarz reduction of exceptional field theory on a twisted torus. The latter in particular suggests that EFT provides a higher-dimensional origin of the Romans supergravity. What is interesting is the role played by the dual coordinates in this framework. Romans supergravity as a Scherk-Schwarz reduction Scherk-Schwarz reductions of EFT. We will largely follow [35,40]. The procedure is to specify a Scherk-Schwarz or twisted ansatz for all fields of the theory. The Scherk-Schwarz twists depend on some of the coordinates Y M subject to various consistency constraints, and the fields that appear in the particle action factorise as follows: where we have written the ansatz for the vielbein of the external metric, g µν = e a µ e b ν η ab . We also assume that gauge parameters for generalised diffeomorphisms factorise similarly: This can be extended to the other gauge fields of the EFT, however we will not really need these. We denote the fields that will appear in the Scherk-Schwarz reduced theory with bars on both the fields and their indices. We are being as general as possible and allowing them to still depend on some of the extended coordinates. To do so, we have to require i.e. the twist is trivial in directions on which the barred fields depend. The generalised fluxes can be extracted from the transformation rules of the fields of the reduced theory. For instance, one has JHEP10(2017)004 where If U M and V M carry the specific weight λ, then with This is the embedding tensor. For this ansatz to make sense, various consistency conditions follow [35,40]. These replace, and are weaker than, the section condition. For instance, we have the quadratic constraints and constraints like In addition, the section condition should still hold on the derivatives ∂ M acting on the fields of the reduced theory. SL(2) × R + EFT on a twisted torus and Romans supergravity. The example we will consider is to take the SL(2) × R + EFT and reduce it on a twisted torus. Recall that the R 1 representation of this EFT was the reducible 2 1 ⊕ 1 −1 , and that the generalised metric M M N consisted of two blocks M αβ and M ss . The unit determinant part of the former was H αβ = (M ss ) 3/4 M αβ . We can generically write this in terms of a complex scalar τ = τ 1 + iτ 2 , which we will interpret as the complex structure of the torus (in the IIB section, this is the complex axio-dilaton, in the M-theory section on a torus, it would genuinely be the complex structure of a physical torus). We could therefore write the internal "line element" M M N dY M dY N as "ds 2 " = (M ss ) −3/4 1 τ 2 dY 1 + τ 1 dY 2 2 + τ 2 (dY 2 ) 2 + M 3/4 ss (dY s ) 2 . JHEP10(2017)004 The gauging which gives us the Romans supergravity is: We thus have (4.17) The effect of the gauging is to set The EFT background on which we are reducing can be seen to be a twisted torus by "freezing out" the fields of the reduced theory, i.e. settingM M N to the identity. 
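One way to picture the twist concretely, before freezing out the fields, is as a Y^s-dependent element acting on the doublet directions. This is only a sketch: we have not matched it against the precise definitions (4.8) and (4.11), so the index placement and normalisation are assumptions on our part. Schematically,
\[
U \sim \begin{pmatrix} 1 & m\,Y^s & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
acting on (Y^1, Y^2, Y^s), whose only non-vanishing derivative is the constant ∂_s U^1{}_2 = m, so that a single constant flux component proportional to m survives.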
Then we see that this gauging comes from "ds 2 " = dY 1 + mY s dY 2 2 + (dY 2 ) 2 + (dY s ) 2 , (4.18) which one would like to think of as a twisted torus (where owing to the restrictions on the generalised metric, there should be some relationship between the radius of the Y s direction, viewed as an S 1 base, and the area of the Y α directions, viewed as a T 2 fibre). This is the usual coordinate patching for a twisted torus. 6 When we carry out the Scherk-Schwarz reduction, we end up with a theory that no longer sees the Y s direction. Thus the twisted torus is only there from the point of view of the full EFT. Note that the appearance of the twisted torus here is analogous to its appearance in [73]. We stress that the gauging (4.16) depends on the IIB coordinate Y s . We will interpret the effective fields and gauge parameters of our reduced theory as depending on the coordinates Y α of the M-theory section. In fact, from U −1 M N ∂ NV = ∂ MV we see that fields and gauge parameters in the reduced theory should be taken to be independent of Y 1 . The above gauging induces a single non-vanishing component of the generalised fluxes: The constraints are satisfied, assuming the fields do not depend on Y 1 . The appendix contains the explicit details of the action and deformations of the SL(2)× R + EFT. Here, let us just explain a few points. The EFT action contains a "scalar potential" term which contains all terms involving just the generalised metric, external metric and their derivatives with respect to the extended coordinates. The full expression is (B.21). One can show that inserting the Scherk-Schwarz ansatz for the Romans theory leads to JHEP10(2017)004 (where the bars again mean that these are the fields of the effective Scherk-Schwarz reduced theory). Using the relationship between the EFT fields and those of IIA, it is easy to see that new term proportional to m 2 is actually whereĝ here denotes the 10-dimensional string frame metric. This is exactly the Romans mass term. Meanwhile, the EFT gauge fields are also deformed. This is described in the appendix, and is equivalent to making the replacementŝ Fμν →Fμν + mBμν ,Fμν ρσ →Fμν ρσ + 3mB [μνBρσ] . (4.23) These are exactly the modified field strengths of the Romans theory. Using these deformations together with the fact that we know the SL(2) × R + reduces to the action of IIA in 10 dimensions, we immediately see that this gauging indeed provides a reduction from the 12-dimensional SL(2) × R + EFT to the 10-dimensional massive deformation of IIA. One can also check for instance that the massive gauge transformations of the Romans theory are reproduced. 11-dimensional interpretation of the Romans twist In the above procedure we let all our fields be independent of the "M-theory direction" Y 1 and interpreted our theory in the IIA section. However, at least in principle we should be able to study the deformed theory directly in 11 dimensions, with the restriction that the Y 1 direction must be an isometry. The dictionary between the metricĝμν of 11-dimensional supergravity and the fields of the SL(2) × R + EFT is contained in [18]. We split the coordinates Xμ = (X µ , Y α ). The EFT generalised metric is given by Here φ αβ denotes the "internal" components of the 11-dimensional metric, φ αβ ≡ĝ αβ as usual. Now, the Scherk-Schwarz consistency conditions tell us that our fields must be independent of Y 1 . Let k = ∂ ∂Y 1 be the vector field associated to this isometry. The norm of this vector is k 2 = φ 11 . 
Then translating the Romans mass term appearing in (4.21) to M-theory variables, we find that it is: One can also check that the field strength of the three-formĈμν ρ is replaced according tô JHEP10(2017)004 This uplift is not the usual 11-dimensional supergravity, which is well known not to reduce to Romans supergravity. The crucial feature is the presence of the Killing vector k: it is a theory with a built in isometry. This isometry allows the construction of the "cosmological constant" term (4.25) which does reduce to the Romans mass term in 10 dimensions. We see here that the EFT description of Romans supergravity naturally includes its uplift to this variant of 11-dimensional supergravity. It was perhaps inevitable that this had to be true, as the 11-dimensional section was still available to us (with the restriction ∂ 1 = 0), and it would be surprising if there was some other 11-dimensional uplift of the Romans supergravity -however it is of interest to see that this works explicitly. Massive IIA particles We start with the action (3.11) and specialise to the SL(2)×R + EFT, imposing the Scherk-Schwarz ansatz with the gauging (4.16) that leads to massive IIA. The action can be written (omitting bars from the indices) (4.27) Hereḡ µν ,M M N andĀ µ M only depend on the coordinate Y 2 . We have defineḋ We have kept all the components of the gauge fields here, however the conditionĀ M ∂ M = 0 (acting on barred quantities) implies that in factĀ 2 = 0. The transformation rule ofĀ M follows now from the analysis of section 3.2, where we included the twists in the generalised Lie derivative. Alternatively, we may note thatẎ M + A M transforms covariantly under generalised diffeomorphisms, and so the usual twisting process applied to it leads to the correct expression (3.12) for the transformation ofĀ M . The action (4.27) depends only onẎ s and not Y s , and so we can easily proceed to integrate out this coordinate as before. We can either use our previous results, or just do the calculation which is especially simple for SL(2) × R + . We find after Legendre transforming that We now note that P s is constant by the Y s equation of motion, and zero by the A s equation of motion. The action simplifies to JHEP10(2017)004 which describes a massless particle. What is this particle? We can actually interpret it in eleven dimensions. We can use the identification (4.24) relating the generalised metric of the SL(2) × R + EFT to the metric components of 11-dimensional supergravity, together with the identification of the one-form doublet A µ α with the Kaluza-Klein vector of the M-theory metric as in (2.4). The caveat is that as we have carried out a Scherk-Schwarz twisting, we are not really dealing with 11-dimensional supergravity but the deformed version which reduces to massive IIA. Still, the dictionary works. Definingλ = λ|γ| 1/7 , we find the action S = dτλĝμνD τ XμD τ Xν (4.32) whereĝμν is the 11-dimensional metric, the coordinates are Xμ = (X µ , Y 1 , Y 2 ) and This is the action for the "massive M0-brane" i.e. a massless momentum mode in the 11-dimensional deformation of supergravity which reduces to the Romans supergravity, described in [47], where we are using adapted coordinates such that the Killing vector k is just ∂/∂Y 1 . 
To confirm this, consider the part of the transformation of the worldline vectorĀ 1 involving the twists, which can be read off from (3.12): Now, in our 11-dimensional theory we have also a three-formĈμνρ transforming under gauge transformations with two-form parameterχμν. The one-and zero-form components of these appear in the SL(2) × R + EFT as A µ s =Ĉ µ12 and Λ s =χ 12 . Define the vector λμ = −kνχμν. Thenλ 2 =χ 12 . We thus have under massive gauge transformations. This is the transformation of the worldline vector of [47]. The dimensional reduction of the massive M0-brane then leads to the action for a massive D0 in massive IIA: after defining mV τ ≡Ā 1 following [47]. We see that the vectorĀ 1 becomes an additional worldline vector. The string theory interpretation is that this arises from the endpoints of strings stretching from the D0 to the background D8 brane. (The equation of motion of V τ appears to set m = 0, but this is only because this is just the bosonic part of the action.) We have therefore established that our action (3.11) for point particle states in the extended spacetime of EFT leads to the correct action for a D0 brane in massive IIA, on making use of the Scherk-Schwarz ansatz. Crucially, this would not have been possible without the extra worldline vector field A M , whose appearance was originally due to the generalised diffeomorphism symmetry of EFT. After deforming these symmetries to obtain the massive gauge transformations of Romans supergravity, a component of the gauge field remains in the setting of the latter theory. A brief recap We investigated a higher-dimensional oxidation of a particle action (2.2), which described a multiplet of particle states in n dimensions transforming under a duality group G. This uplift led to the actions (2.1) and (3.11), in which one naturally saw structures from double and exceptional field theory appearing. In particular, the action (2.1) could be interpreted as a masslessness, or null, condition on a particle state in an extended spacetime. The action (3.11) showed that in order to have invariance under the local generalised diffeomorphism symmetries of DFT/EFT, one had to introduce an auxiliary vector field on the worldline, as argued in [31] for a doubled string action: effectively, this auxiliary vector field is used to gauge away the dual directions [27,28]. Our line of thinking offers a perspective on how to describe a subset of wrapped brane states in DFT/EFT. It was interesting to see in section 4 that the extra worldline field, which ordinarily would not be present in a particle or brane action, could be shown to become the extra worldline vector field that appears on a D0 brane in massive IIA [47]. This made use of EFT as a higherdimensional origin for massive IIA, by Scherk-Schwarz reducing the SL(2) × R + EFT on a twisted torus to obtain the necessary deformations to describe massive IIA as in [35,46]. What about branes? We had two types of particle actions. The action (2.2) corresponded directly to a massive particle in n dimensions, with mass encoded in charges p M . The other, the action (2.1), used extended coordinates Y M to encode the charges, and could be interpreted as the action for massless particle states in the extended spacetime of double field theory or exceptional field theory. It would be interesting to extend these approaches to strings and branes. Indeed, the gauge vector A M was introduced in [31] in order to construct an action for a string in the doubled geometry of DFT. 
The generalisation to EFT should be considered. In fact, the analogue of the action (2.2) in n dimensions can be worked out fairly easily for the case of the SL(2) × R + EFT. This can be done simply by reducing brane actions to 9 dimensions and using the EFT dictionary to rewrite these in terms of natural SL(2) × R + covariant quantities. (A useful guide for what sort of action to expect is [7].) For instance, there is an SL(2) doublet of strings. Let us think about this in terms of (somewhat unnaturally, maybe) IIA quantities. This doublet combines the direct dimensional reduction of the D2 brane and the transverse dimensional reduction of the F1. We can find an action for this doublet by carrying out these reductions (we also integrate out the worldvolume gauge field of the D2, and dualise the worldvolume scalar on the F1 that corresponds to the coordinate Y 2 on which we reduce). Here, we simply state the result (a, b are worldsheet indices): JHEP10(2017)004 where F a s = ∂ a Y s + A a s , with Y s an auxiliary worldsheet scalar which corresponds to the singlet coordinate of the SL(2)×R + EFT, and the one-and two-form fields that appear are the pullbacks of the fields of the SL(2) × R + EFT to the worldsheet. The D2 corresponds to p 1s = 0 and the F1 to p 2s = 0. The tensions are encoded in these charges as before. Similarly, one check that the transverse reduction of the M2 action (equivalently, the D2) to 9 dimensions gives (a, b, c are worldvolume indices): with the Y α appearing as auxiliary worldvolume scalars which can be viewed as the doublet coordinates of the SL(2) × R + EFT, and the other fields are those of the EFT. No dualisations were carried out. The challenge now would be to lift these to actions describing strings and 2-branes in the 9+3 dimensional extended space of the SL(2) × R + EFT. Inspired by [1][2][3][4][5], and using the massive to massless particle analogy, the approach may perhaps involve searching for some notion of a tensionless brane in DFT or EFT. Other directions We saw that one could determine the masses and tensions of wrapped brane states from a simple Kaluza-Klein analysis of the "generalised line element" of DFT or EFT, remembering that we should only really interpret this as such as part of the worldline theory of a particle state. We only considered simple toroidal reductions here. Then, in section 4, we analysed a twisted torus reduction of EFT leading to Romans supergravity. We are currently investigating what this means in terms of the spectrum of massive IIA [74]. To move further away from tori, one might want to consider for instance the description of EFT on more complicated backgrounds (such as K3 as in [75]) to see whether our approach captures the description of branes totally wrapping some internal manifold leading to a duality group other than the G associated to toroidal reduction. With a more complete understanding of not just particles but brane actions one could go on to study physics in non-geometric backgrounds which may be more naturally described using the DFT/EFT formalisms, for instance exotic branes [76] and their electric duals [77]. There was a slightly puzzle about how to treat the gauge field A M on reducing the generalised diffeomorphism invariant particle action (3.11) to the n-dimensional particle action (2.2). 
One could argue that choosing ∂ M = 0 as a solution of the section condition of DFT/EFT meant that one need not introduce A M at all: in this case the n-dimensional particle could have arbitrary generalised momenta p M . Alternatively, by imposing certain linear constraints on A M , we found that the generalised momenta p M had to obey the section condition itself. This restriction on the allowed momenta may be interpreted as a statement about the origin of the n-dimensional particle from a single brane in higher dimensions, and be essentially a BPS condition. We saw that this condition also arose coming from the worldsheet of the fundamental string, where it is also related to level-matching. JHEP10(2017)004 It would be interesting to further explore these relations, in particular to understand what an uplifted brane configuration whose reduction leads to generalised momenta violating the section condition would look like from the point of view of DFT/EFT. We should also explore the relationship to the work of [68][69][70] where linear constraints are used to identify branes in DFT/EFT and construct actions for such objects. This may further clarify the properties and role of A M . Evidently it would be beneficial to have not just bosonic actions, as presented here, but fully supersymmetric versions. Doubled string actions can be supersymmetrised [28,[78][79][80][81], and one could explore such an extension for the particle action (3.11). This may help clarify the general restrictions on the extra gauge field A M and how they relate to restrictions (especially the imposition of the section condition) on the generalised momenta p M . Here there could be a link with the superparticle models of [54,55]. JHEP10(2017)004 Then we impose the condition Λ ⊗ ∂| adj = 0. The projection into the adjoint here means that we require Λ α ∂ β − 1 2 δ α β Λ γ ∂ γ = 0. For Λ α = 0 this implies ∂ α = 0. We then also require Λ ⊗ A| R 2 = 0, which is Λ α A s + Λ s A α = Λ α A s = 0. This means that A s = 0. This is in accord with the IIB section, ∂ s = 0. The Lagrange multiplier terms in the action then give S ⊃ dτ (ϕ αs Λ α A s + p α A α + p s A s ) . (A.2) This gives p α = 0 and p s = −ϕ αs Λ α = 0. Let's take another example, this time G = SL(5). The coordinate representation R 1 is the 10. We let a be a five-dimensional index in the fundamental representation, so that we can write A M = A ab with ab antisymmetric. The other representations relevant to us are R 2 =5, R 3 = 5 and R 4 =10. The linear constraint for the section condition solution corresponding to 11-dimensional supergravity is [17] Λ [a ∂ bc] = 0. Here Λ a ∈R 3 =5. No purity condition is required. We also impose Λ b A ab = 0. Taking only Λ 5 = 0, for example, gives ∂ i5 = 0, ∂ ij = 0 for i, j = 1, 2, 3, 4, and also corresponds to A ij = 0, A i5 = 0, as we expect. In the reduction of the action (3.11), we find which implies p ab = −2ϕ [a Λ b] , so that Λ [a p bc] = 0 and hence ǫ abcde p ab p cd = 0, which is the section condition for this EFT [16]. Meanwhile, the linear constraint relevant to the IIB section solution is [68] Λ ab ∂ bc = 0 for Λ ab ∈ R 1 obeying Λ [ab Λ cd] = 0. We also require Λ [ab A cd] = 0. A representative solution is Λ 45 = 0, which means only ∂ 12 , ∂ 13 , ∂ 23 are non-zero, which is the IIB section solution [65]. This also implies that A 12 , A 13 and A 23 are zero and the rest non-zero, as necessary. 
In the action we find This gives p ab = − 1 2 ϕ e ǫ eabcd Λ cd , which implies Λ ab p bc = 0 and hence ǫ abcde p ab p cd = 0. We therefore see that one can formulate certain constraints on the gauge field A M , which correspond to imposing the section condition on the generalised momenta which appear as charges in the n-dimensional action (2.2). In this case, with ∂ M = 0, one has access to the duality symmetry G which allows one to transform any particular generalised momenta into any other in its orbit. B Further details of the Scherk-Schwarz reduction of the SL(2) × R + EFT We record in this appendix some general expressions for the Scherk-Schwarz reduction of the SL(2)×R + EFT, which were worked out in a prior incarnation of this paper, and which may prove to have some use. JHEP10(2017)004 The non-trivial twistings of the field strengths in the tensor hierarchy are, using (B.14) to (B.17), Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
v3-fos-license
2016-05-04T20:20:58.661Z
2015-01-01T00:00:00.000
8770325
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=2748&path[]=6136", "pdf_hash": "1b360dedba8bc1afc1697a7be94a32663a049711", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:192", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "1b360dedba8bc1afc1697a7be94a32663a049711", "year": 2015 }
pes2o/s2orc
miR-874 functions as a tumor suppressor by inhibiting angiogenesis through STAT3/VEGF-A pathway in gastric cancer.

MicroRNAs are endogenously expressed, small non-coding RNAs that regulate gene expression by targeting mRNAs for translational repression or degradation. Our previous studies indicated that miR-874 played a suppressive role in gastric cancer (GC) development and progression. However, the role of miR-874 in tumor angiogenesis and the mechanisms underlying its function in GC remained to be clarified. Here, gain- and loss-of-function assays demonstrated that miR-874 inhibited the tumor angiogenesis of GC cells in vitro and in vivo. Through reporter gene and western blot assays, STAT3 was shown to be a direct target of miR-874. Overexpression of STAT3 rescued the loss of tumor angiogenesis caused by miR-874. Conversely, the STAT3-shRNA attenuated the increased tumor angiogenesis caused by the miR-874-inhibitor. Furthermore, the levels of miR-874 were inversely correlated with those of STAT3 protein in GC tissues. Taken together, these findings indicate that down-regulation of miR-874 contributes to tumor angiogenesis through STAT3 in GC, highlighting the potential of miR-874 as a target for human GC therapy.

INTRODUCTION

Gastric cancer (GC) is the fourth most prevalent type of malignancy worldwide, and it is the second most frequent cause of death from cancer [1]. Despite significant achievements in the treatment of early GC, the long-term survival rate for advanced GC remains quite low [2]. The five-year survival rate for advanced or metastatic gastric cancer is only 5-20%, with a median overall survival of less than 1 year [3,4]. Consequently, the molecular mechanisms that regulate GC development and progression need further exploration. Angiogenesis is vital for tumorigenesis and development, as tumors cannot grow larger than 2 mm in diameter without angiogenesis. Recent studies have shown that several miRNAs are involved in the regulation of vascular development [5,6,7]. miRNA-378 promotes tumor angiogenesis by targeting Sufu and Fus-1 [8]. miRNA-132 acts as an angiogenic switch by suppressing endothelial p120RasGAP expression, leading to Ras activation and the induction of neovascularization [9]. Conversely, miR-15b, miR-16, miR-20a and miR-20b are potential anti-angiomiRs that may function through targeting VEGF [10]. In a previous study, we demonstrated that miR-874 suppressed the growth, migration and invasion of gastric cancer cells [11]. We also observed that xenografted tumors from pre-miR-874-transfected cells were smaller and had a lower microvessel density (MVD) than tumors from miR-874-inhibitor-transfected cells. However, the exact mechanisms underlying the regulation of angiogenesis in GC by miR-874 remain unknown. Using bioinformatics, we identified STAT3 - a key transcription factor [12] that plays a vital role in human gastric cancer angiogenesis [13] - as a potential direct target of miR-874. Therefore, we investigated the role of miR-874 in GC angiogenesis and its relationship with the STAT3 pathway. In this study, we found that miR-874 strongly repressed GC angiogenesis by targeting the 3' untranslated region (3'-UTR) of the STAT3 mRNA, leading to inhibition of the STAT3 pathway and down-regulation of the angiogenic factor VEGF-A. Our results suggest that down-regulation of miR-874 may be important for the development and progression of GC, highlighting the potential for miR-874 as a therapeutic target.
miR-874 is down-regulated in human gastric cancer tissues and cells

To determine whether miR-874 is mis-regulated in GC tissues, miR-874 expression in GC tissues and adjacent normal tissues was analyzed using miRNA RT-PCR. As shown in Fig. 1A, the expression levels of miR-874 in human GC tissues were much lower than in the adjacent normal tissues. Further experiments showed that several GC cell lines, including AGS, BGC823, MKN28 and SGC7901, had undetectable or low levels of miR-874. By contrast, normal gastric mucosa epithelial cells (GES-1) had high levels of miR-874 (Fig. 1B). These results indicate that miR-874 is down-regulated in both GC tissues and cancer cell lines. To investigate the impact of miR-874 on tumor angiogenesis, we constructed both miR-874 overexpression and knockdown GC cell lines (Fig. 1C). As shown, miR-874 was knocked down by about 80% in SGC7901 cells and increased about 140-fold in AGS cells.

Figure 1 legend. (A) GC tissues and adjacent normal tissues were used to extract total RNA. The levels of miR-874 were analyzed by miRNA RT-PCR. U6 was used as an internal control. (B) The expression levels of miR-874 were analyzed by miRNA RT-PCR in a normal gastric mucosa epithelial cell line (GES-1) and the GC cell lines AGS, BGC823, MKN28 and SGC-7901. (C) SGC-7901 or AGS cells were transfected with a specific miR-874 inhibitor, pre-miR-874 or empty lentiviral construct vectors. miRNA RT-PCR was used to analyze the expression levels of miR-874. (D) The OD values of the HUVECs (determined by CCK8 assay) were used to calculate the number of cells for each group (SGC-7901-NC, SGC-7901-miR-874-inhibitor, AGS-NC and AGS-pre-miR-874). (E) Tube-formation assays with HUVECs were performed with conditioned medium from the different groups. The network area was calculated using Image Pro Plus 6. (F-G) Transwell migration and Matrigel invasion assays with HUVECs were performed for each group. Asterisks indicate a significant difference compared with controls at P < 0.05.

miR-874 regulates the processes of tumor angiogenesis in vitro

To confirm that miR-874 is a potential angiogenesis suppressor in GC, we investigated the influence of miR-874 on the tube formation, proliferation, migration and invasion of HUVECs in vitro. Tube-formation assays with HUVECs were performed with conditioned medium from the different groups (miR-874 inhibitor, pre-miR-874 or empty vector). Compared with control cells, silencing of miR-874 increased the tube-forming capacity of HUVECs, whereas ectopic expression of miR-874 dramatically reduced it (Fig. 1D). Next, we used cell migration and Matrigel invasion assays to investigate the effects of miR-874 on HUVEC migration and invasion. Our data revealed that HUVEC migration was enhanced by miR-874 knockdown in SGC-7901 cells, whereas migration was suppressed by miR-874 overexpression in AGS cells (Fig. 1E). Additionally, compared with control cells, silencing of miR-874 in SGC-7901 cells dramatically boosted HUVEC invasiveness, whereas miR-874 up-regulation inhibited HUVEC invasiveness, as assessed using a Matrigel invasion assay (Fig. 1F). We also used the CCK8 assay to assess the effects of miR-874 on HUVEC proliferation. The proliferation of HUVECs in the miR-874 inhibitor-treated group was significantly increased compared with the control group in SGC-7901 cells. By contrast, the conditioned medium of AGS cells transfected with the miR-874 precursor caused a significant decrease in HUVEC proliferation compared with the negative control group (Fig. 1G).
VEGF-A expression is inhibited by miR-874

SGC-7901-NC, SGC-7901-miR-874-inhibitor, AGS-NC and AGS-pre-miR-874 cells were used to test the levels of VEGF-A, which is the most important angiogenic factor influencing vasculature and angiogenesis. As shown in Fig. 2A, compared with SGC7901-NC cells, VEGF-A mRNA levels in the SGC-7901-miR-874-inhibitor cells were increased by approximately 33%. Compared with the AGS-NC cells, VEGF-A mRNA levels in the AGS-pre-miR-874 cells were decreased by approximately 30%. Similarly, the protein levels of VEGF-A were increased in SGC-7901-miR-874-inhibitor cells compared with SGC7901-NC cells, and they were decreased in AGS-pre-miR-874 cells compared with AGS-NC cells (Fig. 2B). We also used ELISA to detect secreted VEGF-A protein in the supernatants of the above cell lines. As expected, knockdown of miR-874 increased secreted VEGF-A protein expression, and overexpression of miR-874 decreased secreted VEGF-A protein expression (Fig. 2C).

The 3'-UTR of STAT3 is a target of miR-874

TargetScan (http://www.targetscan.org/), PicTar (http://pictar.bio.nyu.edu), miRanda (http://www.sanger.ac.uk), and miRBase (http://www.mirbase.org) were used to predict genes that miR-874 might target. We identified a putative miR-874 binding site within the 3'-UTR of STAT3 (Fig. 2D), and a luciferase reporter assay was used to validate whether STAT3 is a direct target of miR-874. Wild-type (WT) and mutant (MUT) versions of the STAT3 3'-UTR - the latter containing site-directed mutations in the putative miR-874 target sites - were cloned into reporter plasmids. Forced expression of miR-874 markedly suppressed luciferase activity from the wild-type reporter (50%) but not from the mutant reporter, suggesting that the 3'-UTR of STAT3 is targeted by miR-874 and that the point mutations in this sequence abolished this effect (Fig. 2E).

miR-874 suppresses STAT3 protein expression through translational repression

miR-874 silencing in SGC7901 cells, which lack endogenous STAT3 expression, resulted in the up-regulation of STAT3 protein by approximately 3-fold compared with the negative control. Conversely, the protein levels of STAT3 were significantly reduced, by about 67%, in AGS cells, which exhibit basally high expression of STAT3, after transfection with pre-miR-874 (Figs. 2G and 2H). In addition, the activated form of STAT3 (p-STAT3, Tyr705) was significantly increased in miR-874 knockdown cells (SGC-7901) and decreased in miR-874-overexpressing cells (AGS) (Figs. 2G and 2I). In contrast, no significant changes were observed for STAT3 mRNA levels (Fig. 2F). These results indicate that miR-874 suppresses STAT3 protein expression through translational repression.

miR-874 inhibits tumor growth and angiogenesis in vivo and is negatively correlated with STAT3 and VEGF-A expression

To determine the effects of miR-874 on tumorigenicity in vivo, transfected cells were injected into the flanks of nude mice to form ectopic tumors. After 21 days, we observed slower tumor growth in the miR-874-NC group than in the miR-874-inhibitor group (SGC-7901 cells). A similar phenomenon was observed in AGS cells, where tumors in the pre-miR-874 group grew more slowly than those in the miR-874-NC group (Figs. 3A, 3B and Supplementary Fig. 1). The average weight of tumors from the miR-874-inhibitor group was significantly greater than that of tumors from the control group in SGC-7901 cells.
In AGS cells, we could find that the tumors from pre-miR-874 group were heavier than that from the control group, which was consistent with the results in SGC-7901 cells (Fig. 3C). Moreover, the immunohistochemical assays showed that the number of www.impactjournals.com/oncotarget 3'-UTR regions containing the wild-type or mutant binding site and the sequence complementarity between miR-874 and the STAT3 3'-UTR (WT and MUT) are shown. (E) Relative STAT3 luciferase activity was analyzed after co-transfection with the wild-type or mutant 3'-UTR reporter plasmids. The histogram represents the mean normalized 3'-UTR luciferase intensity from three independent experiments (mean ± s.d.). (F) qRT-PCR was used to analyze the mRNA levels of STAT3 in SGC7901 and AGS cells transfected with miR-NC, miR-874 inhibitor or pre-miR-874. (G-I) Western blotting assays were used to analyze the expression levels of STAT3 and p-STAT3 in SGC-7901 and AGS cells transfected with miR-NC, miR-874 inhibitor or pre-miR-874. Relative densities were quantified using Image-Pro Plus software. The results are presented as mean ± s.d. values from three duplicate experiments. Asterisks indicates significant differences compared with controls at P < 0.05. CD31-positive microvessels was dramatically increased about 3 folds by the SGC-7901-miR-874 inhibitor, whereas AGS-pre-miR-874 decreased the number of CD31-positive microvessels to 65% (Figs. 3D). Similar trends were observed with respect to the expression of STAT3 and VEGF-A in the tumors. (Figs. 3E and 3F). Furthermore, western blotting analysis of the implanted mouse tumors verified that STAT3 and VEGF-A protein expression were significantly enhanced in the SGC-7901-miR-874 inhibitortransfected group compared with the controls, whereas their expression were decreased in the AGS-pre-miR-874transfected group (Fig. 3G). Taken together, we found that miR-874 could inhibit tumor growth and angiogenesis in vivo, and that the negative correlation between miR-874 expression and STAT3 or VEGF-A levels. miR-874 inhibits tumor angiogenesis by targeting STAT3 We demonstrated that ectopic expression of miR-874 in GC cells suppressed the tube formation, proliferation, migration and invasion of HUVECs and inhibited VEGF-A and STAT3 protein expression. By contrast, miR-874 knockdown promoted these behaviors and enhanced VEGF-A and STAT3 protein expression (Figs. 1, 2, 3). To further demonstrate that miR-874 in GC cells affects the angiogenesis of HUVECs through the regulation of STAT3, we up-regulated and down-regulated STAT3 expression. The results showed that that enhanced expression of STAT3 in GC cells (SGC-7901) promoted the proliferation, migration and invasion of HUVECs (Figs. 4A-D; e, LV- (B-C) The graph is representative of tumor growth 21 days after inoculation. Tumor volume and the weight were calculated, and all date are shown as mean ± s.d. (D-F) The expression of CD31, VEGF-A and STAT3 were analyzed in tumor tissues by immunohistochemistry; representative images are shown at magnification × 20 (CD31) or × 40 (STAT3, VEGF-A). Blood vessels were stained using anti-CD31 antibodies, and positively stained blood vessels were counted in five areas per slide to determine the maximum number of microvessels; 10 slides per experiment. The results are presented as mean ± s.d. values (n = 10). (G) The relative expression of STAT3 and VEGF-A protein in the different groups were analyzed by western blotting. 
Asterisks indicate significant differences compared with negative controls versus SGC-7901-miR-874 inhibitor and AGS-pre-miR-874, respectively; P < 0.05. NC vs f, LV-STAT3), whereas knockdown of endogenous STAT3 (AGS) inhibited these behaviors in HUVECs (Figs. 4E-4H; e, STAT3-shcontrol vs STAT3-shRNA). Intriguingly, the inhibitory effect of STAT3 silencing on these cellular phenotypes was consistent with the effect of miR-874 overexpression. Subsequently, we investigated whether STAT3 could counteract the suppression of these cellular phenotypes induced by miR-874 overexpression in HUVECs. The vector LV-STAT3, which contains only the STAT3 coding sequence, was constructed for STAT3 expression without miR-874 targeting. AGS cells were co-transfected with miR-874 precursor and either LV-STAT3 or LV-NC. The data clearly confirmed that ectopic expression of STAT3 effectively reversed the suppression of HUVEC proliferation, migration and invasion caused by miR-874 overexpression (Figs. 4E-H; c, pre-miR-874+LV-STAT3 vs d, pre-miR-874+LV-NC). In SGC-7901 cells, we observed a similar phenomenon, which could be counteracted by down-regulation of STAT3 (Figs. 4A-D; c, miR-874-inhibitor+STAT3-shRNA vs d, miR-874-inhibitor+STAT3-shcontrol). In addition, we observed similar trends when we tested the levels of VEGF-A, STAT3, p-STAT3 protein and secreted VEGF-A protein in the supernatant (Figs. 5A-J). Taken together, these results confirmed our hypothesis that miR-874 in GC cells affects HUVEC proliferation, migration, invasion and VEGF-A expression by regulating STAT3 expression. miR-874 expression is negatively correlated with the STAT3 levels in human GC tissues To determine whether reduced miR-874 expression correlates with increased levels of STAT3 in GC tissues, eighty pairs of primary GC tissues and adjacent normal tissues were used to determine the STAT3 expression using Western blotting analysis. The results indicated that STAT3 protein levels in GC tissues were dramatically higher than in adjacent normal tissues (Figs. 5K and 5L). As shown in Fig. 5M, linear correlation analysis showed a significant inverse correlation between miR-874 and STAT3 expression in GC tissues (P < 0.01), confirming that decreased expression of miR-874 was significantly associated with increased STAT3 protein expression in this set of GC tissues. DISCUSSION miRNAs are short (20-24 nt), stable, non-coding RNA molecules that regulate 60% of coding genes by binding to mRNA molecules to prevent translation and/or promote degradation. To date, over 1,000 miRNAs have been identified, and they have been shown to participate in nearly all biological processes, including cell proliferation and tumor angiogenesis. Indeed, novel functions and mechanisms by which miRNAs regulate their target genes are regularly discovered [14,15]. Many miRNAs have been shown to act as either oncogenic factors or tumor suppressors, with their specific functions depending on the targeted mRNA. Activation of oncomiRNAs leads to inhibition of tumor suppressor genes, facilitating cell proliferation and tumor progression. Conversely, the decreased activity of tumor-suppressor miRNAs leads to increased oncogene translation, contributing to tumor formation [16]. miR-874 has been identified as a tumor-suppressor and is reportedly down-regulated in some types of cancer, including GC [17][18][19][20][21]. Interestingly, mir-874 is also involved in Mild Cognitive Impairment (MCI), such as Alzheimer's diseases [22]. 
In the present study, we confirmed that miR-874 expression is significantly lower in GC tissues and cell lines. These results indicate that the down-regulation of miR-874 plays an important role in the initiation and development of GC. In our previous study, we demonstrated that miR-874 plays a suppressive role in the growth, migration and invasiveness of GC cells [11]. In addition to these behaviors, tumor angiogenesis is also important for tumor progression. Angiogenesis is the process by which new micro-vessels sprout from pre-existing blood vessels. Abundant neovascularization is necessary for adequate nutrition during tumor development, including metastasis. Recent studies have shown that miRNAs (e.g., miR-26a, miR-103, miR-125b, miR-132, and miR-107) regulate endothelial cell functions and affect blood vessel formation and extension [9,[23][24][25][26]. Therefore, we hypothesized that miR-874 may contribute to tumor angiogenesis in GC. Tumor angiogenesis is crucially dependent on communication between the tumor and the associated endothelium [27]. The migration, invasion, proliferation and tube formation of endothelial cells (ECs) are important processes for tumor angiogenesis [28]. Here, we describe a role for miR-874 in inhibiting angiogenesis, which is supported by a number of in vitro and in vivo experiments. miR-874 depletion in GC cells promotes HUVEC proliferation, migration, invasion and tube formation in vitro and increases micro-vessel density in vivo. By contrast, enhanced expression of miR-874 suppressed these effects. Further experiments revealed that miR-874 could attenuate tumor angiogenesis by down-regulating expression of VEGF-A. These results strongly suggest that down-regulation of miR-874 enhances the development and progression of GC. However, the mechanisms underlying how miR-874 affects tumor angiogenesis were not clear. Therefore, we searched for potential targets of miR-874 in gastric cancer cells using several computational algorithms. Among the candidate target genes, we focused on STAT3 because of its known role as a regulator of many critical functions in both normal and malignant human tissues, including angiogenesis, differentiation, proliferation, survival, and immune functions [29][30][31]. Constitutive STAT3 activation www.impactjournals.com/oncotarget The expression of STAT3 in human GC specimens and adjacent normal tissues was assayed by Western blotting analysis. STAT3 expression was normalized to GAPDH levels. STAT3 expression levels in GC tissues and adjacent normal tissues were measured and analyzed, as shown by the scatterplot. (L) Representative immunoblotting results from 6 pairs of GC tissues and adjacent noncancerous tissues. (M) Linear correlation analysis was used to determine the correlation between the expression levels of STAT3 and miR-874 using SPSS software (p < 0.01). A representative data set is displayed as mean ± SD values. Asterisks indicate significant differences when compared with controls at P < 0.05. promote VEGF-A expression and stimulates tumor angiogenesis [13,32,33]. Our study shows that miR-874 negatively regulates STAT3 at the post-transcriptional level by binding to a specific target site within the 3'-UTR. Overexpression of miR-874 in human GC cell lines inhibited STAT3 and p-STAT3 production at the translational level, and ectopic expression of STAT3 effectively reversed the suppression of HUVEC proliferation, migration, invasion and VEGF-A expression caused by miR-874 overexpression. 
In addition, a STAT3 shRNA impaired the enhanced angiogenesis caused by miR-874 knockdown. These results suggest that the inhibitory effects of miR-874 on angiogenesis are dependent on STAT3. We do note that this study has certain limitations. Our results showed that miR-874 can inhibit angiogenesis through the STAT3/VEGF-A pathway (Fig. 6). However, the regulation of angiogenesis-related cytokines in cancer cells is quite complex, and we do not rule out the possibility that other signaling pathways that modulate VEGF-A expression may also be affected by miR-874. In summary, we present evidence that miR-874 suppresses GC progression by modulating angiogenesis through the STAT3/VEGF-A pathway. We also demonstrate that the levels of miR-874 expression in resected patient gastric tumor tissues are significantly lower than in adjacent normal tissues and that they are inversely correlated with STAT3 protein levels in these tumors. These findings indicate that our study has clinical relevance and that miR-874 overexpression and/or strategies for inhibiting STAT3/VEGF-A signaling may have therapeutic applications for gastric cancer. Tissue samples Written informed consent was obtained prior to collection. Acquisition of tissue specimens and the study protocol were performed in strict accordance with the regulations of the Nanjing Medical University Institutional Review Board. Paired specimens of human gastric cancer tissue and adjacent normal gastric tissue were obtained from 80 patients with GC who had undergone surgical operations at the Department of General Surgery, First Affiliated Hospital, Nanjing Medical University, China, and samples were immediately frozen in liquid nitrogen until RNA and protein extraction. In all cases, diagnoses and grading were confirmed by two experienced pathologists and were carried out in accordance with the Cancer criteria of the American Joint Committee. Cells and cell culture The human GC cell lines AGS(ATCC, USA) and BGC823, MKN28, SGC-7901 as well as the human normal gastric epithelial cell line GES-1 (CBTCCCAS, China) were cultured in RPMI-1640 medium supplemented with 10% fetal bovine serum (WISENT, Canada) and antibiotics (1% penicillin/streptomycin, Gibco, USA). Human umbilical venous endothelial cells (HUVECs) (ATCC, USA) were cultured in Endothelial Cell Growth Medium. All cell lines were grown in a humidified chamber supplemented with 5% CO 2 at 37°C. miRNA RT-PCR Total RNA was extracted as above. Target-specific reverse transcription and Taqman microRNA assay probes were assayed using the Hairpin-it TM miRNA qPCR Quantitation Kit (Genepharma, CHINA), according to the manufacturer's instructions. The reactions were also performed using the 7500 Real-Time PCR System. The snRNA U6 was selected as an endogenous reference to calculate the relative expression levels of miR-874 in each sample using the 2 -∆∆Ct method. All experiments were performed independently in triplicate. Vectors containing green fluorescent protein (GFP) and the puromycin sequence for the overexpression and shRNA targeting of human STAT3 using lentiviral gene transfer were constructed by Genephama Biotech (Shanghai, China). The scrambled lentiviral construct was used as a negative control. For the shRNA targeting of human STAT3, we used the oligonucleotide sequence GCGTCCAGTTCACTACTAA. When SGC7901 cells were at 40-50% confluence, the cells were transfected with the lentiviral vectors (LV-STAT3, LV-NC; STAT3-shRNA, STAT3-shcontrol). 
Stable cell lines were selected using 3 μg/ml bulk puromycin-resistant cultures (puromycin, Sigma, USA) for one week. Afterward, cells were analyzed using quantitative RT-PCR and Western blotting analysis for STAT3 expression. 3'-UTR luciferase constructs and assay The 3'UTR of the STAT3 mRNA containing either the putative or mutated miR-874 binding site was synthesized by Shenggong (Shanghai, China). This sequence was cloned into the FseI and XbaI restriction sites of the pGL3 luciferase control reporter vector (Promega, USA) to generate the STAT3 3'UTR reporter. A total of 5 × 10 5 AGS cells stably transfected with pre-miR-874 or miR-NC were seeded into 24-well plates. Cells were cotransfected with 0.12 μg pGL3-WT-STAT3 or pGL3-MUT-STAT3 3'UTR reporter plasmid. In addition, 0.01 μg Renilla luciferase expression plasmid was cotransfected into the above cells as a reference control. Firefly and Renilla luciferase activities were measured using the Dual-Luciferase reporter assay (Promega, USA) 36 h after transfection, according to the manufacturer's instructions. Relative luciferase activity was calculated as the ratio of firefly fluorescence/Renilla fluorescence. Western blotting assay Total protein from frozen tissue and cell lines was extracted using the following lysis buffer: 50 mM Tris-HCl (pH 7.4), 150 mM NaCl, 1% Triton X-100, 0.1% SDS, 1 mM EDTA, and protease inhibitors (1 mM phenylmethanesulfonyl fluoride and a protease inhibitor cocktail). The protein extracts were size-fractionated using SDS-polyacrylamide gel electrophoresis and transferred to PVDF membranes (Bio-Rad, USA). After blocking, the membranes were incubated with specific primary antibodies in dilution buffer at 4°C overnight. The blotted membranes were incubated with HRP-conjugated anti-mouse or anti-rabbit IgG (Biotime, China) antibodies at room temperature for 2 h. Next, protein expression levels were detected using an enhanced chemiluminescence (ECL) detection system, according to the manufacturer's instructions. GAPDH was used as an internal control. Mouse antibodies to STAT3, rabbit antibodies to p-STAT3 (Cell Signaling Technology, USA), and mouse antibodies to VEGF-A(Abcam, UK) were used. ELISA for VEGF-A The protein levels of VEGF-A in the supernatant were measured using the Quantikine human VEGF-A ELISA kit (NeoBioscience, China), according to the manufacturer's instructions. Briefly, cells were seeded into 6-well plates and cultured to 80% confluence, and the cells were then switched to fresh medium. The supernatants were collected, and the number of cells in each well was counted after 24 h. The level of VEGF-A in the supernatant (100 μl) was determined and normalized to the cell number. A serial dilution of human recombinant VEGF-A was included in each assay to create a standard curve. HUVEC proliferation assay SGC-7901 and AGS cells were cultured as described above. When the cells reached 80% confluence, the culture medium was changed to RMPI-1640 without fetal bovine serum. Following an additional 24 h of culturing, the supernatant was collected as conditioned medium and stored at -80°C. HUVECs were suspended at a density of 2 × 10 4 cells/ml, and 100 μl cells per well were seeded into 96-well plates. After 24 h, the medium was changed to conditioned medium, as described above. Cell proliferation was assessed using the Cell Counting Kit-8 (Dojindo Laboratories, Japan), according to the manufacturer's protocol. 
At time point 72 h, 10 μl of cell proliferation reagent solution was added to each well of the 96-well plate, and the cells were incubated for 3 h in a CO 2 incubator. The absorbance at 450 nm (OD value) was measured using a microplate reader; absorbance at 630 nm was used as a reference. Average OD values were used to calculate the total number of cells for each group. HUVEC migration and invasion assays Cell migration and invasion assays were performed using a chamber 6.5 mm in diameter with an 8 μm pore size (Corning, USA). The upper chambers were seeded with 1 × 10 4 HUVEC cells. Subsequently, the different groups of conditioned medium were added to the lower chamber. For the invasion assay, the top chamber was coated with 100 μl of 1 mg/ml Matrigel (BD, USA). Cells were incubated at 37°C for 36 h, and then cells in the upper chamber were removed using cotton swabs. Cells migrating into or invading the bottom of the membrane were stained with 0.1% crystal violet for 20 min at 37°C, followed by washing with PBS. Four random fields from each membrane were photographed and counted for statistical analysis. In vitro HUVEC tube network formation assay For the tube network formation assay, each well of a 96-well plate was pre-coated with 50 μl of Matrigel (BD, USA) and allowed to polymerize for 30 min at 37°C. Next, cells were seeded onto Matrigel-coated wells at a density of 2 × 10 4 cells per well in conditioned medium at 37°C. Tube formation was found to be optimal after 4 h. Tube images were taken using a digital camera attached to an inverted phase-contrast microscope. Total tube length in each well was measured and calculated using image pro plus (IPP). Tumorigenicity in vivo All animal experiments were conducted according to the guidelines of the NJMU Institutional Animal Care and Use Committee. A total of twenty-four nude mice (BALB/c nude mice, Vitalriver, China; four weeks old) were randomly divided into four groups, and SGC7901-NC, SGC-7901-miR-874 inhibitor, AGS-NC or AGS-pre-miR-874 stable cells were inoculated subcutaneously into the flanks of nude mice. The mice were euthanized after 3 weeks. Immunohistochemistry (IHC) for subcutaneous grafts All specimens were fixed in 4% formalin and embedded in paraffin. MaxVision TM techniques (Maixin Bio, China) were used for IHC analysis, according to the manufacturer's instructions. After blocking the endogenous peroxides and proteins, 4 μm sections were incubated overnight at 4°C with diluted primary antibodies specific for STAT3, CD31 (Cell Signaling Technology, USA), and VEGF-A (Abcam, UK). Next, the slides were incubated with an HRP-Polymer-conjugated secondary antibody at 37°C for 1 hour. The slides were then stained with a 3,3-diaminobenzidine solution for 3 min and counterstained with hematoxylin. Tumor slides were examined in a blinded manner. Three fields were selected for examination, and the percentage of positive tumor cells and cell-staining intensity were determined. Statistical analysis Data are expressed as mean ± standard deviation (SD) values. Clinicopathological findings were compared using unpaired t-tests or Pearson x 2 tests. Analysis of variance (ANOVA) was used to compare the control and treated groups. P-values <0.05 were considered to be statistically significant. Statistical analysis was performed using the SPSS software (Version 15.0).
v3-fos-license
2019-04-02T13:14:31.504Z
2017-12-13T00:00:00.000
91027446
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBYSA", "oa_status": "GOLD", "oa_url": "http://ologyjournals.com/beij/beij_00001.pdf", "pdf_hash": "1c135b2e9eee5a6131bc58385160ba29c5a573de", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:195", "s2fieldsofstudy": [ "Biology" ], "sha1": "e9dc09b22ee1b7009916091b9a2d11a464a7cf8f", "year": 2017 }
pes2o/s2orc
Commonly used statistical methods for detecting differential gene expression in microarray experiments Bioinformatics tools provide the needs for carrying out microarray analysis. Differential gene expression analysis reveals the substantial aspects of biological pathways and takes a leading part for further hypothesis development. The principle in analyzing the gene expression data is the need for determination the genes of which expression models differ by phenotype or an experimental condition. Microarrays are employed to detect the distinctive gene expression profiles in a wide variety of experimental conditions. In the field of bioinformatics, different computer science technologies and statistical methods are considered. This multidisciplinary approach allows the understanding of the relationship between statistical methods, bioinformatics applications and computer science technologies. Images resulting from microarray experiments are not directly interpreted to reveal differences in sample condition. To make microarray experiments interpretable, a number of algorithms and statistical approaches need to be applied. The raw dataset has to go through preprocessing step prior to a series of subsequent analyses as significance analysis clustering and visualization of the relevant biological components and samples. Many microarray studies are designed to detect genes associated with different phenotypes. This review attempts to give an overview of statistical methods for identifying differentially expressed genes in microarray experiments. In this review, our aim is to summarize briefly basic methods as a guide for differential gene expression studies. All the summarized methods are given from the basic to more complicated in the review. . Introduction Bioinformatics research studies are concerned with the analysis of large quantities of biological data with the help of computational techniques. In recent years, advances in molecular biology and information technology have allowed a major part of the genomes of various species to be investigated. Current bioinformatics studies are concerned with the structural and functional aspects of genes and proteins since the amount of data produced in the field of molecular biology is enormous. Most of these studies are related to the Human Genome Project. In summary, bioinformatics is an interdisciplinary area at the intersection of information technology, statistics and biology. 1,2 The most basic tasks in the field of bioinformatics are the creation and development of biological databases. The majority of these databases consist of nucleic acid sequences and protein sequences derived from them. One of the most important applications in the field of bioinformatics is the analysis of sequence information. Bioinformatics investigates the genetic structures of all living organisms through the development of new information technologies to clarify complex biological questions. 1 A primary goal of empirical genetic studies is the identification, quantification, and comparison of genetic differentiation among individuals, populations, species and studies. 3 Microarray technology has made it possible to measure the expression of thousands of genes simultaneously. 4 Statistical analysis of microarray data is started through software programs using CEL files defined as raw data. Prior to the start of the analysis, quality assessment of raw data is performed as the first step. 
In order to evaluate the homogeneity of the arrays and to compare the density distribution between the arrays, box graphs are plotted for each array using the densities of the logarithm2 base of the raw data. Images of the CEL files are obtained to observe the dimensional distributions of the densities on each array and to detect dimensional artifacts. MA-plots are used to compare the expression values for all possible pair of arrays with a probeset-wise median array. The MA plots are generated by plotting M values which are obtained by logarithmic ratios versus A values which are average logarithmic intensity values. The pre-normalization quality control step can be complemented by histograms drawn to assess the density distributions of each array. 5 After quality control of raw data, background correction and normalization should be applied to the data using background correction methods such as RMA (Robust Multiple-Array Average) method. With the RMA method, the probe-level signal is removed from the background signal. Quantile normalization is performed by the RMA method and it is ensured that all the arrays have the same quantile. Using the RMA method, the expression set to be used in the analysis is generated by normalized and the background corrected intensities. After the background correction and the normalization methods are performed, box charts related to each array are drawn to re-evaluate the quality control. Following normalization and background correction, a list of genes that differ between two different conditions can be obtained by applying various statistical tests to the expression dataset to be used for analysis. 5 Preprocessing of microarray data Gene expression measurement is generally obtained as a measure of fluorescence intensity. 6 Background fluorescence can arise from many sources such as non-specific binding, residual precipitates after the washing step, optical noise from the scanner. 5,7 Measurement values may have undergone various adjustments in the device system, such as calibration. Thus, in the presentation of gene expression data, it must be explained how the values are generated by the device system. 5,6 These expression measures always contain a component called "background noise." Local background noise levels are measured from the areas of the glass slide that do not contain probes. The background correction tries to remove non-specific background noise and local variations of the overall signal level on each chip. 5 The most common method to remove the background effect is to remove the measured fluorescence intensity around the spots. 8 Microarray gene expression data sets consist of is given as follows [6]; The most common methods used for background correction in microarray analysis are; The "Robust Multi-Array Average (RMA) Background Correction" method and the "MAS 5.0 Background Extraction" methods. 9 RMA background correction: RMA background correction is a method that uses only Perfect Match (PM) intensities. PM values are corrected using a global model for the distribution of probe intensities. 7 The model is based on the experimental distribution of probe intensities. Observed PM probes are modeled as a Gaussian noise component with µ average and 2 σ variance. 7 To avoid negative expression values, the normal distribution is truncated at zero. If the observed density is assumed to be Y, the correction will be as follows; RMA background correction has been one of the most commonly used pre-processing method in the recent literature. 
[10][11][12] Performed assessments of the measure's precision, consistency of fold change, and specificity and sensitivity of the measure's ability to detect differential expression and demonstrated the substantial benefits of using the RMA measure to users of the Gene Chip technology. They used data from spike-in and dilution experiments to conduct various assessments on the MAS 5.0, dChip and RMA expression measures. Irizarry have demonstrated that RMA has similar accuracy but better precision than the other two summaries and RMA provides more consistent estimates of fold change. 12 The study of 13 implements seven data extraction methods including MAS 5.0 and RMA to calculate expression indices from Affymetrix microarray gene expression data and tested use of different background correction methods calculated the correlation coefficient for each pairwise comparison of background correction methods. 15 Quality assessment: It is necessary to evaluate the quality of the data before the normalization of the arrays. Quality control assessment should be carried out to determine whether the quality of experimental data is acceptable and whether any hybridization should be repeated. 5,7 Various descriptive data plots are drawn to identify potential problems with hybridization or other experimental structures in the quality control evaluation process. Quality control plots are basically divided into diagnostic and spot statistics. 6,7 Diagnostic Plots: The diagnostic plots include various plots such as MA-plots for evaluating intensity bias and histograms for examining signal-to-noise ratios for each channel. Diagnostic plots are usually used to observe non-linear trends between two channels. 7 a. MA plots: M and A are commonly used variables in the analysis of two-color arrays. A is defined as follows; Cy5 and Cy3 denote green and red dye intensities for a given spot, respectively. A variable is a measure of the total intensity of the logarithmic transformation of a spot. Thus, if the combined red and green intensities are high for a particular spot, the A value will also be high. 7,9 M variable is defined as follows; ( ) ( ) The M variable is the logarithmic transformation of the intensity ratio. The M value shows which of the red and green dyes are more binding to a particular spot array. 7 MA plots are used to investigate density bias. A disproportionate amount of spot above or below the x-axis on the graph indicates a problem in the array. MA plots are an indication of whether normalization within the array is required. 6,7 Alvord et al.demonstrated the use of some of the exploratory plots including boxplots, volcano plots and MA plots for the expression level data on the soybean genome 14 . Lu et al.'s study, can be cited as an example of MA-plot application in method comparison studies, in which MA-plots were created on the raw data and normalized data to compare normalization methods. 15 b. Histograms: In microarray designs, it is very important to obtain the histograms of the p-values of tests conducted to identify different gene expression. Histograms are graphs that are easy to interpret and contain considerable information. A histogram is an indication of whether there is a signal in the gene and whether the genes are differently expressed. Histograms also allow for estimation of how many genes are differentially expressed in reality. 16 Spot statistics plots: Spot Statistics help to predict the structures of spot and hybridizations. 
The main plots that can be obtained with spot statistics are spatial plots, box plots, scatter diagrams and volcanic plots. 16 a. Spatial plots: Spatial plots are used to reveal irregular spot and hybridization structures. Spatial plots are used to observe the spatial distributions of the intensities on each array and to detect the artifacts. Spatial plots play a fundamental role in determining the background correction, depending on whether there are any dimensional artifacts on the arrays. 7 b. Box plots: Box plots are one of the most commonly used plots for displaying spot and hybridization structures. At the same time, box plots can be drawn to understand the scale differences between different arrays. It is necessary to evaluate the box plots to see if between-array normalization is required. The homogeneity of the arrays can be observed quite clearly from the box plots. 7,16 c. Scatter diagrams: Scatter diagrams used to compare the expression values of two samples are the most commonly used plots for visualizing microarray data. 17 In the first step of the microarray data analysis, a scatter diagram is drawn between the two intensity channels to view the general structures and variability. Scatter diagrams are also commonly used to find out slides lying away from the center, which have abnormal expression structures. 18 d. Volcano plots: Volcano Plots are used to summarize fold change and t-test criteria. A volcano plot is constructed by plotting the negative log of the p-value on the y-axis and log of the fold change between the two conditions on the x-axis. For each gene, there is a point on the graph that represents the t-statistic and the fold change 16 (Figure 1). Normalization: The purpose of the normalization phase is to adjust the data according to the technical variation. Variations can cause measurement differences between general fluorescence intensity levels of various arrays. The normalization process is necessary to make the measured values obtained from different arrays comparable. 9 Normalization methods depend on which microarray technology is used. Generally, logarithmically transformed data are used for further analysis. 19 The most commonly used methods of normalization are as follows. 7 1. Scaling Normalization Method 2. Nonlinear Normalization Methods Contrast Normalization Scaling normalization method: The Scaling Normalization Method scales all the arrays with the same average intensity by choosing a reference array. The constructed procedure is to determine a reference array and then create a linear regression between each array and the chosen array without a constant term. Subsequently, the regression line is used as a normalization relation. 7 Nonlinear normalization methods: Non-Linear Normalization Methods perform non-linear corrections between arrays. These methods generally perform better than linear corrections such as the scaling method. 19 There are many nonlinear normalization methods in the literature. Workman et al. proposed a nonlinear normalization method using array signal distribution analysis and cubic splines. 18 Chen et al. proposed a subset normalization to adjust for location biases combined with global normalization for intensity biases. 19 Edwards 19 proposed a nonlinear LOWESS normalization method used in one channel cDNA microarrays mainly for correcting spatial heterogeneity. Quantile normalization: The purpose of quantile normalization is to impose the same experimental density distribution on each array. 
If two data vectors have the same distribution, a Q-Q graph has a straight diagonal line with 1 slope and 0 intercept. 20 Drawing quantiles of two data vectors against each other and designing each point on a 45-degree diagonal line leads to a transformation that allows both data vectors to have the same distribution. 20 Cyclic loess normalization: The cyclic loess method is a generalization of the global loess method in which the Cy5 and Cy3 channel intensities are normalized using MA graphics. When dealing with single channel array data, array pairs are normalized according to each other. The Cyclic Loess method normalizes the intensities for an array set by working in dual form. 21 Contrast normalization: In contrast normalization, the data is transformed into a contrast set and a nonlinear MA-plot normalization is performed. Afterward, inverse transformation is applied. 7 Normalization procedures are essential as the first step of expression analyses for adjusting artifacts on samples and making samples comparable. Choice of normalization procedure plays a fundamental role in the final results of gene expression analysis. There are several methods for normalization in the literature. Quantile normalization procedure is one of the most commonly used between these methods. Qian et al. used quantile algorithm for normalization in their study which is aimed to identify differentially expressed genes and compare the expression profiles. 22 The raw expression data was prepro cessed Statistical methods used in differential gene expression analysis: The principle in analysing the gene expression data is the need for determination the genes of which expression models differ by phenotype or an experimental condition. 1 A simple approach to selecting genes is to use the "fold change" criteria. This is only possible if there are no or only a few repetitions. However, an analysis based only on the fold change does not allow for the assessment of the significance of expression differences in the presence of biological and experimental variations that may vary from gene to gene. This is the main reason for using statistical tests to evaluate differential expressions. Parametric tests generally have a higher power if the underlying model assumptions are met. 9 When doing the statistical analysis of microarray data, an important question is determining which scale to analyze the data. Generally, a logarithmic scale is used to approximate the distribution of the repeated measures for each gene to roughly symmetric and normal. 7 The variance-stabilizing transformation derived from an error model for microarray measurements can be used to make the variance of the measured intensities independent of their expected value. This may be advantageous for gene-based statistical tests based on variance homogeneity. 16 This will reduce variance differences between experimental conditions arising from differences in intensity level. However, it should not be forgotten that differences in the variance between conditions may also have gene-specific biological causes. 19,20 t-test comparisons for one or two groups, variance analysis for multiple groups, and trend tests are frequently used models to assess differential gene expression. Due to lack of knowledge about the coregulation of genes, linear models are usually calculated separately for each gene. When lists of relevant genes are identified, researchers can begin coordinated regulatory studies to further model their common actions. 
7,20 A statistical testing approach for each gene is common. However, this approach has some difficulties. Most importantly, a large number of hypothesis tests are being performed. This potentially leads to a number of false positives. Multiple test procedures allow the assessment of the overall significance of the results of a group of hypothesis tests. These procedures focus on specificity by controlling type I (false positive) error rates such as experimental error rate or false discovery rate. These controls are statistical methods used in multiple hypothesis tests to correct for multiple comparisons. However, testing multiple hypotheses remains a challenge. Because an increase in specificity is related to a loss of sensitivity, as provided by the p-value correction methods. Therefore, the possibility of detecting the true positivity decreases. 7,9 Most microarray experiments involve only a few repetitions for each condition, making it difficult to predict gene-specific variances. Different methods have been developed to take advantage of variance information from all gene data. 18 Fold change criteria: A simple microarray experiment is carried out to determine the expression differences between two conditions. Each condition can be represented by one or more RNA samples. The simplest method used to test expression differences of genes is the "Fold Change" criterion. 16 The method calculates the logarithm rate between expression levels of the two conditions. It then evaluates whether all genes are greater than a threshold value determined for differential expression. The Fold change method is not considered reliable because it does not take statistical variability into consideration. The fold-change method is subject to bias if the data have not been properly normalized. 18 For gene i, fold change is defined by as follows; Where i x and i y denote the expression levels in the control and treatment groups for the i th gene, respectively. 16 The standard t-test is frequently used to identify differentially expressed genes between two conditions in microarray studies. Shi et al. used student's t-test to obtain differentially expressed genes between embryonic stem cells and urinary induced pluripotent stem cells. 25 Satterthwaite-welch t-test: Homogeneity of variances is rarely seen in microarray experiments. Heterogeneous sample or cell samples may cause heterogeneous variance in microarray experiments. Changing the correlation structure of expression change by the transcription factor can also cause heterogeneous variance. Therefore, Welch (Satterthwaite's) t-test method would be more appropriate for independent samples with different variances. 21 Let 1 x and 2 x be two independent gene expression data under two conditions, The Welch t-statistic is calculated as follows for a given gene; The Welch method has a special correction for the degree of freedom under the variances of different samples; i ω is the estimated squared standard error for sample i; Let j x and j s be the mean difference and standard deviation of differences, respectively. Paired sample test statistic is given by; With n-1 degrees of freedom. Moderated t-test: Moderated t-test is one of the most common methods used in microarray studies. The modification is to add a small value to the standard deviation, which reduces the variability of the t-value, while the t-statistic is being calculated. 
The main purpose of the moderated t-test is to reduce the statistical significance of genes with a small standard deviation to avoid false positives. 19,28 Moderated t-statistic is described by as follows; Where 0 s is a selected constant to reduce the variability of ' i T . The moderated t-statistic improves the ranking performance if the purpose is to create a short sorted list. 21,29 It has been proven in many studies that the moderated t-statistic performs better than the classical t-test and the fold change criteria. 30,31 Moderated t-test is most frequently used method to identify differentially expressed genes between two conditions in the literature. The ANOVA model for microarray data can be determined in two steps. 34 The first stage is the normalization model; Where ijgr y is the logarithm of signal intensity for the i th array, j th dye, g th gene and r th measurement. μ is the overall average expression level. A is the effect of the array at the measured intensity, D is the effect of the dye, and AD is the effect of interaction between dye and array. The first step is to form ijgr r term from the measured intensities. In the second step, gene-specific effects are modeled in terms of residuals of the normalization method. 21 The gene-specific model is expressed as follows; G is the average intensity for a particular gene, i AG is the effect of an array on this gene, j DG is the effect of dye on this gene, and ijr ∈ is the residual value. The variety-by-gene (VG) term is the main interest in the analysis, and reflects the variability in expression levels between samples for a particular gene. ij VG is expressed as a "catchall" term for the effects related to samples. In the simplest case, ij VG is an indicator of the condition represented by the sample for j dye and i array. In more complex experiments, the design structure at the biological sample level is reflected in the VG terms. 34 When there are duplicated spots in an array, the model should include additional terms for labeling and spot effects. The Gene-specific model can be modified by removing the terms DG and AG for singlecolor data. 34 Hypothesis testing involves comparing two models. The null hypothesis suggests that there are no differential expressions (the VG values are equal to zero) and the alternative hypothesis suggests that there are differential expressions between conditions (the VG values are not equal to zero). 29 Conclusion In this review, we have attempted to give a broad overview of the statistical methods used for differential gene expression analysis. The microarray has made possible the simultaneous investigation of thousands of genes. Microarray technology helps researchers learn about various kinds of diseases. The identification of differentially expressed genes has a great importance to understand biological issues. Scientists can find out the expression levels of thousands of genes by using microarrays. Many of the genetic discoveries show that fundamental need is to implement an appropriate statistical method for finding the differentially expressed gene lists. For this reason, both information technologies and statistical methods have been adapted and developed according to this need. The use of bioinformatics tools for the studies related to gene expression datasets has seen a massive increase in recent years. It is without a doubt, progress in the field of bioinformatics in the future will be at the forefront of other branches of science to define genetic structures. 
Bioinformatics has empowered the scientific researchers to understand the significant points of genetic issues and various kinds of diseases.
v3-fos-license
2018-04-03T03:41:25.667Z
2014-07-09T00:00:00.000
15537907
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/journals/er/2014/317940.pdf", "pdf_hash": "1f04bd51640809398d2f589e2f5789a7f594d5f0", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:196", "s2fieldsofstudy": [ "Biology", "Chemistry", "Environmental Science" ], "sha1": "1f04bd51640809398d2f589e2f5789a7f594d5f0", "year": 2014 }
pes2o/s2orc
Sequential Statistical Optimization of Media Components for the Production of Glucoamylase by Thermophilic Fungus Humicola grisea MTCC 352 Glucoamylase is an industrially important enzyme which converts soluble starch into glucose. The media components for the production of glucoamylase from thermophilic fungus Humicola grisea MTCC 352 have been optimized. Eight media components, namely, soluble starch, yeast extract, KH2PO4, K2HPO4, NaCl, CaCl2, MgSO4 ·7H2O, and Vogel's trace elements solution, were first screened for their effect on the production of glucoamylase and only four components (soluble starch, yeast extract, K2HPO4, and MgSO4 ·7H2O) were identified as statistically significant using Plackett-Burman design. It was fitted into a first-order model (R 2 = 0.9859). Steepest ascent method was performed to identify the location of optimum. Central composite design was employed to determine the optimum values (soluble starch: 28.41 g/L, yeast extract: 9.61 g/L, K2HPO4: 2.42 g/L, and MgSO4 ·7H2O: 1.91 g/L). The experimental activity of 12.27 U/mL obtained was close to the predicted activity of 12.15. High R 2 value (0.9397), low PRESS value (9.47), and AARD values (2.07%) indicate the accuracy of the proposed model. The glucoamylase production was found to increase from 4.57 U/mL to 12.27 U/mL, a 2.68-fold enhancement, as compared to the unoptimized medium. Introduction Glucoamylase or 1,4--D-glucan glucohydrolase (EC 3.2.1.3) is an industrial enzyme which can degrade amylose and amylopectin by hydrolysis of both -1,4 and -1,6 glucosidic links, present in starch, resulting in production of -D glucose [1]. There are two stages in the production of industrial starch syrup: liquefaction and saccharification. In the first step, thermostable -amylases are used to liquefy starch. Following this, saccharification is carried out at 55-60 ∘ C with fungal glucoamylases. The glucoamylase from mesophilic fungi (e.g., Aspergillus niger) is unstable due to its exposure to higher temperatures for a prolonged duration [2]. This disadvantage necessitates the use of thermostable glucoamylases derived from thermophilic fungal sources [3] for industrial usage. Glucoamylase production depends on many media components such as carbon source, nitrogen source, mineral salts, and micronutrients. Therefore it is necessary to optimize the medium components for the enhanced production of glucoamylase [5,[9][10][11]. The classical one-variable-at-a-time (OVAT) approach involves altering the concentration of one of the components and maintaining the others, at a specified level. This is usually problematic since it is laborious and the interaction effects between the various media components are not taken into consideration. The shortcomings of this approach are overcome by the use of statistical techniques like 2 Enzyme Research Plackett-Burman design (PBD), steepest ascent, and response surface methodology (RSM) [12]. Beginning from a large collection of factors, PBD helps to identify the main factors that would be taken up for further optimization processes, through lesser number of trials. The significant factors chosen from PBD are sequentially moved along the path of steepest ascent to target the maximum production of glucoamylase. The levels of the components obtained along the region of maximum response are used in central composite design (CCD), a response surface methodology technique. These form the right set of techniques leading to the optimal concentration of the various significant media components. 
This approach of arriving at the optimal media composition has been practiced by various researchers in many fermentation processes [13][14][15]. To the best of our knowledge, there are no available reports on the optimization of media components for glucoamylase production using Humicola grisea. Therefore, in the present report, the media components (soluble starch, yeast extract, KH 2 PO 4 , K 2 HPO 4 , NaCl, CaCl 2 , MgSO 4 ⋅7H 2 O, and Vogel's trace elements solution) for the production of glucoamylase by Humicola grisea MTCC 352 were optimized using response surface methodology that included a Plackett-Burman design, path of steepest ascent, and central composite design. Microorganism, Inoculum Preparation, and Fermentation Conditions. The microorganism, used in the study, Humicola grisea MTCC 352, was obtained from Microbial Type Culture Collection, Chandigarh, India. The strain was maintained on potato dextrose agar (PDA) slant, grown at 45 ∘ C for 10 days before being stored at 4 ∘ C. The strain was subcultured, once every 2 months. The fermentation was started with 2 mL of conidial inoculum prepared using 0.15% Triton X-100, that was added to 250 mL Erlenmeyer flasks containing 100 mL of medium (glucose 1 g, yeast extract 0.3 g, KH 2 PO 4 0.1 g, K 2 HPO 4 0.1 g, NaCl 0.1 g, CaCl 2 0.1 g, MgSO 4 ⋅7H 2 O 0.1 g, and 0.5 mL of Vogel's trace element solution), adjusted to pH 6. The inoculum culture was incubated at 45 ∘ C for 4 days at 150 rpm. Vogel's trace elements solution was constituted by the following, as per literature [16]: citric acid monohydrate 5 g, Based on preliminary experiments (data not shown), soluble starch and yeast extract showed better yields for enzyme production. Therefore, for the production medium, soluble starch was used as the carbon source in place of glucose. All the other media constituents and the culture conditions remained unaltered. Both the inoculum culture and production media were autoclaved for 15 minute at 121 ∘ C and 15 psi. The spore suspension of the fungal strain (5 mL) was inoculated in 100 mL of the production medium, taken in a 250 mL flask, for a period of 4 days. Extraction of Extracellular Glucoamylase. After fermentation, the broth was subjected to filtration through Whatman No. 1 filter paper. The filtrate was centrifuged at 13000 rpm for 20 minutes to remove the fungal mycelia. The cell-free supernatant was assayed for glucoamylase activity. All the experiments were carried out in triplicate and the average values were reported. Glucoamylase Activity Assay. 0.05 mL of cell-free supernatant was incubated with 0.7 mL of 50 mM citrate buffer (pH 5.5) and 0.25 mL starch solution (1%, w/v) at 60 ∘ C for 10 minutes. The reaction was stopped by placing the tubes in a boiling water bath for 10 minutes. After bringing back to room temperature, the concentration of glucose formed was determined by glucose oxidase/peroxidase (GOD/POD) method [17]. One unit of glucoamylase activity was defined as the amount of enzyme that releases 1 mol glucose from soluble starch per minute under assay conditions. Plackett-Burman Design. Plackett-Burman design was used to screen the media components and identify the significant components that influence the higher production of glucoamylase. Eight independent variables were chosen with three different levels, namely, low, mid, and high factor settings, coded as −1, 0, and +1, respectively, with their actual values (Table 1). 
These variables were screened in 13 experimental runs that included a center point, according to Enzyme Research 3 Table 2: Plackett-Burman experimental design matrix for glucoamylase production. Run Glucoamylase activity (U/mL) PBD (Table 2) along with the response (glucoamylase activity).The center point experiment was performed to obtain the standard error of the coefficients. The Plackett-Burman design was based on the first-order model, shown in where is the glucoamylase activity (U/mL), 0 is the model intercept, is the linear coefficient, and is the level of the independent variable [12]. Path of Steepest Ascent. Following the first-order model based on PBD, new sets of experiments were performed in the direction of maximum response as described by steepest ascent method [12]. In this approach, the experiments were started at the midlevel of the statistically significant factors that were picked from PBD. The levels of the each factor were increased depending on their magnitude of the main effect. Experiments were continued until no further increase in response was observed (Table 3). Central Composite Design and Response Surface Methodology. In order to obtain the optimum values of each factor, a CCD was performed. The CCD was used to obtain a quadratic model consisting of factorial points (−1, +1), star points (−2, +2), and central point (0) to estimate the variability of the process with glucoamylase yield as the response (Table 4). Response surface methodology was employed to optimize the four selected significant factors, namely, soluble starch ( 1 ), yeast extract ( 2 ), K 2 HPO 4 ( 3 ), and MgSO 4 ⋅7H 2 O ( 4 ), which increase the glucoamylase production. In this methodology, a 4-factor, 5-level CCD with 31 runs was employed (Table 5). A quadratic equation was used to fit the response to the independent variables as given in (2) where is the predicted response of the glucoamylase activity (U/mL), 0 is the offset term (constant), is the linear effect, is the quadratic effect when = and interaction effect when < , is the squared term, and and are the coded independent variables for statistical calculations according to where is the coded value of the independent variable, is the real value of the independent variable, 0 is the real value of the independent variable on the center point, and Δ is the step change value [12]. Enzyme Research 5 (Table 6), which depicts the importance of attaining higher glucoamylase production. Table 6 shows the main effects, coefficients, and standard error along with the , values and confidence levels of these components on the response (glucoamylase production). The positive and negative values of the coefficients represent the increase and decrease in glucoamylase production against the respective concentration of the components. The main effects characterize the deviations of the average between high and low levels for each one of the factors. If the main effect of a factor is positive, the glucoamylase production increases as the factor is changed from low to high level whereas the opposite behaviour (a decrease in the glucoamylase production) is observed for a negative main effect. In the current study, the media components A, B, D, E, G, and H increased the glucoamylase production at higher level whereas a decrease in response was observed for C and F components. The same phenomena were explained graphically using Pareto chart (Figure 1). 
The Pareto chart shows the importance of the individual main effect of each factor and whether it is significantly different from zero. These values are represented by horizontal columns in the chart. For a 95% confidence level and three degrees of freedom, the t value equals 3.18 and is shown in the plot as a vertical line; this indicates the minimum statistically significant effect at the 95% confidence level. It is clear from the Pareto chart that the four factors A, B, D, and G are significant, and these four factors were therefore taken up for further studies. The production of glucoamylase by Humicola grisea depends on the nutrients provided, mainly on the type of carbon and nitrogen source used. In the present study, starch plays a significant role (positive effect) as the carbon source. The literature abounds in instances showing the prominent role played by starch as a carbon source for high glucoamylase production [5,9,10]. Starch (A) appears to have an "inductive effect" and plays a significant role in glucoamylase production [11]. Similar to starch as a significant carbon source, the current study indicated the importance of yeast extract (B), as a nitrogen source, in aiding glucoamylase production. As in many other studies, yeast extract helps in the development of mycelial structures with a correspondingly higher yield of enzymes [18]. Similar results have been obtained for glucoamylase production, confirming the positive effect of yeast extract [5,6,9]. K2HPO4 (D) was also found to have a positive effect on glucoamylase production, possibly due to a buffering effect on the culture medium. The positive role of K2HPO4 in enzyme production is in concordance with the results obtained by other researchers [5,6,19]. Similarly, a positive effect was observed with the addition of MgSO4 (G). These experimental observations agree with the studies performed by Kumar and Satyanarayana [5] and Nguyen et al. [6]. Based on the regression analysis, a first-order model for the PBD in coded values was obtained. The R² value for this model was found to be 0.9859, which implies that 98.59% of the variability of the data can be explained by the model. From Table 2 it is clear that the glucoamylase production at the center point (4.57 U/mL) is close to the average glucoamylase production at the factorial points (4.85 U/mL), which suggests an absence of curvature [20,21]. Therefore, the steepest ascent method was used to obtain levels of the factors close to the optimum [22]. The Path of Steepest Ascent. In the path of steepest ascent methodology, experiments were conducted using the four significant factors identified from the first-order model given by the PBD. This was done in order to locate the vicinity of the optimum by changing the levels of these factors in proportion to the magnitude of their coefficients. Starting from the center point of the four significant factors obtained from the PBD (the other four factors were maintained at the low level of the PBD), the path of steepest ascent was followed in the direction in which the concentrations of all four factors increased (since all four factors had positive effects). The experimental design and the results of the path of steepest ascent are shown in Table 3; a short sketch of how such a path can be generated from the model coefficients is given below.
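The following sketch shows how settings along such a path can be generated: each significant factor is moved from its centre point in proportion to its first-order coefficient. The centre values, coefficients, and base step are hypothetical placeholders, not the values used to construct Table 3.

```python
# Sketch of generating runs along the path of steepest ascent; all numbers are placeholders.
import numpy as np

center = np.array([20.0, 5.0, 1.0, 1.0])      # starch, yeast extract, K2HPO4, MgSO4.7H2O (g/L)
coeffs = np.array([0.60, 0.35, 0.15, 0.10])   # hypothetical first-order coefficients (all positive)
base_step = 2.0                               # step (g/L) assigned to the factor with the largest effect

# Each factor's step is proportional to its coefficient relative to the largest coefficient.
steps = base_step * coeffs / np.abs(coeffs).max()

for run in range(1, 9):
    levels = center + run * steps
    print(f"Run {run}: " + ", ".join(f"{x:.2f}" for x in levels))
```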
From Table 3, it is inferred that the highest glucoamylase activity, 11.64 U/mL, was observed in Run 7, when the medium composition was soluble starch 32.82 g/L, yeast extract 9.23 g/L, K2HPO4 2.33 g/L, and MgSO4·7H2O 1.98 g/L. Further increments in the concentrations of the media components resulted in a dip in glucoamylase production, which may be due to the inhibitory effect of a high concentration of one of the components. Thus, glucoamylase production stabilized at the seventh run, indicating that this medium composition was in the vicinity of the optimum; hence, it was chosen for further studies. Central Composite Design and Response Surface Methodology. The CCD was conducted in order to determine the true optimum concentrations of the four factors (soluble starch, yeast extract, K2HPO4, and MgSO4·7H2O) for glucoamylase production. The levels of the factors were chosen from the results of the path of steepest ascent (Run 7), and the design matrix is shown in Table 5. A total of 31 experiments were performed according to the design matrix, and the experimental results are shown in Table 5. The response was fitted with the second-order model in the coded variables, where $Y$ is the glucoamylase activity (U/mL) and $X_1$, $X_2$, $X_3$, and $X_4$ are soluble starch, yeast extract, K2HPO4, and MgSO4·7H2O, respectively. The ANOVA results (Table 7) obtained in the present study are in good agreement with the general criteria that higher F and predicted R² values and lower PRESS values indicate a better fit [12]. Similarly, P values < 0.05 indicate that the model terms were significant. In this study, all the linear and square effects of $X_1$, $X_2$, $X_3$, and $X_4$ and the interaction effects $X_1X_4$, $X_2X_3$, and $X_3X_4$ were significant for glucoamylase production. Therefore, by removing the insignificant terms ($X_1X_2$, $X_1X_3$, and $X_2X_4$), a reduced model was obtained. For this reduced model, the values of the various statistical parameters were as follows: F value, 26.89; coefficient of determination (R²), 0.9397; predicted R², 0.7688; adjusted R², 0.9047; and PRESS, 9.47. The increase in the F value, the increase in the predicted R² value, and the decrease in the PRESS value indicate that the reduced model fits the data better. This is corroborated by the higher F value of the model compared with that of the lack of fit [23,24]. The linear trend line in Figure 2 shows that the data are normally distributed, which confirms that the model fits the experimental results well. Therefore, the major assumptions of the model (normal distribution of errors, equal error variance, randomization, and zero mean error) stand validated. In addition, another statistical parameter, the AARD (%), was calculated according to equation (7),

$\mathrm{AARD}\,(\%) = \dfrac{100}{N}\sum_{i=1}^{N}\left|\dfrac{Y_{i,\mathrm{exp}} - Y_{i,\mathrm{pred}}}{Y_{i,\mathrm{exp}}}\right|$,   (7)

where $N$ is the number of experimental data points. The AARD expresses the extent to which the predicted values differ from the experimental values, and a small value (<5%) is preferred for a good model [25]. For the current system, an AARD of 2.07% was obtained, which implies that the model is adequate for the data (a computational sketch of this check is given at the end of this section). In order to visualize the interaction effects of the variables on glucoamylase production, two-dimensional contour plots are shown in Figure 3. The interaction effect between two factors is shown with the other two variables kept constant at their center values. It is clear from the plots that glucoamylase production changes with the low or high levels of the factors, and the shape of a plot indicates the extent of interaction between the factors.
Among the contour plots, the elliptical shapes of the plots between soluble starch and MgSO4·7H2O and between yeast extract and K2HPO4 indicate significant interaction effects and an increase in glucoamylase production at their higher values. The interaction between K2HPO4 and MgSO4·7H2O also shows an elliptical shape, but with a negative effect (a decrease in glucoamylase production at higher values). The circular shape of the contour plots for the remaining variable pairs confirms that there was little or no interaction between them. The same phenomena are shown numerically in Table 7 (P < 0.05: presence of interaction; P > 0.05: no interaction). The reduced regression model was solved for maximum glucoamylase production using the response optimizer tool in MINITAB 14.0, and the optimum levels of each variable in uncoded units were as follows: soluble starch, 28.41 g/L; yeast extract, 9.61 g/L; K2HPO4, 2.42 g/L; and MgSO4·7H2O, 1.91 g/L, all of which were located within the experimental range. The predicted glucoamylase activity under these optimum conditions was 12.15 U/mL. Experimental Verification of the Model. In order to validate these results, experiments were performed in triplicate at the optimized values. Under the optimized conditions, the predicted glucoamylase activity was 12.15 U/mL and the average of the observed experimental values was 12.27 ± 0.16 U/mL. The good agreement between the observed and predicted values confirms the adequacy of the model. This optimization strategy enhanced glucoamylase production from 4.57 U/mL (unoptimized medium) to 12.27 U/mL (optimized medium), a 2.68-fold increase. In addition, the optimized glucoamylase activity was higher than the values reported in the literature for various thermophilic fungi, such as Scytalidium thermophilum 15.8 (3.62 U/mL) [26], Thermomyces lanuginosus A.13.37 (2.8 U/mL) [2], and Thermomyces lanuginosus ATCC 200065 (7.4 U/mL) [27], and comparable with that of Thermomyces lanuginosus TO3 (13 U/mL) [28].
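To make the AARD check of equation (7) concrete, the short Python sketch below computes it for a pair of experimental/predicted vectors. This is an illustrative sketch only: the numbers are placeholders, not the values behind the reported 2.07%.

```python
# Sketch of the AARD (%) adequacy check of equation (7); the input vectors are
# placeholders, not the experimental data of Table 5.
import numpy as np

def aard_percent(y_exp, y_pred):
    """Average absolute relative deviation between experimental and predicted values, in percent."""
    y_exp = np.asarray(y_exp, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 / len(y_exp) * np.sum(np.abs((y_exp - y_pred) / y_exp))

print(aard_percent([11.9, 12.3, 12.1, 11.6], [12.0, 12.2, 12.3, 11.7]))  # a value below 5% indicates an adequate model
```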
Significant myopic shift over time: Sixteen-year trends in overall refraction and age of myopia onset among Chinese children, with a focus on ages 4-6 years Background Myopia or near-sightedness is a major cause of blindness in China and typically develops between the ages of 6-12 years. We aimed to investigate the change in refractive error and the age of myopia onset in Chinese children from 2005 to 2021. Methods We first conducted a series of cross-sectional studies to determine the refractive states and the age of myopia onset over time, after which we analysed longitudinal data to investigate the dose-response relationship between hyperopic reserve and future risk of myopia. The analysis was based on the refraction data of children aged 4-18 years who visited the Fudan University Eye and Ear, Nose, and Throat (FUEENT) Hospital, a large tertiary hospital in Shanghai, China, for eye examinations between 2005 and 2021. We examined the prevalence of hyperopia (spherical equivalent refractive error (SERE) >0.75D), pre-myopia (-0.50D < SERE ≤ 0.75D), and myopia (SERE ≤-0.50D), the average SERE for each age group at the initial visit, the average age of myopia onset, and the safety threshold of hyperopic reserve against myopia onset. Results We included 870 372 eligible patients aged 4-18 years who attended examination between 2005 and 2021, 567 893 (65.2%) of whom were myopic at their initial visit to FUEENT. The mean SERE decreased in most (n/N = 14/15) of the age groups over the 16 calendar years, with a mean SERE for the whole cohort decreasing from -1.01D (standard deviation (SD) = 3.46D) in 2005 to -1.30D (SD = 3.11D) in 2021. The prevalence of pre-myopia increased over the 16 years (P < 0.001), while those of myopia and hyperopia remained largely stable (both P > 0.05). We observed a significant decrease in the prevalence of hyperopia (2005: 65.4% vs 2021: 51.1%; P < 0.001) and a significant increase in the prevalence of pre-myopia (2005: 19.0% vs 2021: 26.5%; P < 0.001) and myopia (2005: 15.6% vs 2021: 22.4%; P < 0.001) in children aged 4-6 years. We found an earlier myopia onset over time, with the mean age of onset decreasing from 10.6 years in 2005 to 7.6 years in 2021 (P < 0.001). Children with a hyperopic reserve of less than 1.50D were at increased risk of developing myopia during a median follow-up of 1.3 years. Conclusions We found an overall myopic shift in SERE in Chinese children aged 4-18 years over the past 16 years, particularly in those aged 4-6 years. The mean age of myopia onset decreased by three years over the same period. The “safety threshold” of hyperopic reserve we identified may help target the high-risk population for early prevention. Myopia is a refractive state where the ocular axial length is too long for its refractive power [1].Its global prevalence was estimated to have increased from 28.3% to 34.0% between 2010 and 2020 [2], especially in East Asian countries including China [3].According to a recent Chinese government official report, the overall prevalence of myopia in Chinese primary, secondary, and high school students was 36.0%,71.6% and 81.0%, respectively [4]. 
Children who develop myopia experience accelerated axial length elongation up to 2-3 years before myopia onset [5,6], which would lead to a myopic shift in their refractive status away from the age-normal hyperopia [7].Consequently, the 2019 International Myopia Institute defined "pre-myopia" in their white paper report as a "refractive state of an eye of ≤0.75D and >-0.50D" in children, where a combination of baseline refraction, age, and other quantifiable risk factors can help predict the likelihood of the future development of myopia to inform preventative interventions. Myopia typically develops between 6 and 12 years [8], with earlier onset age associated with greater risk of high myopia [9] and associated ocular pathologies [10] in the future.Characterising temporal changes in hyperopic and pre-myopic refractive status in children aged 4 to 6 years may improve our understanding of the overall trend of myopia onset and prevalence over time.However, large-scale refractive data in these younger children is lacking in current literature.We therefore aimed to investigate changes in the refractive states of Chinese children aged 4-18 years over time, with a particular focus on younger children aged between 4-6 years, using a clinical sample from a tertiary eye specialty hospital in Shanghai, China.We also aimed to investigate the change in age of myopia onset over time and the dose-response relationship between hyperopic reserve and future risk of developing myopia. METHODS We first conducted a series of cross-sectional data to determine the refractive states and the age of myopia onset over time, after which we adopted a longitudinal study design to assess the dose-response relationship between hyperopic reserve and future risk of myopia in a subgroup of the participants who did not have myopia at baseline and had at least one follow-up visit to determine whether they had progressed to myopia.We conducted the study at Fudan University Eye and Ear, Nose, and Throat (FUEENT) Hospital in Shanghai, China, a tertiary specialty hospital with annual outpatient visits exceeding two million.We received approval from the hospital's ethics committee (No. 2022145). We included all eligible patients aged 4-18 years, based on the following inclusion and exclusion criteria: 1. Spherical equivalent refractive error (SERE) and prevalence of different refractive states over time.Inclusion criteria -at least one set of refractive data available to determine the patient's refractive state, with the initial visit used for analysis; 2. Age of myopia onset over time.Inclusion criteria -at least two sets of refractive data available, with one for the hyperopic or pre-myopic state and another in the myopic state on a subsequent visit; 3. Dose-response relationship between hyperopic reserve and future risk of myopia.Inclusion criteria -at least two sets of refractive data available, with at least one set showing hyperopia or pre-myopia, regardless of refractive states observed in subsequent visits. We excluded patients with ocular pathologies that may impact refractive development in children, including congenital cataracts, strabismus, glaucoma, prescription of low-concentration atropine eye drops, and prescription of optical modalities other than single vision lenses (e.g.multi-focal spectacles or contact lenses, orthokeratology contact lenses). 
We retrieved patients' gender, age, and refractive data, including the spherical power, cylindrical power, cylindrical axis, best corrected distance visual acuity, and the cycloplegic agent used for examination, from the hospital's electronic medical record system from 2005 to 2021. We also collected relevant medical history, including ocular pathologies, concurrent medication use, and optical treatment modality. We included all patients who met the inclusion criteria within the electronic medical record system throughout the 2005-2021 period; however, data from 2013 and 2014 were not available due to a system update and a catastrophic loss of backup data. Subjective refraction We used cycloplegia before refraction in all children aged ≤12 years and optionally in children >12 years. Cycloplegia was achieved by administering 0.5% tropicamide eye drops five times in a row at five-minute intervals. Subjective refraction was measured 30 minutes after the last eye drop. Hospital-based opticians used the standardised maximum-plus-for-best-visual-acuity method in all subjective refraction procedures. Refractive state We defined SERE as sphere + cylinder/2 and used the less hyperopic eye to define the refractive state. We defined a SERE >0.75D as hyperopia, -0.50D < SERE ≤ 0.75D as pre-myopia, and SERE ≤ -0.50D as myopia. Myopia onset We defined myopia onset as a shift in the refractive state from hyperopia or pre-myopia to myopia in two consecutive visits, with the age of the child at the latter visit defined as the age of myopia onset. Statistical analyses We calculated the mean SERE and its standard deviation (SD) for each age group and for all children as a whole, stratified by calendar year, and compared them across groups using t-tests. We expressed the number of people with different refractive states as absolute values (percentages) and compared them using χ² tests. We used the Mann-Kendall test to determine whether there was a linear monotonic trend in the mean SERE, the proportions of the different refractive states, and the mean age of myopia onset from 2005 to 2021. We used multivariate Cox regression models to assess the dose-response relationship between SERE and the onset of myopia, adjusting for age and gender. To investigate the safety threshold of SERE below which the risk of myopia at subsequent visits increased significantly, we used restricted cubic splines (RCS) with four knots at the 5th, 35th, 65th, and 95th percentiles to estimate adjusted hazard ratios (HR) and 95% confidence intervals (CIs). We tested for potential nonlinearity using a likelihood ratio test comparing the model with only a linear term against the model with linear and cubic spline terms. We regarded the SERE at which the HR was 1 as the safety threshold. We also employed a linear Cox model to estimate the HR for myopia corresponding to an increase of 0.1D in SERE below and above the threshold, respectively. We performed all statistical analyses in R, version 4.2.2 (R Core Team, Vienna, Austria) and Python, version 3.7 (Python Software Foundation, Delaware, USA), with the significance level set at P < 0.05 (two-sided). SERE and prevalence of refractive states over time We observed a significant shift in refraction towards higher myopia over time across most of the age groups (P < 0.05, except for the 10-year-old group), with the mean SERE for the whole cohort decreasing from -1.01D (SD = 3.46D) in 2005 to -1.30D (SD = 3.11D) in 2021 (Figure 2 and Table S1 in the Online Supplementary Document).
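As a schematic illustration of the dose-response analysis described in the statistical methods above (a Cox model with a restricted cubic spline on baseline SERE, adjusted for age and gender), the Python sketch below fits such a model to placeholder data. The data frame, column names, and spline settings are assumptions made for illustration; the original analysis was run in R and Python with four knots at fixed percentiles, which this sketch only approximates.

```python
# Sketch (not the study's code): Cox model with a restricted cubic spline on
# baseline SERE, adjusted for age and gender. All data below are placeholders.
import numpy as np
import pandas as pd
from patsy import dmatrix
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "sere": rng.uniform(-0.5, 3.0, n),     # baseline spherical equivalent (D)
    "age": rng.integers(4, 13, n),
    "male": rng.integers(0, 2, n),
    "years": rng.uniform(0.5, 3.0, n),     # follow-up time
    "myopia": rng.integers(0, 2, n),       # event indicator (1 = developed myopia)
})

# Natural (restricted) cubic spline basis for SERE; df=3 gives an interior-knot
# layout that only approximates the paper's four knots at fixed percentiles.
spline = dmatrix("cr(sere, df=3) - 1", data=df, return_type="dataframe")
spline.columns = [f"sere_s{i}" for i in range(spline.shape[1])]

data = pd.concat([df[["age", "male", "years", "myopia"]], spline], axis=1)
cph = CoxPHFitter()
cph.fit(data, duration_col="years", event_col="myopia")
print(cph.summary)   # hazard ratios for the spline terms and the covariates
```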
The prevalence of pre-myopia in children aged 4-18 years significantly increased from 2005 to 2021 (P < 0.001), while the prevalence of myopia (P = 0.11) and hyperopia (P = 0.84) remained relatively stable over time (Figure 3). Dose-response relationship between hyperopic reserve and future risk of myopia The children included in this analysis (n = 113 066) were followed up for a median of 1.3 years, with 28.2% of them developing myopia during that period. The risk of myopia increased rapidly when the SERE at the first visit was lower than +1.50D (nonlinearity P < 0.001). We found a change in HR per 0.10D of SERE of 0.325 (95% CI = 0.317-0.333) below +1.50D and 0.925 (95% CI = 0.922-0.928) above +1.50D (Figure 5). DISCUSSION We aimed to investigate changes in the refractive state and the age of myopia onset of Chinese children aged 4-18 years over a 16-year period (2005 to 2021) using a clinical sample from a tertiary specialty hospital in Shanghai, China. We found a significant myopic shift in SERE, particularly in those aged 4-6 years, and an earlier myopia onset over time. A previous study based on large-scale clinical data from China investigated the prevalence of myopia in a population with a wide age range (1 to 95 years) and showed that the mean age of myopia onset in children was 7.5 years; however, a significant proportion of its participants did not undergo cycloplegia before measurement of refractive error, which could have compromised the accuracy of the refraction data. Additionally, that study did not analyse the trend of myopia onset over time [11]. A decrease in the age of myopia onset from 10.6 years in 2005 to 7.6 years in 2021 (i.e. by three years) can significantly impact the overall prevalence of myopia and high myopia in the population. Earlier onset of myopia has been found to predict a greater risk of future high myopia in several studies [9,12,13], likely due to faster myopia progression in younger children. For example, a previous study conducted in children of Asian ethnicity estimated that myopia progressed by a mean of 0.90D per year at seven years of age vs 0.46D per year at 14 years of age [14]. Higher myopia is associated with a greater risk of posterior ocular pathologies such as myopic macular degeneration, a leading cause of blindness in East Asia [15]. Therefore, an earlier onset of myopia could significantly increase the likelihood of future visual impairment [16].
The discrepancy between their SERE cut-off findings and ours could be partly attributed to the different ethnicity compositions of participants.The CLEERE study enrolled a ethnically diverse cohort, with 36.2% of the study population being white and 13.7% being Asian American [17].White children and Asian children experience different trajectories during their refractive development; axial length growth is typically faster in Asian children compared to their white counterparts, even in non-myopic children [20,21].Ma et al. [19] recruited a similar ethnic population that in to our study and found that the best cut-off SERE for predicting four-year incident myopia was SERE ≤0.75D in children enrolled in grades 1-3 (area under the curve = 0.84).However, they used a linear logistic regression model to determine the cut-off value for screening purposes, which differs from our nonlinear RCS model.Our result suggests that Asian children need a higher hyperopic "reserve" during emmetropisation to remain non-myopic later in life. It is unclear what caused the myopic shift in the study population over the past 16 years.Although myopia prevalence rates are high in East Asian countries, they also increased in other regions globally [15].This suggests that environmental factors may play a strong causal role in the development of myopia and that the impact is more pronounced in East Asian regions.Asian children typically experience heavier academic burdens at a younger age and have less outdoor activity time than their Western counterparts [22].Increasing outdoor time has shown promise in decreasing myopia incidence in primary school children in clinical studies as well as epidemiological studies [23][24][25][26][27][28].In contrast, coronavirus 2019 (COVID-19) lockdowns dramatically increased myopia incidence in children aged 6-8 years [29].Limited outdoor activity and unprecedentedly increased screen time during e-learning may have been attributed to the surge in myopia prevalence in these young cohorts during lockdown periods [30].To effectively mitigate the burden of future myopia on the individual and society, factors contributing to the earlier myopic shift in younger children aged 4-6 years need to be determined. The strength of this study lies in its long time span, which enabled us to observe the trend over 16 years.However, our findings should be interpreted with caution.First, the selection bias arising from hospital-based database might have led to overestimation of the prevalence of myopia because hyperopic and pre-myopic children without visual problems are less likely to visit the hospital.Thus, the absolute values of SERE and proportions of refractive states from a single centre might not be generalisable to all Chinese children.However, the relative change of these parameters over time may mirror the trend in the general population.Second, only refraction data, age, and gender were available for analysis; biometric data such as axial length, crystalline lens power, and keratometry, however, were not available within the electronic medical system.We were thus unable to explore the trend of ocular biometric change over time in addition to refractive change, but this was not our primary objective in the first place. Figure 2 . Figure 2. The change in mean SERE of children aged 4-18 years from 2005 to 2021. Figure 4 . Figure 4. 
Age of myopia onset over time from 2005 to 2021. Data points represent the mean age of myopia onset, with the red line representing the locally weighted regression smoothing (LOESS) curve and the shaded area indicating the 95% confidence intervals. Figure 5. Dose-response relationship between hyperopic reserve and future risk of myopia. SERE - spherical equivalent refractive error; HR - hazard ratio.
Complications of Non-Alcoholic Fatty Liver Disease in Extrahepatic Organs Non-alcoholic fatty liver disease (NAFLD) is now recognized as the most common chronic liver disease worldwide, along with the concurrent epidemics of metabolic syndrome and obesity. Patients with NAFLD have increased risks of end-stage liver disease, hepatocellular carcinoma, and liver-related mortality. However, the largest cause of death among patients with NAFLD is cardiovascular disease followed by extrahepatic malignancies, whereas liver-related mortality is only the third cause of death. Extrahepatic complications of NAFLD include chronic kidney disease, extrahepatic malignancies (such as colorectal cancer), psychological dysfunction, gastroesophageal reflux disease, obstructive sleep apnea syndrome, periodontitis, hypothyroidism, growth hormone deficiency, and polycystic ovarian syndrome. The objective of this narrative review was to summarize recent evidences about extrahepatic complications of NAFLD, with focus on the prevalent/incident risk of such diseases in patients with NAFLD. To date, an appropriate screening method for extrahepatic complications has not yet been determined. Collaborative care with respective experts seems to be necessary for patient management because extrahepatic complications can occur across multiple organs. Further studies are needed to reveal risk profiles at baseline and to determine an appropriate screening method for extrahepatic diseases. Introduction Over the last few decades, the incidences of non-alcoholic fatty liver disease (NAFLD) and non-alcoholic steatohepatitis (NASH) have been increasing worldwide, along with the concurrent epidemics of metabolic syndrome and obesity. A recent meta-analysis estimated that the overall global prevalence of NAFLD is very high at 25.2% [1]. In this analysis, the regional prevalence of NAFLD was reported to be highest in the Middle East (31.7%) and South Africa (30.4%). Thus, NAFLD has been recognized as the most common chronic liver disease worldwide. Compared with matched, controlled populations without NAFLD, patients with NAFLD have increased risks of end-stage liver disease, hepatocellular carcinoma (HCC) [2], and liver-related mortality [1]. Therefore, both hepatologists and patients must recognize hepatic conditions, especially those characterized by liver fibrosis [3,4]. However, hepatic involvement is just one component of the multiorgan manifestation of NAFLD. In fact, a recent multicenter cohort study revealed that the largest cause of death among patients with NAFLD is cardiovascular disease (CVD) followed by extrahepatic malignancies, whereas liver-related mortality is only the third cause of death [3]. NAFLD represents a liver component of metabolic syndrome and is associated with risk factors for metabolic syndrome, including obesity [5], diabetes mellitus [6], and dyslipidemia [7]. Recently, a growing body of evidence has been collected that supports the notion that NAFLD should be treated as an early mediator of systemic diseases and metabolic syndrome, as well as liver-specific diseases [8,9]. The objective of this narrative review was to summarize recent evidences about extrahepatic complications of NAFLD, with focus on the prevalent/incident risk of such diseases in patients with NAFLD. The association between NAFLD and CVD is not mentioned in this article because it is discussed in great detail in another article in this special issue. 
Chronic Kidney Disease Chronic kidney disease (CKD) is a worldwide public health problem, with possible adverse outcomes that include end-stage renal disease (ESRD), CVD, and premature death. CKD is defined by a sustained reduction in the glomerular filtration rate or evidence of structural or functional abnormalities of the kidneys on urinalysis, biopsy, or imaging [10]. The two major risk factors of CKD are hypertension and diabetes mellitus, which are also major risk factors for NAFLD. A recent meta-analysis demonstrated that patients with NAFLD had a significantly higher risk of incident CKD than those without NAFLD (random-effects hazard ratio (HR), 1.37; 95% confidence interval (CI), 1.20-1.53) [11]. In other studies, patients with NAFLD were similarly reported to have a higher prevalence of CKD than patients without NAFLD (Table 1) [12][13][14][15][16][17][18][19]. Importantly, the majority of these studies demonstrated that NAFLD was independently associated with CKD even after adjustments for risk factors including age, sex, body mass index (BMI), hypertension, diabetes mellitus, smoking, and hyperlipidemia. The possible pathogeneses linking NAFLD and CKD include the upregulation of the renin-angiotensin system and the impairment of antioxidant defense. An excess dietary fructose may also contribute to NAFLD and CKD. Moreover, CKD may mutually aggravate NAFLD and associated metabolic disturbances through altered intestinal barrier function and microbiota composition, alterations in glucocorticoid metabolism, and the accumulation of uremic toxic metabolites [20]. ALT > 40 IU/L group had a significantly higher creatinine level than ALT < 40 IU/L group (0.9 mg/dL vs. 0.8 mg/dL). Colorectal Cancer Colorectal cancer is the third most commonly diagnosed malignancy and the fourth leading cause of cancer-related deaths in the world, accounting for about 1.4 million new cases and almost 700,000 deaths in 2012 [21]. A recent meta-analysis of observational studies suggested that NAFLD was independently associated with a moderately increased prevalence and incidence of colorectal adenomas and cancer (Odds ratio (OR), 1.28 for prevalent adenomas and 1.56 for prevalent cancer; HR, 1.42 for incident adenomas and 3.08 for cancer) [22]. In addition, several studies have reported the prevalent risk of colorectal cancer in patients with NAFLD (Table 2) [23][24][25][26][27][28][29]. However, these retrospective studies were mainly reported from Asia. Therefore, future studies are needed to confirm the true risks of colorectal cancer among various NAFLD populations. The presence of metabolic syndrome, especially diabetes mellitus and obesity, is a well-known risk factor for colorectal cancer [30,31]. However, whether NAFLD is associated with an increased risk of colorectal cancer simply as a consequence of the shared metabolic risk factors, or whether NAFLD itself may contribute to the development of colorectal cancer, is uncertain [22]. Regarding the former possibility, hyperinsulinemia induced by insulin resistance promotes carcinogenesis by stimulating the proliferation pathway through its effect on insulin receptors on cancer cells. In addition, hyperinsulinemia increases the expression of insulin-like growth factor (IGF)-1, which has mitogenic and anti-apoptotic activities that are more potent than those of insulin and can act as a stimulus for the growth of preneoplastic and neoplastic cells [30]. 
Regarding the latter possibility, patients with NAFLD reportedly have reduced serum levels of adiponectin, which has anti-carcinogenic effects. This mechanism is due to the ability of adiponectin to stop the growth of colon cancer cells through AMPc-activated protein kinase (AMPK) and to induce a caspase-dependent pathway resulting in endothelial cell apoptosis [32]. Other Malignancies Although the association with colorectal cancer and NAFLD is most frequently reported, several studies have reported an increased risk of extrahepatic malignancies in other organs in patients with NAFLD. A recent study from Korea reported that female patients with NAFLD had an increased risk of developing breast cancer (HR, 1.92; 95% CI, 1.15-3.20) [33]. Furthermore, a few studies have suggested increased risks of developing gastric cancer [34,35], pancreatic cancer [35,36], prostate cancer [37], and esophageal cancer [33,35] in patients with NAFLD. Psychological Dysfunction Major depressive disorder (MDD) is an important public health problem. Among those affected, 28% experience a moderate degree of functional impairment, whereas 59% experience severe reductions in their normal functional ability [38]. MDD is often comorbid with other chronic diseases, such as cardiovascular disease, arthritis, asthma, and diabetes mellitus, and the presence of MDD reportedly worsens the health outcomes of these associated conditions [39]. Similarly, an association between chronic liver disease and MDD has also been demonstrated. Patients with NAFLD reportedly have a prevalence of MDD of 27.2%, which is higher than that of the general population [40]. Our previous study showed that NAFLD patients with comorbid MDD have severe histological steatosis and a higher NAFLD activity score, as well as significantly higher levels of serum aminotransferase, γ-glutamyl transpeptidase, and ferritin than age-and sex-matched NAFLD patients without MDD. They also demonstrated that NAFLD patients with MDD had a poor response to the standard care for NAFLD, consisting mainly of lifestyle modifications [41]. The pathogenesis linking NAFLD and MDD remains uncertain. However, a recent study investigated possible changes in brain tissue volumes in patients with NAFLD. They reported that the brain volumes of white and gray matter were significantly reduced in patients with NAFLD, compared with control subjects. Accordingly, this reduction in brain volume might be related to a higher risk of depression in patients with NAFLD [42]. In addition, NAFLD reportedly correlated with other types of psychological dysfunction, such as cognitive impairment and Alzheimer's disease. Cerebrovascular alterations, neuroinflammation, and brain insulin resistance are thought to be key factors in the pathogenesis of these diseases [43]. Gastroesophageal Reflux Disease Gastroesophageal reflux disease (GERD) and NAFLD are commonly associated with metabolic syndrome, especially visceral obesity. Visceral obesity increases the intragastric pressure because of the accumulation of adipose tissue in the whole abdominal cavity, which may induce an abnormal gastroesophageal reflux leading to GERD symptoms [44]. Patients with NAFLD reportedly have a high prevalence of symptoms of GERD [45,46]. Several studies have shown that GERD is associated with sleep problems [47]. Symptoms of GERD (such as heartburn or regurgitation) during the night are thought to cause multiple short periods of awakening, leading to sleep fragmentation. Taketani et al. 
reported that nearly 30% of Japanese patients with biopsy-proven NAFLD had insomnia, which was independently associated with GERD symptoms. They also demonstrated that treatment with a proton pump inhibitor could relieve both insomnia and the GERD symptoms [48]. Changes in the secretion of hormones (such as cortisol, leptin, and ghrelin) and increased insulin resistance because of a short sleep duration are thought to increase the risks of obesity and diabetes [49,50]. Therefore, GERD and insomnia may lead to the progression and worsening of NAFLD. Obstructive Sleep Apnea Syndrome Patients with obstructive sleep apnea syndrome (OSAS) have repetitive episodes of shallow or paused breathing during sleep that cause a reduction in blood oxygen saturation. This chronic intermittent hypoxia (CIH) induces increased oxidative stress, the generation of reactive oxygen species (ROS), and the release of inflammatory cytokines, resulting in systemic inflammation that drives the exacerbation of NAFLD and the progression to liver fibrosis [51,52]. Musso et al. conducted a meta-analysis of eighteen cross-sectional studies and reported that OSAS is associated with an increased risk (independent of age, sex, and BMI) of NAFLD (OR, 2.99), NASH (OR, 2.37), and advanced fibrosis (OR, 2.30) [53]. Some non-randomized observational studies have reported a beneficial effect of continuous positive airway pressure (CPAP) on surrogate markers of NAFLD [53,54]. However, the currently available evidence from randomized controlled trials [55,56] does not suggest that the treatment of OSAS with CPAP can reverse the exacerbation of NAFLD. Periodontitis Individuals with chronic periodontitis reportedly have a significantly increased risk of developing CVD, including atherosclerosis, myocardial infarction, and stroke [57]. Furthermore, a two-way relationship exists between diabetes and periodontitis-diabetes increases the risk of periodontitis, while periodontal inflammation negatively affects glycemic control [58]. Similarly, an association between periodontitis and NAFLD has also been reported in various studies [59]. Yoneda et al. investigated the detection frequency of Porphyromonas gingivalis, which is well known as a major causative agent of periodontitis, in 150 biopsy-proven NAFLD patients and 60 healthy controls [60]. They found that the frequencies of P. gingivalis detection in patients with biopsy-proven NAFLD (46.7%) and NASH (52.0%) were significantly higher than that in non-NAFLD control subjects (21.7%). Moreover, they also demonstrated that non-surgical periodontal treatments in NAFLD patients ameliorated the serum levels of Aspartate aminotransferase (AST) and Alanine aminotransferase (ALT). Although further large-scale clinical trials are needed, periodontal treatments may be useful supportive measures in the management of patients with NAFLD. Hypothyroidism Pagadala et al. reported that the prevalence of hypothyroidism was significantly higher among patients with biopsy-proven NAFLD than among age-, sex-, race-, and BMI-matched control subjects (21% vs. 9.5%) [61]. Moreover, they found that NASH was associated with hypothyroidism, in a manner that was independent of diabetes mellitus, dyslipidemia, hypertension, and age. In addition, Chung et al. investigated the prevalence of NAFLD in 2324 cases with hypothyroidism, compared with age-and sex-matched controls [62]. 
They found that the prevalence of NAFLD was significantly higher not only in patients with overt hypothyroidism, but also in patients with subclinical hypothyroidism (even in the upper normal range of thyroid-stimulating hormone (TSH) levels), than in subjects with euthyroidism, independent of known metabolic risk factors [62]. Adult Growth Hormone Deficiency Growth hormone (GH) profoundly reduces visceral fat, which plays an important role in the development of NAFLD. Moreover, GH directly reduces lipogenesis in hepatocytes [63]. Adult growth hormone deficiency (AGHD) is characterized by metabolic abnormalities associated with visceral obesity, impaired quality of life, and increased mortality. Nishizawa et al. reported that the prevalence of NAFLD in patients with AGHD was significantly higher than in age-, sex-, and BMI-matched healthy controls (77% vs. 12%, p < 0.001) [64]. They also demonstrated the effectiveness of GH replacement therapy for NAFLD, although the sample size was too small to assess the true effectiveness. Polycystic Ovarian Syndrome Polycystic ovarian syndrome (PCOS), also known as hyperandrogenic anovulation, is one of the most common endocrine disorders in women of reproductive age. Several studies have demonstrated that the prevalence of NAFLD was significantly higher among patients with PCOS, and that hyperandrogenism was an independent risk factor for NAFLD [65,66]. Although PCOS and NAFLD share similar metabolic comorbidities such as obesity, diabetes mellitus, dyslipidemia, and metabolic syndrome, androgen excess itself is thought to increase insulin resistance [67]. The prevalence of PCOS in patients with NAFLD is still unclear. However, a cross-sectional study from Australia reported that the prevalence of PCOS was very high at 71% (10/14) among female NAFLD patients of reproductive age (20-45 years) [68]. Discussion Although considerable evidence and suggestions have been collected to reveal associations between NAFLD and extrahepatic complications, most of these reports were based on cross-sectional or observational studies with short follow-up periods. An appropriate screening method for extrahepatic complications has not yet been clearly described in any guidelines or guidances, such as the European guidelines from the European Association for the Study of the Liver (EASL) [69], guidance from the American Association for the Study of Liver Disease (AASLD) [70], or guidelines from the Japanese Society of Gastroenterology (JSG) [71]. However, intensive surveillance and early intervention for these extrahepatic complications might benefit the patients with NAFLD. For extrahepatic malignancies, the NAFLD patients who have never undergone colonoscopy should undergo total colonoscopy at least once. Further studies are needed to reveal the risk profiles at baseline and to determine appropriate screening methods for extrahepatic diseases. There are some limitations in this narrative review. First, we did not conduct a statistical analysis about bias risk in our selection of referenced literatures. Second, the association between NAFLD and CVD was not mentioned in this article because it was discussed in great detail in another article in this special issue. Recently, clinical trials examining new therapeutic drugs for NAFLD/NASH that act via various mechanisms are being performed in several countries [72]. The influence of these drugs on the risks and outcomes of extrahepatic complications should be considered in these clinical trials. 
To date, the influence of lifestyle modifications on extrahepatic complications is unclear; this is therefore a candidate topic for future studies. Since extrahepatic complications occur across multiple organ systems, collaborative care with the respective experts is needed for the management of patients with NAFLD. In conclusion, compared with healthy controls, patients with NAFLD have a higher prevalent/incident risk of extrahepatic complications in multiple organs. Not only hepatologists but also patients with NAFLD should be aware of these increased risks of extrahepatic disease.
Tracing Hepatitis E Virus in Pigs From Birth to Slaughter Pigs are considered the main reservoir of genotypes 3 and 4 of the human pathogen hepatitis E virus (HEV). These viruses are prevalent at a high level in swine herds globally, meaning that consumers may be exposed to HEV from the food chain if the virus is present in pigs at slaughter. The aim of this study was to determine the HEV infection dynamics from birth to slaughter using 104 pigs from 11 sows in a single production system. Serum was collected from sows at 2 weeks prior to farrowing, in addition feces and serum samples were collected from the pigs every second week, from week 1 to week 17. Feces and selected organs were also sampled from 10 pigs following slaughter at week 20. All the samples were tested for HEV RNA by real-time RT-PCR and the serum samples were tested for HEV-specific antibodies using a commercial ELISA. Maternal antibodies (MAbs) were only present in pigs from sows with high levels of antibodies and all pigs, except one, seroconverted to HEV during weeks 13–17. In total, 65.5% of the pigs tested positive for HEV RNA at least once during the study (during weeks 13, 15, and/or 17) and significantly fewer pigs with a high level of MAbs became shedders. In contrast, the level of MAbs had no impact on the time of onset and duration of virus shedding. HEV was detected in feces and organs, but not in muscle, in 3 out of 10 pigs at slaughter, indicating that detection of HEV in feces is indicative of an HEV positivity in organs. In conclusion, a high proportion of pigs in a HEV positive herd were infected and shed virus during the finisher stage and some of the pigs also contained HEV RNA in feces and organs at slaughter. The presence of MAbs reduced the prevalence of HEV shedding animals, therefore, sow vaccination may be an option to decrease the prevalence of HEV positive animals at slaughter. INTRODUCTION Hepatitis E virus (HEV) can cause severe infections in humans. Four genotypes of HEV are known; genotypes 1 and 2 are exclusively found in humans whereas genotypes 3 and 4 have been found in humans and pigs. Genotype 3 is found worldwide in pigs and in humans, while genotype 4 has mainly been found in both pigs and humans in Asia, and only more recently also in Europe (1). In several European countries, there has been a dramatic increase in human cases of HEV infection caused by Genotype 3 strains. These viruses have a high sequence identity to contemporary strains circulating in pigs, indicating that swineto-human transmission of HEV is a common event (2). Indeed, high prevalence of anti-HEV antibodies (Abs) in swine herds has been reported from several countries. Detection of the high HEV seroprevalence in older samples indicated that HEV has been present in pigs for decades. A number of studies have shown that consumers are indeed exposed to HEV since porcine livers, bought in supermarkets, have been found to contain HEV-specific RNA (3)(4)(5). Furthermore, HEV, isolated from commercial livers, has been shown to be infectious for pigs in an experimental trial (3). In another study, a pig liver sausage, Figatellu, which is traditionally eaten raw, was found to be the cause of hepatitis in a significant number of people who consumed it (6). In addition, 2-15% of pigs have been shown to be infected with HEV at slaughter (7). Previous longitudinal studies, performed in pigs, revealed that most of the pigs became infected at 8-15 weeks of age but some of the pigs were still positive at slaughter (8)(9)(10). 
Maternal antibodies (MAbs) against HEV have been shown to be successfully transferred from HEV-Ab positive sows to offspring. However, in a previous study comparing a few animals in a single herd, the level of MAbs had no impact on the infection dynamic of HEV in the offspring. Thus, the protective role of MAbs in pigs is presently unclear (11). The proven zoonotic potential of HEV in pigs combined with the relatively high prevalence of HEV positive pigs in Denmark (more than 50% of the sow herds are HEV positive) (12) may have a negative impact on the safety of Danish pork products if the virus is present in Danish pigs at slaughter. Thus, it is essential to obtain a better knowledge of HEV infection dynamics in typical pig production systems. The aim of the present study was to study the HEV infection dynamics from birth to slaughter, with special focus on the impact of maternal antibody levels and the infectious status of individual pigs at slaughter. Furthermore, the distribution of HEV in different tissues of naturally infected pigs that shed virus 3 weeks prior to slaughter was examined. Field Study Design and the Study Herd A longitudinal study was performed in a single farrow-to finisher herd. More than 100 crossbred pigs were sampled every second week from birth to slaughter. The pigs were kept at the breeding unit until they reached ∼30 kg after which they were moved to the finisher site situated ∼16 km from the breeding unit. Before initiating the study, the presence of HEV in the first parity sows (gilts) at the nursery site was determined by testing feces from 10 sows using an HEV specific real time RT-PCR assay (data not shown). Selection of Sows Two weeks prior to farrowing, serum samples were collected from 58 sows and tested for HEV Abs. Based on the measured, normalized levels of HEV Abs, the sows were divided into three groups; low (1≤OD<2), intermediate (2≤OD<3) and high (OD≥ 3) levels of HEV-specific Abs. The group of low level HEV Abs comprised 23 sows with a mean normalized OD of 1.38 (SD = 0.27). The groups with intermediate and high levels of HEV Abs each included 17 sows, with mean normalized OD values of 2.44 (SD = 0.24) and 4.50 (SD = 1.47), respectively. The farmer randomly selected four sows from each group to be included in the study. Just after farrowing, all piglets from the 12 sows were ear tagged with a unique number. If more than half of the piglets within a litter died, the sow and her piglets were excluded from the study. Sampling of Pigs One week after farrowing, blood sampling of all piglets was performed by a local pig health technician. Thereafter, both rectal swabs and blood samples were collected every second week until week 17 from all piglets. The pigs were restrained either manually or with a snout break and 9 mL of blood was collected by puncture of the jugular vein. The rectal swabs were collected, using a cotton swab, at the rectal surface ∼2-3 cm from the anus and then placed into a sterile container with 2 mL PBS. The samples were labeled and kept cool during transportation to the laboratory. The blood samples were stored at 4 • C until further processing on the same day. The serum was extracted from whole blood by centrifugation at 3,000 RPM for 10 min at 5 • C. The serum fractions were then transferred into Nunc tubes and stored at −80 • C until RNA extraction. The tubes containing the cotton swabs in 2 mL PBS were shaken at 300 rpm for 1 h before the liquid was poured into 2 mL Eppendorf tubes and stored at −80 • C until analysis. 
Individual pigs were excluded from the study if more than two sampling dates were missed. Selection of Pigs for Tissue Sampling Ten of the 26 pigs where shedding of HEV (as detected by the presence of HEV RNA) occurred ∼3 weeks prior to slaughter (week 17), were randomly selected for necropsy at a laboratory facility situated 100 km from the herd. At the age of 20 weeks, the pigs were transported alive to the laboratory on a vehicle with no other pigs present. On arrival, the pigs were killed by intra-cardiac injection of pentobarbiturate (50 mg/kg) and exsanguinated by cutting the arteria axillaris. At necropsy, samples of the tonsils, lungs, kidneys, spinal cord, gall bladder (intact), hepatic lymph nodes, colon with contents, small intestine with contents, mesenteric lymph nodes, heart, and the entire liver were collected. Furthermore, muscle samples (3 × 3 cm) were collected from the shoulder, neck, pork loin, tenderloin, ham, and diaphragm. Intestinal contents were collected from the colon and the small intestine. The tissue was then rinsed in cold PBS. Bile was extracted from the gall bladder with a syringe and a small piece of tissue was excised and rinsed in PBS to remove the remaining bile. All samples were transferred to labeled tubes and stored at −80 • C until analysis. RNA Extraction and PCR Analysis Automated extraction of RNA from the rectal swab supernatant was performed on the QIAsymphony SP system (QIAGEN) using the DSP virus/pathogen mini kit version 1 (QIAGEN, Cat no. 937036). The protocol used was complex 200 V5 DSP with an elution volume of 110 µL. The HEV RNA was detected by real time RT-PCR essentially as described by Breum et al. (12) except that the concentration of the primers was changed to 500 nM for HEV2-P and HEV2-R and 100 nM for HEV2-F. Furthermore, the time settings used for the PCR cycling were changed to 15 s for denaturation and annealing and 20 s for elongation. Serological Analysis All serum samples were tested for the presence of anti-HEV IgG using a commercial kit (PrioCHECK R HEV Ab porcine kit; Prionics). As recommended by the vendor, only the samples having an OD value that exceeded the OD of the cut-off control (provided in the kit) multiplied by 1.2 were regarded as positive. The OD values were normalized by dividing the OD of the sample with the OD of the cut-off control multiplied by 1.2, which eliminated plate-to-plate variations. According to the information provided by the vendor, the assay has a sensitivity of 91% and a specificity of 94%. Statistical Analysis The statistical analyses were performed using SAS 9.1. For the determination of the overall difference between the three groups, a mixed linear model was used. This method allowed for missing data points from individual pigs. To evaluate the differences on a weekly basis, the ANOVA was performed. Finally, to compare groups for the difference in the number of shedders, the χ 2test was applied. For all analyses the significance level was set at P = 0.05. RESULTS Initially, a total of 12 sows and 135 piglets were included in the study, but 31 of the piglets, including one entire litter, either died or were excluded due to missing sampling points. Thus, data from a total of 104 piglets from eleven sows were included in the analysis. Serology Based on the levels of HEV Abs prior to farrowing, the 11 sows were allocated to one of three groups with low, intermediate or high levels of HEV Ab, designated group 1, 2, and 3, respectively. 
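The cut-off rule and OD normalisation described above are simple enough to illustrate directly. The Python sketch below is a minimal, hypothetical implementation of that arithmetic (a sample is positive when its OD exceeds 1.2 × the cut-off control OD, and the normalised OD is the sample OD divided by 1.2 × the cut-off OD); the numeric ODs are invented examples, not values from the kit or the study.

```python
# Sketch of the ELISA interpretation described above; all OD values are invented.
def normalize_od(sample_od: float, cutoff_od: float) -> float:
    """Normalized OD = sample OD / (cut-off control OD * 1.2)."""
    return sample_od / (cutoff_od * 1.2)

def is_seropositive(sample_od: float, cutoff_od: float) -> bool:
    """A sample is positive when its OD exceeds 1.2 x the cut-off control OD."""
    return normalize_od(sample_od, cutoff_od) > 1.0

print(normalize_od(1.80, 0.40))     # 3.75 -> would fall in the 'high' sow group (normalized OD >= 3)
print(is_seropositive(0.45, 0.40))  # False: below 1.2 x cut-off
```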
Normalized OD values, indicative of the HEV Ab levels in serum, for the included sows and the number of piglets in each litter in each group are listed in Table 1. All the pigs, except for one, seroconverted during the study (Figure 1). The pigs in groups 1 and 2 showed similar anti-HEV Ab profiles in serum with OD values below the cut off until seroconversion that occurred between weeks 11 and 13 followed by a steady further increase in HEV IgG levels which lasted until the end of the observation period at week 17 (Figure 1). Group 3 showed a different profile with positive HEV IgG levels from birth until week 7 and then these group 3 pigs, like the pigs in groups 1 and 2, seroconverted between week 11 and 13 followed by a steady increase in HEV IgG levels until week 17 (Figure 1). No differences were seen between the pigs in groups 1 and 2 so these groups were combined in the statistical analyses. There was a clear difference in the level of HEV IgG between the pigs in group 3 compared to the pigs in group 1 and 2 from week 1 to 11, but not at week 13 to 17 (Figure 1). Real Time RT-PCR Of the 104 ear marked pigs included in the analysis, 66 pigs (63.5%) tested positive for HEV RNA in feces in at least one sample during the study period ( Table 2). There was a significant difference in the number of viral shedders ranging from ∼73% in groups 1 and 2 to 45% for group 3 (P = 0.032) ( Table 2). However, there was no significant difference in the time when the first detection of HEV shedding was observed between the groups (P = 0.876). None of the pigs tested positive for HEV prior to week 13 and only 9 pigs became virus positive between weeks 11 and 13 (Figure 2). The majority of the pigs (n = 51) tested positive for HEV for the first time at week 15, whereas six pigs tested positive for the first time at week 17. Of the 104 pigs, 23 (22%) tested positive for HEV in feces at two samplings and two pigs (2%) were positive at three samplings (weeks 13, 15, and 17) (Figure 2). Analysis of Samples Collected From Selected Pigs at Slaughter To analyze if the organs and tissues contained HEV at slaughter, 10 of the 26 pigs that tested positive for HEV at week 17, ∼3 weeks prior to slaughter, were randomly selected for further analysis. The 10 pigs included three, five and two pigs from groups 1, 2, and 3, respectively. The HEV IgG profiles for the 10 individual pigs from birth until slaughter are shown in Figure 3A. Three of the pigs (1-1, 2-1, and 3-1, one from each group denoted by the first number in the ID) were seronegative at week 17, but both pigs 2-1 and 3-1 had tested positive for HEV before week 15 (Figure 3B). At slaughter (week 20), three of the 10 pigs, one from each group, were still positive for HEV RNA in feces at a level similar to that observed at week 17 ( Figure 3B). There was no significant difference in the HEV shedding pattern before week 17 for the three pigs that were positive for HEV at week 20 compared to the other seven pigs that tested negative for HEV at week 20 (P = 0.633). Interestingly, only the three pigs that tested positive for HEV in feces at week 20 were positive for HEV RNA in organs ( Table 3). Only the internal organs tested positive for HEV RNA while none of the muscle samples tested positive. The liver associated samples [liver, bile, gall bladder, and hepatic lymph nodes (HLN)] were strongly positive for HEV RNA (low Ct) whereas lower levels of HEV RNA were detected in extra-hepatic organs such as the lungs and tonsils. 
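To illustrate the group comparison reported above (significantly fewer HEV shedders among pigs born to sows with high antibody levels), the sketch below applies a χ² test to an approximate 2 × 2 table. The counts are back-calculated from the percentages given in the text purely for illustration, so they (and the resulting P value) will not exactly match the study's P = 0.032.

```python
# Sketch of the chi-squared comparison of shedder counts between MAb groups.
# The counts are rough back-calculations from the reported percentages
# (~73% shedders in groups 1+2 vs ~45% in group 3), not the actual data.
from scipy.stats import chi2_contingency

table = [
    [52, 19],  # groups 1 and 2 combined: shedders, non-shedders (approximate)
    [15, 18],  # group 3: shedders, non-shedders (approximate)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.3f}")  # P < 0.05 indicates a difference between groups
```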
DISCUSSION
The offspring from 11 sows with different levels of HEV-specific antibodies were included in the present study. To investigate the efficacy of passive transfer of maternal antibodies on the HEV infection dynamics in the offspring, the 104 piglets were allocated to one of three groups based on the level of anti-HEV antibodies measured in the sows 2 weeks prior to farrowing. The MAbs were detected only in piglets from sows with high levels of anti-HEV Abs prior to farrowing, revealing a clear correlation between the levels of anti-HEV Abs in the sows and the maternal anti-HEV Abs in the piglets. This finding is in accordance with previous studies, which also showed that a high level of antibody is required for effective transfer from the sow (8)(9)(10). The HEV MAb levels of piglets born to sows with a high level of HEV IgG (group 3) differed significantly from those of the other two groups until week 13. Previous studies have confirmed that MAbs against HEV decline at around weeks 9-13 (8,9). HEV RNA was detected in feces of pigs from week 13 and onwards. Thus, no viral shedding was detected while the pigs were housed in the sow herd, because the pigs were moved to the finisher site at 30 kg (weeks 9-12). Based on the facts that anti-HEV Abs were detected in the sows prior to farrowing and that HEV RNA was detected in the gilts in the herd (data not shown), HEV was indeed present in the sow herd of this study. However, it is not clear if the piglets were infected by HEV just prior to being moved from the breeding unit or if the pigs were infected after arrival at the finisher site. Although there was no effect of the level of HEV MAbs on the onset or duration of viral shedding, significantly fewer pigs in the group with initially higher levels of MAbs tested positive for HEV during the study. These findings indicated that the pigs were exposed to HEV relatively late in the nursery period, i.e., after the MAbs had declined in most pigs. A previous field study failed to show any effect of the level of MAbs on the risk of becoming HEV shedders; however, that study was performed on very few animals (2 litters) and the pigs were infected very early (weeks 3-4), indicating a high viral load in the environment (11). Another field study detected HEV RNA in feces of pigs starting in weeks 12-15, ∼3-5 weeks after the anti-HEV MAbs had waned, which is more in line with the findings in the present study (9). Seroconversion against HEV, as measured using a commercial HEV ELISA, was observed in the present study in all pigs, except one, starting between week 11 and week 13, which is in accordance with the development of IgG in previous studies (8)(9)(10). Thus, the pigs that were HEV RNA negative in feces at all samplings also seroconverted, indicating that they were indeed infected or at least exposed to HEV, either for a short period of time or at levels below the detection limit of the real-time RT-PCR assay. However, these animals may have been positive for HEV in other tissues or in serum. Seroconversion coincided with the first detection of viral RNA for most of the pigs. This was unexpected, since IgG Abs have previously been shown to develop 2-3 weeks after onset of viremia (9,11).
Detection of HEV in serum, in the present study, was attempted on the same sampling days as for the feces samples, but was unsuccessful even though different methods for RNA extraction were tested and the assay has previously performed very well in detecting HEV RNA in serum samples from the field (12) and in a ring trial (unpublished results). The level of HEV in serum has, however, previously been shown to be significantly lower than in feces, and the viremia also seems to be of shorter duration than the fecal shedding (11). Furthermore, in an experimental trial in pigs, using intravenous inoculation of homogenates of livers with different levels of HEV, it was shown that the duration and levels of viremia were strongly correlated to the level of HEV present in the inoculum (3). Thus, a likely explanation for the finding in the present study, i.e., that seroconversion coincided with positive fecal samples, could be that fecal virus excretion starts days or even weeks after exposure. Another contributing factor to the early detection of anti-HEV Abs could be that the anti-porcine IgG conjugate included in the ELISA cross-reacted with IgM Abs, which normally develop earlier than IgG (8,10). The HEV RNA was detected in internal organ samples (intestine, lymphatic tissue, bile, and liver), but not in muscle, which is in accordance with previous findings (7,11,13,14). Interestingly, only the pigs that tested positive in fecal samples at slaughter were also positive in organs. This indicated that testing of feces from pigs prior to slaughter could be used as an indicator of HEV presence in internal organs. However, although all feces-positive pigs were found to harbor HEV in tissue in one previous study (14), the predictive value of a negative feces test may be limited, since HEV has been detected previously in organs from pigs that tested negative in feces (8,14). In conclusion, a high proportion of the pigs in a single HEV-positive herd were infected and tested positive for HEV during the finisher stage, and a fraction of these pigs also had HEV RNA in feces and organs at slaughter. High levels of MAbs reduced the prevalence of HEV-positive animals and, therefore, sow vaccination may be an option to decrease the prevalence of HEV-positive animals at slaughter; however, more studies are required to investigate this.

DATA AVAILABILITY
The datasets generated for this study are available on request to the corresponding author.

ETHICS STATEMENT
The study included blood sampling of animals under field conditions for diagnostic purposes and therefore did not require approval from the ethics committee.

AUTHOR CONTRIBUTIONS
JK participated in the design of the study, performed the laboratory analyses on pig samples, participated in the assessment and statistical analysis, and drafted the manuscript. LL participated in the design of the study and in the assessment and statistical analysis and commented on and made adjustments to the manuscript. SB participated in the design of the study and in the establishment of the analytical assays, participated in the assessment and statistical analysis, and commented on and made adjustments to the manuscript.
Silicone Rubber Composites Reinforced by Carbon Nanofillers and Their Hybrids for Various Applications: A Review

Without fillers, rubber types such as silicone rubber exhibit poor mechanical, thermal, and electrical properties. Carbon black (CB) is traditionally used as a filler in the rubber matrix to improve its properties, but a high content (nearly 60 per hundred parts of rubber (phr)) is required. However, this high content of CB often alters the viscoelastic properties of the rubber composite. Thus, nowadays, nanofillers such as graphene (GE) and carbon nanotubes (CNTs) are used, which provide significant improvements to the properties of composites at as low as 2–3 phr. Nanofillers are classified as those fillers consisting of at least one dimension below 100 nanometers (nm). In the present review paper, nanofillers based on carbon nanomaterials such as GE, CNT, and CB are explored in terms of how they improve the properties of rubber composites. These nanofillers can significantly improve the properties of silicone rubber (SR) nanocomposites and have been useful for a wide range of applications, such as strain sensing. Therefore, carbon-nanofiller-reinforced SRs are reviewed here, along with advancements in this research area. The microstructures, defect densities, and crystal structures of different carbon nanofillers for SR nanocomposites are characterized, and their processing and dispersion are described. The dispersion of the rubber composites was reported through atomic force microscopy (AFM), transmission electron microscopy (TEM), and scanning electron microscopy (SEM). The effect of these nanofillers on the mechanical (compressive modulus, tensile strength, fracture strain, Young's modulus, glass transition), thermal (thermal conductivity), and electrical properties (electrical conductivity) of SR nanocomposites is also discussed. Finally, the application of the improved SR nanocomposites as strain sensors according to their filler structure and concentration is discussed. This detailed review clearly shows the dependency of SR nanocomposite properties on the characteristics of the carbon nanofillers.

Introduction
Next-generation carbon-based nanofillers have gained great importance owing to their excellent properties that are suited to a wide range of applications, such as strain sensing. Yang et al. [1] showed the improved strain sensing behavior of composites based on graphene (GE)-filled silicone rubber (SR). Moreover, Kumar et al. [2] showed that the use of carbon nanotubes (CNTs) as a nanofiller in an SR matrix leads to improved mechanical and electrical properties. Furthermore, Kumar et al. [3] demonstrated a piezoresistive strain sensor based on CNTs and carbon black (CB) in an SR matrix, and the improved gauge factor and durability of the composites were studied. Lastly, Song et al. [4] demonstrated the strain sensing behavior of a CNT/CB hybrid in an SR matrix, and the high strain sensitivity of the composites was demonstrated. In recent years, strain sensors have been extensively investigated due to their promising applications in health monitoring [5], flexible electronic devices [6], or detecting locomotive motions [7]. Recently, flexible strain sensors based on polymer or rubber composites have been favorable for the above applications.
These strain sensors based on polymer composites offer various advantages, such as high flexibility [8], robust stretchability [9], high gauge factor [3], and high sensitivity [10]. Promising strain sensors probably exhibit a high sensitivity to compressive or tensile strain originating from various mechanical motions [7]. Polymer composites are based on various types of polymers, which are typically elastomers, such as SR, while nanofillers, such as GE [10] or their hybrid [11] and CNTs [12], are added to improve the conductivity of the composites. In particular, much attention has been paid to carbon nanotubes (CNTs), namely by Zhou et al. [13], Earp et al. [14], Wei et al. [15], and Song et al. [16]. All these authors demonstrated the successful use of CNTs as a nanofiller, and improved properties were studied. Similarly, the use of GE as a nanofiller was shown by Ren et al. [17], Liao et al. [18], Zhang et al. [19], and Wei et al. [20], and improved properties were reported that are amenable to the development of next-generation polymer composite materials. The small particle size [21], high aspect ratio [22], and optimum surface area [23] of these nanofillers allow for the achievement of good mechanical and electrical properties at low filler content, as demonstrated by Khanna et al. [24], Wang et al. [25], Wu et al. [26], and Wei et al. [27]. These authors reported CNTs as a single or hybrid filler in various matrices, and improved properties were reported. Thus, to produce innovative material systems with excellent properties, these nanofillers were used as reinforcing agents in the rubber matrix. Graphene has emerged as the mother of all carbon allotropes [28]. When stacked, few-layer or multilayer graphene (MLG) is formed [29]; when randomly stacked, different grades of carbon black (CB) are formed [30]; and when wrapped, single- or multilayer carbon nanotubes result (Scheme 1). However, the nanofillers tend to aggregate at high contents owing to intermolecular Van der Waals interactions [31] among the single or hybrid filler particles [32], resulting in poorer properties [33]. Thus, the nanofiller must be optimized in order to obtain the best mechanical and electrical properties.
• The Shore A hardness of virgin SR and, in some cases, filled SR is below 65. Therefore, it can be used in various soft applications, such as actuators or strain sensors [55].
• Virgin SR is easy to process, soft, has good flexibility and good tensile strength, and is easy to vulcanize. Therefore, it is promising to be used as a strain sensor [56].
• SR has high thermal stability, good aging properties, and low environmental degradability [57].
• Virgin SR is an insulator. Therefore, although it has advantages for use in electric cables, it cannot be used as a strain sensor by itself. As such, conducting nanofillers are added to make the rubber-matrix-based composite electrically conductive and thus useful for strain sensors [58,59].
• Virgin SR is ductile. Therefore, it is vulcanized to be useful for various industrial applications, such as strain sensors. However, the type of vulcanization, such as room-temperature-vulcanized silicone rubber [36] and high-temperature-vulcanized silicone rubber [36], not only affects the mechanical properties, such as the mechanical stiffness of the rubber composites, but also affects the strain sensing properties, such as the gauge factor, stress relaxation, and durability [3].
• The main disadvantage of using SR is its poor fracture strain.
Most often, the fracture strain of silicone rubber is up to ~300%, which is low compared to that of natural rubber [60]. Another disadvantage of using silicone rubber is the dependency of the mechanical properties of silicone rubber on the type of vulcanization [36]. Various advantages of using silicone rubber are described in the above section.

Microstructure, Defect Density, and Crystalline State of the Carbon-Based Filler Particles
CB is roughly oval and predominantly consists of spherical particles with different diameters and particle sizes, as shown in Figure 1a. The spherical CB particles are distributed unevenly, and their aggregation has been observed [61]. The spherical particles become larger as the number of particles in the aggregates increases [62]. CB exhibits structural lengths varying from a few angstroms to micrometers [3]. CB is a traditional filler material used in a wide range of polymer matrices, including SR. CB optimally reinforces the SR matrix, but a high amount is required, which leads to undesirable composite properties such as high hardness, processing difficulty, and high viscosity [63]. Therefore, nano carbon black (nano-CB) has recently been used by various researchers to obtain favorable properties at low loadings [2].

CNTs can be formed by wrapping graphene or multilayer graphene to form single-walled CNTs [64] or multi-walled CNTs (MWCNTs) [65], respectively. CNTs typically have a diameter below 20 nm and lengths ranging from a few nanometers to micrometers; this yields a high aspect ratio of approximately 65 [2,3]. These features make CNTs ideal nanofillers for reinforcing the SR matrix, and the properties improve exponentially as the aspect ratio is increased [66]. The tubes are highly entangled and randomly distributed, as shown in the SEM image in Figure 1a. The length, diameter, and distribution of the tubes can be identical or different depending on the synthetic method. CNTs have attracted great attention as nanofillers for extremely strong, ultra-lightweight, and high-performing SR nanocomposites for various applications, such as strain sensing.

Nanographite (GR) has a sheet-like morphology with different numbers of graphene layers stacked together [67]. These GR sheets have a thickness of a few nanometers (basically below 10 nm) and lateral dimensions from hundreds of nanometers to submicrons, leading to a high aspect ratio of 5-10 [3]. The SEM micrograph of GR shown in Figure 1a reveals that the graphene layers are stacked in an orderly manner, as seen in graphite or expanded graphite.
Different terminology is used to refer to GR in the literature, such as monolayer, bilayer, or trilayer GE, and few-layer graphene (FLG) for <5 layers of stacked graphene [68]. Other terms include MLG for <10 stacked layers, graphene nanoplatelets (GNPs), expanded graphite (EG), or graphite nanoflakes (GNFs) for stacked graphene layers with thicknesses below 100 nm [68]. The degree of exfoliation of the graphene layers in GR strictly depends on the method of synthesis [3].

Raman spectroscopy is generally employed to assess the structural properties of GR-based carbon nanomaterials, such as defect density, defect structure, and number of stacked graphene layers [69]. Studies have found that the structural properties depend on the type and nature of the carbon nanomaterial investigated (Figure 1b). A significant change in the defect density and defect structure was reported to take place between the different types of carbon nanomaterials [70]. These structural changes are clearly reflected in the Raman signatures of the CB, CNT, and GR nanomaterials (Figure 1b). The Raman spectra display three bands (D-band, G-band, and 2D band) at different Raman shifts as characteristic features of CB [71], CNT [72], and GR [73]. The Raman spectra of CNT and GR show two different phonon oscillations in the G-band and 2D band, but for CB, the 2D band is absent because it is extremely sensitive to changes in structural properties based on interlayer interactions. Moreover, the full width at half maximum of the 2D band of GR signifies the number of graphene layers stacked in the crystalline domain regime. The D-band is directly related to the defect density and structural defects present in the carbon nanomaterials. As indicated in Figure 1b, the D-band and its intensity are directly related to the amount of disorder present in CB, CNT, and GR. The I_D/I_G ratio, which is the intensity ratio of the D- and G-bands, was 1.12 for CB, 1.05 for CNT, and 0.3 for GR, indicating that the lowest defect density occurs for GR and the highest for CB-based carbon nanomaterials [3]. It should be noted that the number of defects increases from GR to CB, thereby affecting the properties of CB-based SR composites more than those of CNT- and GR-based composites.

X-ray diffraction (XRD) is generally employed to investigate the crystalline state of carbon nanofillers. A prominent (002) peak usually appears at 2θ = 26°, which is the characteristic peak for carbon nanomaterials (Figure 1c). The dimensions of crystallite GR in the orthogonal and parallel directions with respect to the structural layers can be estimated using XRD [74]. The intensities of the reflections in XRD patterns are generally not corrected for polarization or Lorentz factors in order to allow better visibility of the (00l) peaks. The d-spacing can be estimated using Bragg's law, and the D_hk correlation length can be estimated using Scherrer's equation, as follows:

d = λ/(2·sin θ_hk) and D_hk = K·λ/(β_hk·cos θ_hk)

where K is Scherrer's constant, λ is the wavelength of the irradiating beam (1.5419 Å, CuKα), β_hk is the width at half height, and θ_hk is the diffraction angle. A correction factor must be used if θ_hk is lower than 1°. D_002 can be obtained by using the (002) reflections. By estimating D_hk and assuming a 0.33-nanometer interlayer spacing, the number of graphene layers stacked in GR and wrapped in CNTs can be estimated. The XRD pattern of GR shows highly ordered features, in which 20-25 graphene layers are generally stacked in the crystalline domain region [3].
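As a worked illustration of the layer-count estimate described above, the sketch below applies Bragg's law and the Scherrer equation to an (002) reflection and divides the resulting coherence length by the assumed 0.33 nm interlayer spacing. The peak position, peak width, and Scherrer constant K = 0.9 are placeholder values, not measurements from the cited work.

```python
import math

WAVELENGTH_NM = 0.15419  # Cu K-alpha, 1.5419 Angstrom expressed in nm

def d_spacing(two_theta_deg: float, wavelength_nm: float = WAVELENGTH_NM) -> float:
    """Bragg's law: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_nm / (2.0 * math.sin(theta))

def scherrer_size(two_theta_deg: float, fwhm_deg: float, k: float = 0.9,
                  wavelength_nm: float = WAVELENGTH_NM) -> float:
    """Scherrer equation: D = K*lambda / (beta * cos(theta)), with beta in radians."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

def stacked_layers(crystallite_size_nm: float, interlayer_nm: float = 0.33) -> float:
    """Approximate number of stacked graphene layers in the coherent domain."""
    return crystallite_size_nm / interlayer_nm

# Illustrative (002) reflection: 2-theta = 26 deg, FWHM = 1.0 deg
D_002 = scherrer_size(26.0, 1.0)
print(f"d(002) = {d_spacing(26.0):.3f} nm, D_002 = {D_002:.1f} nm, "
      f"~{stacked_layers(D_002):.0f} layers")
```

With these placeholder inputs the estimate comes out at roughly 8 nm and about 25 layers, of the same order as the 20-25 layers quoted above for GR.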
Similarly, CNTs show multilayer (35-40 layers) graphene stacking wrapped in a concentric form in the crystalline domain of CNTs [3]. However, the number of layers in CB cannot be estimated using Scherrer's equation because of the random stacking of graphene layers. Moreover, the XRD results reveal that there is an appreciable amount of amorphous carbon in the CB specimens.

FTIR, XPS, and XRD of Graphite, GO, GE
Monolayer, bilayer, and FLG can be synthesized by reduction of graphene oxide (GO) obtained from bulk graphite. FTIR is a promising technique to investigate the chemical changes that occur during the oxidation of graphite to GO and the reduction into GE (Figure 2a). Compared to graphite, a new absorption peak at 1725 cm⁻¹ (assigned to C=O) was seen after graphite was oxidized to GO. However, this absorption peak (C=O peak) disappears upon reducing GO to GE. This shows that the oxygen-containing functional groups are reduced in the process of reducing GO to GE. XPS is further used as a tool to quantify the type and functionalization of GO and GE. The XPS survey for graphite, GO, and GE is reported in Figure 2b. The peak with high intensity at ~284.6 eV represents the sp² and sp³ hybridized carbon atoms. The other distinguished peaks at ~286.5, ~287.7, and ~289.1 eV typically correspond to C-O, C=O, and O-C=O, respectively [75]. The O/C ratio on the surface of the specimens can be extracted through the ratio of the respective peak areas. The extracted O/C ratio for graphite, GO, and GE is 11.2, 35.6, and 18.2%, respectively [75]. The significant increase in the ratio for GO is due to the oxidation of graphite, which increases the amount of oxygen-containing functional groups. However, the decrease in the O/C ratio in GE confirms the successful reduction of GO into GE. Further studies on the successful oxidation of graphite and its reduction into GE were performed using XRD. The XRD patterns of graphite, GO, and GE are presented in Figure 2c. It was found that pristine graphite shows a sharp peak at 26.4°, indicating the ordered structure of graphite. After oxidation of graphite into GO, the peak shifts to 10.9°, with an interlayer distance of 0.85 nm [75]. With the further process of reducing GO into GE, the weak peak at 24.1° of reduced GE highlights the restoration of the graphitic structure with significant exfoliation in its structure [75].

Solution Casting Processing
Nanocomposites are prepared by mixing a solution of room-temperature-vulcanized (RTV)-SR with carbon nanofillers via solution casting. The sample preparation process is initiated by spraying the molds with a mold-releasing agent. The molds are sprayed and dried for 2 h. For the preparation of rubber nanocomposites, carbon nanofillers are mixed with an SR solution for 10 min. The SR/carbon-nanofiller nanocomposites are then subjected to vacuum for 30 min to remove air cavities from the SR matrix. Subsequently, 2 phr of vulcanizing agent is added, and the mixture is poured into the sprayed mold, which is then manually pressed firmly and kept for 24 h at room temperature for vulcanization.
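Because filler and curative levels throughout this review are quoted in phr (parts per hundred rubber), a small helper makes the conversion to batch masses explicit; the 100 g rubber batch and the 3 phr CNT loading below are hypothetical examples rather than the recipe used above.

```python
def phr_to_grams(rubber_mass_g: float, phr: float) -> float:
    """phr = parts per hundred rubber by mass, so the additive mass scales with the rubber mass."""
    return rubber_mass_g * phr / 100.0

rubber_g = 100.0                       # hypothetical RTV-SR batch
cnt_g = phr_to_grams(rubber_g, 3.0)    # 3 phr CNT -> 3.0 g
cure_g = phr_to_grams(rubber_g, 2.0)   # 2 phr vulcanizing agent -> 2.0 g
print(cnt_g, cure_g)
```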
Samples are then removed from the sprayed molds and the mechanical and electrical properties are tested, as in our previous study [76,77]. A schematic of the procedure for the preparation of SR-based nanocomposites is shown in Scheme 2.

Scheme 2. Schematic presentation of the preparation of RTV-SR-based nanocomposites and the suggested mechanism for reinforcement of the RTV-SR matrix with carbon nanofillers.

Filler Dispersion through AFM
AFM is an important tool for studying the substrate depth, grain size, and roughness of filled polymer composites. Figure 3 shows the use of AFM to investigate the dispersion of filler particles [78], such as CNTs, GR, CB, and their hybrids. The phase images (Figure 3a-e), topological images (Figure 3a'-e'), and histograms of the grain heights (Figure 3a''-e'') are presented. As seen in the phase images, the CNT and GR particles in the hybrid systems were found to be preferentially located near CB filler particles (black arrows, Figure 3a,b). The migration of CNT and GR particles towards CB particles was proposed to form synergistic filler networks, leading to synergistic effects of the mechanical and electrical properties of the composites. The topological images of the composites show the distribution of GR particles in exfoliated form in hybrid and single composites, along with a few aggregates. The brighter domains of the filler particles in SR can be easily identified and are distributed homogenously, while there are only a few aggregates, especially in GR-filled SR composites.
As a single filler, GR was found to be highly exfoliated, with a grain size in the range of ~30 nm. However, in the hybrid form of GR + CB, the grain size increased to 81 nm [79]. This provided evidence for the restacking of graphene layers in GR owing to Van der Waals interactions with the CB and GR particles, thereby leading to an increase in the grain size of the CB + GR hybrid composite and a decline in the mechanical and electrical properties. However, the grain size of the CNT filler was ~37 nm, which was higher than that of the GR filler, and the grain size of the CB + CNT hybrid was only ~39 nm, which was lower than that of the CNT + GR hybrid [79]. Interestingly, the aggregation of CNTs in the presence of CB was lower than the aggregation observed in the CB + GR hybrid. This also shows that the dispersion of CNTs with and without CB was better than that of GR, demonstrating the superior properties and higher synergistic effect of CNTs and their hybrids with CB. The presence of GR aggregates in single fillers and its hybrid with CB further suggests the higher roughness of their composites compared to composites based on CNTs and CNT + CB. In line with this hypothesis, it was found that the roughness of GR as a single filler was 23 nm, and that of its hybrid with CB was 22 nm [79]. However, the roughness was much lower for CNTs (18 nm) and the CNT hybrid with CB (10 nm) [79]. This further supports the uniform dispersion of CNTs and evidences their synergistic relationship with CB in the hybrid form. The better mechanical and electrical properties of the CNT + CB-hybrid-based SR composites are discussed in later sections of this review. As previously mentioned, AFM reveals the surface roughness, while phase imaging quantifies the surface stiffness, which describes the difference in the modulus of the mechanical properties of the filled composites between the two components. Recently, Lee et al. [34] employed AFM to study the filler percolation threshold for CNTs in RTV-SR composites. From the AFM results, the authors found that at 3 phr CNT, continuous long-range filler networks (or filler percolation threshold) of CNTs were observed in the RTV-SR matrix.

Filler Dispersion through TEM
TEM is frequently used to study filler dispersion in polymer matrices [80]. Figure 4 shows TEM images of the three types of carbon-based nanofillers in the SR matrix, where the grey dots indicate the filler particles. The CNT particles in the SR matrix are visible in Figure 4a,b. It can be seen from the high-resolution TEM image that few CNT particles were agglomerated while continuous CNT networks were formed, as represented by the red lines [52]. In their TEM studies, Hu et al. [81] also showed the agglomeration of CNT in an SR matrix at 5 wt% CNTs.
Although ultrasonic treatment was applied to the CNT before dispersion in the SR matrix, the CNTs were still not dispersed uniformly. This is due to the strong Van der Waals forces and intermolecular π-π interactions among the CNT particles [81]. It is evident that the CNT forms a filler percolation threshold owing to its high aspect ratio, which causes the CNT particles to form interconnected networks and leads to better mechanical, electrical, and thermal properties [82,83]. The inclusion of GR to form a CNT hybrid in the SR matrix is illustrated in Figure 4c,d. GR particles are found in the vicinity of the CNT particles, which gives rise to a synergistic effect on the properties of the hybrid SR composites. It was also noticed that after the addition of GR, the aggregation of CNT particles was significantly reduced, leading to improved dispersion of CNTs in the SR matrix. Moreover, the percolation phenomenon is more evident after the addition of GR to the CNT/SR composites as more GR particles are located near the CNT particles, as demonstrated by the red (CNT) and blue (GR) lines. In the case of self-assembled CNT-GR in SR composites, as shown in Figure 4e,f, the distribution of GR and CNT is narrower, and, as in the case of the hybrid filler, the CNT particles are closer to the GR particles. The percolation phenomena are indicated by the red and blue dashed lines. In addition, bridging structures were identified between the GR and CNT particles in SR composites in the self-assembled hybrid systems. This behavior enhanced the efficiency and organization of the networks. In addition to the high aspect ratio of CNTs, the close proximity of CNTs to GR particles makes them more efficient at forming filler networks and helps in attaining filler percolation at lower filler contents. Moreover, filler percolation provides a monotonic response of the electrical properties during cyclic loading of self-assembled hybrid systems.

Filler Dispersion through SEM
SEM can be used to study the dispersion of filler particles in a polymer matrix [84,85]. Zhao et al. [86] showed that the formation of filler percolative networks involves two stages of network formation. At lower loadings (e.g., 4 wt%), there are short-range discontinuous networks. In addition, at higher filler loadings (e.g., 8 wt%), there is long-range and continuous filler-network formation. Zhao et al. [86] showed the dispersion of a GR-CB/SR hybrid at a filler loading of 4 wt%. It was found that the properties at lower percentages were worse than those at higher loadings. This accounts for the formation of short, isolated CB networks in both the space and interior arms of the GR in the SR matrix. From the SEM images shown by Zhao et al. [86], it can be concluded that CB forms continuous networks and is interconnected, as shown by the red lines. The properties at 8 wt% CB led to a very high thermal conductivity [86]. Here, the synergistic effect of the GR-CB/SR on the thermal conductivity has previously been described [86].
Figure 5 further shows the microstructure and morphologies of fracture surfaces of GE/SR composites using SEM micrographs under three processing conditions: mechanical mixing, solution mixing, and ball milling [87]. The SEM micrographs revealed filler-rich zones and zones rich with matrix phases. From these investigations, they reported that the dispersion of GE sheets seems to be improved by solution mixing and ball milling. The results in Figure 5 further indicate that the size of GE agglomerates is reduced, and clusters of GE sheets become much looser than those formed during mechanical mixing due to the applied sonication and formation of conductive networks [87]. Kumar et al. [3] reported that among CB, GE, and CNTs, CNTs showed the best dispersion in RTV-SR composites, forming filler percolative networks at very low filler contents.

Polymer-Filler Interaction through Swelling Tests
The physical performance of a composite strongly depends on a number of factors, such as the shape, volume fraction, and particle size of the filler particle; filler-filler interactions; and especially polymer-filler interactions. The polymer-filler interaction depends on the polymer-filler compatibility [88] and the interfacial area [89]. A high interfacial area provides a greater opportunity for polymer chains to interact with the filler particles and their aggregates. The polymer-filler interaction leads to the adsorption of polymer chains on the filler surface. The interfacial interaction and dispersion of filler particles can be enhanced by introducing silane coupling agents to improve the adhesion between the filler and rubber composite phases [90]. The mechanical properties of composites have been improved by adding stiff fillers to a soft SR matrix. An additional contribution to the reinforcement arises from the interfacial interactions between the SR and filler particles.
Thus, the quality of the interfacial interaction plays an important role in determining the properties of the filled composites. These interactions lead to an increase in the degree of cross-linking, which can be determined through equilibrium swelling tests [91]. Figure 6 depicts the polymer-filler interaction using swelling methodology and a Kraus plot. Equilibrium swelling measurements are highly effective in determining the number of effective network chains in a rubber matrix. For the filler added to the SR matrix, swelling tests not only reveal the effect of chemical junctions but also the density of polymer-filler attachments. Kraus demonstrated the theory of interaction between fillers and the polymer matrix [92,93]. This theory describes the swelling of the vulcanizates in certain solvents, such as toluene, through the Kraus equation, in which Q_r and Q_r0 are the individual swelling ratios of filled and unfilled vulcanizates, respectively, and v_r0 and v_r represent the individual volume fractions of filled and unfilled rubber composites after swelling. The slope m of the Kraus plot is governed by a constant c that is characteristic of the reinforcing filler but independent of the degree of vulcanization, the nature of the solvent, and the polymer matrix. According to Kraus, if there is good polymer-filler interaction, the restriction to swelling of the rubber leads to a decrease in the ratio of v_r0/v_r as the filler loading in the polymer composite is increased. Therefore, application of the Kraus equation is useful for determining the adhesion between polymer chains and filler particles in polymer composites [94]. In this case, according to Kraus' theory, the higher the negative slope is, the better the polymer-filler interaction and reinforcement are for the filler and the polymer matrix in polymer composites. In Figure 6, the slope is 0.95 for 3 phr expanded graphite (EG) and 0.83 for 15 phr EG in the SR matrix, indicating good reinforcement of the SR matrix by EG [95]. It can also be concluded that as the reinforcement increases, the degree of stress transfer from the polymer chains of the SR matrix to the EG particles increases, which, in turn, results in an increase in the constraint zone in the polymer chains of the SR matrix.
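A Kraus plot of this kind can be generated directly from swelling data. The sketch below assumes the commonly used form of the Kraus equation, v_r0/v_r = 1 − m·[φ/(1 − φ)], and extracts the slope m by a least-squares fit; the swelling values are invented for illustration and are not the data behind Figure 6.

```python
import numpy as np

def kraus_slope(filler_volume_fraction, vr0_over_vr):
    """Fit v_r0/v_r = 1 - m * [phi / (1 - phi)] and return the Kraus slope m.

    A larger m (a steeper decrease of v_r0/v_r with filler loading) indicates
    stronger polymer-filler interaction in this convention.
    """
    phi = np.asarray(filler_volume_fraction, dtype=float)
    y = np.asarray(vr0_over_vr, dtype=float)
    x = phi / (1.0 - phi)
    # Least-squares fit of (1 - y) = m * x through the origin
    return np.sum(x * (1.0 - y)) / np.sum(x * x)

# Hypothetical swelling results for increasing filler loadings
phi = [0.01, 0.03, 0.05, 0.08]
vr_ratio = [0.99, 0.97, 0.955, 0.93]
print(f"Kraus slope m = {kraus_slope(phi, vr_ratio):.2f}")
```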
Kumar et al. [2] also studied the strength of the interfacial interaction through a Kraus plot. The CNT specimen showed a higher slope and, thus, a higher Kraus constant than the CB-based RTV-SR composite. This is attributed to the higher interfacial interaction and high networking density of the CNT particles in the RTV-SR composite.

Under Compressive Strain
The compressive and tensile mechanical properties depend on a number of factors, but the dispersibility and interfacial stress transfer between the filler and polymer composite are the most important [96]. It is expected that the addition of a small amount of CNTs significantly improves the mechanical properties of composites owing to the high aspect ratio and attainment of the filler percolation threshold at lower CNT amounts in the polymer composites.
The compressive mechanical properties and synergistic effects of filled vulcanizates are shown in Figure 7. Figure 7a presents a graph of the compressive stress against the compressive strain. It can be seen that the compressive stress increases with increasing compressive strain in all vulcanizates [97]. This is due to the impact of (a) the large, increasing density of the filler clusters in the polymer composites, (b) the tangling and detangling of the filler clusters at different strains, and (c) the improved interfacial adhesion and filler networking in polymer composites [98,99]. Figure 7a shows the dominant mechanical properties of CNT-based composites [97], which are attributed to (a) the orientation and de-orientation of the filler clusters, especially those with high aspect ratios such as CNTs; (b) decreasing the interparticle distance of the filler, leading to attainment of the filler percolation threshold and, hence, improved properties; and (c) a high interfacial area between the CNTs and polymer chains in the composites, which leads to efficient stress transfer from polymer chains to CNT aggregates. High stress transfer assists in efficient heat dissipation within the composite [100]. The elastic modulus was measured at a compressive strain of 0.23% and is plotted against filler loading in Figure 7b. The figure shows that hybrids based on CNTs and GR exhibit improved compressive mechanical properties (i.e., compressive stress and elastic modulus), especially at higher filler contents. This enhancement is attributed to the synergistic effects of the hybrid fillers [101] and the restriction of polymer chain mobility due to the increased fraction of bound rubber in the polymer composites. Many recent studies have shown that the improved mechanical properties of hybrid fillers in polymer composites are due to synergistic effects. The results in Figure 7 also indicate that GR and CNTs are well dispersed in the polymer matrix [97]. The improvement in compressive mechanical properties is also due to the attainment of the filler percolation threshold between CNTs and between GR and CB in the polymer composites. The high surface area of the filler provides good interfacial interaction and a high interfacial area, and thus, higher adsorption of the polymer chains on the filler particles is possible. The good interfacial interaction leads to heat dissipation from the polymer chains in the polymer composites to filler particles, resulting in improved mechanical properties [102]. Kumar et al.
[2] also showed that CNTs as a filler have better mechanical properties than CB does. The authors showed that the Young's modulus increased by 272% with 2 phr CNTs and increased to as high as 706% at 8 phr CNTs. In addition, the Young's modulus improved by only 125% with 10 phr CB-reinforced RTV-SR composites [2]. These findings are further supported by another study in which the compressive moduli of CNT, GE, and CB were reported [3]. The authors showed that CNTs are outstanding fillers for improving the modulus and stiffness of RTV-SR-based composites [3]. They attributed the outstanding properties of CNTs to the high aspect ratio, which leads to filler percolation and long-range networks at lower filler contents (2 phr) and an exponential improvement in mechanical properties beyond the addition of 2 phr CNTs [3]. Kumar et al. [3] further showed that CNT (among CNT, CNT-GR, and GR) is the best filler for improving mechanical properties. With 3 phr CNTs, the tensile strength and Young's modulus were 1.78 and 1.77 MPa, respectively, which are better than those of other fillers. However, the other fillers, such as GR, showed a higher fracture strain of 244% compared to CNTs with 191% [34]. Under Tensile Strain Tensile strength and fracture strain are a few of the many properties extracted through tensile mechanical measurements. The stress-strain behaviors of CNTs (Figure 8a), CNT-GR (Figure 8b), and GR (Figure 8c) are presented. From the measurements, it was found that tensile stress increased with increasing tensile strain until fracture strain. The increase in tensile stress is assumed to be due to (a) filler-filler and polymer-filler interactions, (b) efficient filler networking in the polymer composites, (c) stress transfer from polymer chains to filler aggregates, and (d) re-orientation of polymer chains and filler particles in the direction of the tensile strain. The CNT-based composites showed the highest tensile stress due to fracture strain. This is attributed to the high aspect ratio, high interfacial area, and the formation of percolative filler networks at lower CNT contents in comparison to GR [103]. The synergistic effect of the CNT and GR hybrid is shown in Figure 8b, in which the fracture strain of the hybrid species is higher than that of the individual CNT and GR filler species. Furthermore, Song et al. [4] showed that pure SR has poor mechanical properties, such as a low tensile strength of 0.40 MPa and a fracture strain of 115.07%. With the addition of a hybrid filler at 10 phr, the tensile strength reached 4.5 MPa and the fracture strain increased to 211.15%. The mechanical properties of the SR/conductive carbon black-polymerized MPS-carbon nanotube (CCB-P-CNT) composite were significantly enhanced compared to those of the SR filled with 10 phr CNTs. This is mainly attributed to the fact that conductive carbon black (CCB) modified by polymerization of 3-trimethoxysilyl propyl methacrylate monomer (PMPS) improves the dispersibility of CNTs in the SR matrix [4]. Similarly, the addition of graphene nanoribbon (GNR) to the SR matrix improved the mechanical properties of the composites [104]. For instance, by adding 2 wt% GNR to the SR matrix, the tensile strength increased to 0.40 MPa, the fracture strain decreased to 78%, and the Young's modulus increased to 0.85 MPa [104]. Sarath et al. [95] also showed that the addition of EG to the SR matrix improved the mechanical properties. 
For instance, with 7 phr EG in the SR matrix, the tensile strength increased to 6.8 MPa, the fracture strain increased to 221%, the Young's modulus at 100% strain increased to 3.86 MPa, the tear strength increased to 30.2 Newton-millimeter (N·mm), and the hardness increased to 65 [95]. The tensile modulus (Figure 8d) and reinforcing factor (Figure 8e) were reported for different fillers and their hybrids, which increased with increasing filler content in the SR matrix. The tensile modulus and reinforcing agent were the highest for CNT-based composites. The superior properties, especially for the CNT-based composites, were due to the efficient filler networking in the SR matrix. The filler-filler and polymer-filler interactions in the SR composites were also speculated to be the reason for the improved properties upon increasing the filler content in the SR-based rubber matrix. It is also interesting to note that the tensile modulus and reinforcing factor of the CNT-GR hybrid at 5 phr were higher than those of the CNT composites. This was attributed to the synergistic effect of the CNT-GR hybrid. The effect of fillers and their hybrids on the fracture strain is shown in Figure 8f. It can be seen that the fracture strain, especially for CNTs, increased for up to 2 phr filler content and then decreased, while for the CNT-GR hybrid and GR, the fracture strain increased steadily until 5 phr. The decrease in CNT content after 2 phr is due to the aggregation of CNT particles in the SR matrix. Moreover, the increase in fracture strain for the CNT-GR hybrid is due to the synergistic effect and lubricating effect of GR in the hybrid and as a single filler. GR flakes did not appear to significantly affect the viscosity of the composites compared to the CNT fillers. In contrast, CNTs readily form filler networks at lower filler contents because of their high aspect ratio, leading to better properties than those imparted by GR flakes. Kumar et al. [105] showed that with the use of a thinner in an SR composite, the fracture strain increases to 241% from 226% (unfilled), while the Young's modulus drops from 0.6 (unfilled) to 0.17 MPa. Therefore, it can be concluded that the use of a thinner softens the composites and makes them useful for various applications, such as flexible devices that require softness, flexibility, and stretchability [105]. The authors further demonstrated that the dissipation losses fell sharply from almost 90 (10 phr thinner) to 22% (60 phr thinner) [105]. the efficient filler networking in the SR matrix. The filler-filler and polymer-filler interactions in the SR composites were also speculated to be the reason for the improved properties upon increasing the filler content in the SR-based rubber matrix. It is also interesting to note that the tensile modulus and reinforcing factor of the CNT-GR hybrid at 5 phr were higher than those of the CNT composites. This was attributed to the synergistic effect of the CNT-GR hybrid. The effect of fillers and their hybrids on the fracture strain is shown in Figure 8f. It can be seen that the fracture strain, especially for CNTs, increased for up to 2 phr filler content and then decreased, while for the CNT-GR hybrid and GR, the fracture strain increased steadily until 5 phr. The decrease in CNT content after 2 phr is due to the aggregation of CNT particles in the SR matrix. Moreover, the increase in fracture strain for the CNT-GR hybrid is due to the synergistic effect and lubricating effect of GR in the hybrid and as a single filler. 
Mechanical Properties: Experimental Data vs. Theoretical Prediction
The extent of reinforcement of a polymer matrix by a filler depends on various parameters of the filler, such as the shape, particle size, aspect ratio, orientation, and surface area of the filler particle, the viscosity of the polymer matrix, and its modulus. The high surface area, which is due to the small particle size of carbon nanomaterials, and especially the high aspect ratio of CNTs make them promising candidates for the reinforcement of polymer matrices. In contrast, theories and models help to project the modulus of the reinforced polymer matrix by considering the shape, size, and aspect ratio of the filler particles used in the polymer matrix. Therefore, it is interesting to compare experimental data with those predicted using theoretical models. The Guth model, which is largely based on the aspect ratio of the filler, has been widely used to predict the modulus of filled polymer composites [107,108]. The equation used in the prediction is

E'/E'0 = 1 + 0.67·f·Φ + 1.62·f²·Φ²

where E' and E'0 are the moduli at low deformation in filled and unfilled rubber, respectively; Φ is the filler volume fraction of the system; and f is the aspect ratio of the filler particles. The quadratic term represents the mutual disturbance among the connected filler particles. Similarly, the Halpin-Tsai model can be used to predict the modulus of filled composites [109,110]. This model largely considers the aspect ratio (f) and orientation of the filler particles. The general expression of the model is given by

E'/E'0 = (1 + 2·f·η·Φ)/(1 − η·Φ)

where η is given by

η = (E_f/E'0 − 1)/(E_f/E'0 + 2·f)

with E_f denoting the modulus of the filler. Based on the above models, the modulus of filled composites was predicted using the respective aspect ratio values of 40 for the Guth model and 45 for the Halpin-Tsai model and is presented in Figure 9 [111]. The behaviors of the theoretical modulus and the experimental data are depicted, and it can be seen that the experimental data fit well with the theoretical models. In addition, the Halpin-Tsai model fit more accurately than the Guth model, which deviated as the filler volume fraction increased. However, it is noteworthy that both models assumed similar aspect ratios of the filler used in the reinforcing matrix [111].
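The two model predictions can be reproduced numerically as sketched below, assuming the Guth and Halpin-Tsai forms written above; the filler modulus E_f, the unfilled modulus, and the volume-fraction range are placeholders chosen only to illustrate the calculation, not the values fitted in Figure 9.

```python
import numpy as np

def guth(phi, f, e0=1.0):
    """Guth model: E'/E'0 = 1 + 0.67*f*phi + 1.62*(f*phi)**2, scaled by the unfilled modulus e0."""
    phi = np.asarray(phi, dtype=float)
    return e0 * (1.0 + 0.67 * f * phi + 1.62 * (f * phi) ** 2)

def halpin_tsai(phi, f, e0=1.0, ef=1000.0):
    """Halpin-Tsai model with shape factor zeta = 2*f and eta = (Ef/E0 - 1)/(Ef/E0 + zeta)."""
    phi = np.asarray(phi, dtype=float)
    zeta = 2.0 * f
    ratio = ef / e0
    eta = (ratio - 1.0) / (ratio + zeta)
    return e0 * (1.0 + zeta * eta * phi) / (1.0 - eta * phi)

phi = np.linspace(0.0, 0.05, 6)   # filler volume fraction
print(guth(phi, f=40))            # aspect ratio 40, as quoted for the Guth fit
print(halpin_tsai(phi, f=45))     # aspect ratio 45, as quoted for the Halpin-Tsai fit
```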
Dynamic Mechanical Thermal Analysis
Figure 10 shows the dynamic mechanical thermal analysis of an SR matrix filled with GF and a GF-CB hybrid. Figure 10a shows that the behavior of the filled composites was identical despite the different magnitudes. In the glassy region, the addition of GF and the GF-CB hybrid causes a noticeable increase in the storage modulus, as shown in Figure 10b. In one study, incorporating 0.5 wt% GF in GF/SR composites was found to be more efficient than incorporating 2D graphene nanosheets (GNS) [86]. Eventually, by increasing the CB content from 2 to 8 wt% in the CB-GF hybrid, the storage modulus of the SR composites gradually improves. However, as the temperature increases from −139 to −110 °C, the storage modulus in Figure 10b decreases sharply as a result of glass transition phenomena [86]. Nevertheless, the rate of decrease in the modulus with increasing temperature is compensated by rigid fillers such as GF and CB, which impart thermal stability to the SR composites.
Moreover, in the rubbery state, the GF-CB-hybrid-based SR composites also had a higher storage modulus than the unfilled SR and GF-filled composites (inset of Figure 10a). Despite the considerable flexibility of the polymer chains of the SR matrix in this rubbery state, the rigid GF and CB filler particles could confine the segmental motion under strain.

Tribological Properties
Figure 11 shows the influence of the applied load and sliding velocity on the friction coefficient (COF) and wear rate. It is interesting to note that as the load increases, the COF decreases (Figure 11a) and the wear rate increases (Figure 11b). Moreover, the values for the EG-filled composites are lower than those of the unfilled SR composites. At a lower load, the COF was high and the specific wear rate was lower because lower loads were unable to break the EG-SR interfacial interaction in the composites. As the applied load was increased to 30 Newton (N), there was a drastic decrease in the COF and an exponential increase in the wear rate [95]. At higher loads, the interfacial interaction between EG and SR broke, which led to a decrease in the COF and an increase in the wear rate. Figure 11c,d show the influence of various sliding velocities on the COF and wear rate. With the increase in EG content in the SR matrix, the COF and wear rate decreased. The composite tested at a low sliding velocity (1 m/s) exhibited a higher COF and wear rate than that tested at a higher sliding velocity (5 m/s). Here, it was observed that 7 phr EG was the optimum value [95]. When the composites were studied under rotation at a low sliding velocity, effective contact between the SR and counter face occurred. Therefore, the specimen with unfilled SR exhibited a higher COF and wear rate. However, with the addition of EG to the SR matrix, the lubricating properties of EG affected the COF and wear properties. In addition, EG forms lubricating films, thus reducing the contact area between the SR and EG and leading to a reduction in the COF and wear rate. However, the case with a higher sliding velocity of 5 m/s was different.
Less time was available to form EG films, if they were generated at all; and when films were generated, the higher sliding velocity reduced their ability to remain between the rubbing surfaces. In conclusion, the wear rate and COF are significantly affected by the addition of EG, the sliding velocity, and the applied load [95]. Moreover, temperature also affects the COF, wear rate, and wear mechanism.
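The wear figures discussed above are typically reported as a friction coefficient and a specific wear rate, the latter commonly defined as the worn volume per unit normal load and sliding distance. The short sketch below only illustrates that arithmetic; the load, sliding time, mass loss, and density are hypothetical values, not measurements from [95].

```python
# Generic sketch (illustrative values, not data from [95]): computing the
# friction coefficient and the specific wear rate commonly reported in
# pin-on-disc type tests: COF = F_friction / F_normal and
# Ws = V_worn / (F_normal * sliding_distance), with V_worn from mass loss.

def friction_coefficient(friction_force_n, normal_load_n):
    return friction_force_n / normal_load_n

def specific_wear_rate(mass_loss_g, density_g_cm3, normal_load_n, sliding_distance_m):
    """Specific wear rate in mm^3/(N*m)."""
    volume_mm3 = (mass_loss_g / density_g_cm3) * 1000.0  # cm^3 -> mm^3
    return volume_mm3 / (normal_load_n * sliding_distance_m)

if __name__ == "__main__":
    # Hypothetical test on an EG/SR composite at 30 N and 1 m/s for 30 min.
    load_n = 30.0
    sliding_distance_m = 1.0 * 30 * 60        # velocity * time
    cof = friction_coefficient(friction_force_n=12.0, normal_load_n=load_n)
    ws = specific_wear_rate(mass_loss_g=0.015, density_g_cm3=1.2,
                            normal_load_n=load_n, sliding_distance_m=sliding_distance_m)
    print(f"COF = {cof:.2f}, specific wear rate = {ws:.2e} mm^3/(N*m)")
```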
Thermal Properties
Thermal Conductivity, TGA, and DTG
Figure 12a shows the thermal conductivities of different composites based on GF and GF-CB hybrids in the SR matrix. A sharp improvement in the thermal conductivity was observed when GF was added to the SR matrix. Then, when CB was added as a hybrid filler, the thermal conductivity was almost constant up to 4 wt% of the CB-GR hybrid in the SR matrix [86]. From 4 to 8 wt%, there was a sharp increase in the thermal conductivity. This was presumed to be due to the shift from short-range and discontinuous (4 wt%) filler networks to long-range, continuous, and well-connected filler networks (8 wt%) [86]. This hypothesis was confirmed by SEM, as shown in Figure 5. In other words, at 8 wt% CB-GR hybrid, filler percolation occurs, which leads to a marked improvement in the thermal conductivity. Hu et al. [81] also determined the thermal diffusivity of SR composites based on GF and CNTs. The thermal diffusivity of pure SR was reported as 0.105 m²/s, and it increased with the addition of GF and CNTs. For instance, the thermal diffusivity was 0.113 and 0.119 m²/s with 1 wt% GE and 5 wt% CNTs, respectively. This can be attributed to GR's ability to strongly interact with polymer chains in the SR matrix and lower the agglomeration rate compared to CNTs [81]. Figure 12b,c show thermogravimetric analysis (TGA) and differential thermogravimetric (DTG) curves of composites based on GF- and CB-filled SR. Weight loss began as the temperature increased, and the sample weight decreased significantly between 400 and 700 °C [86]. The weight loss was the lowest for the 8 wt% CB/GF-filled SR matrix, which is attributed to the improved thermal conductivity of the composites, as described in Figure 12a. At 400 °C, the apparent weight loss of the composite was due to the breakage of Si-Si, C-H, and Si-O bonds that led to the production of volatiles such as H2, CH4, etc., and produced the first significant weight loss [86]. With the further increase in temperature from 400 to 550 °C, there was a further loss of weight, which could be due to further degradation of polymer chains producing volatile substances [86]. It is interesting to note that with the addition of CB and GF to the SR matrix, the weight loss was lower than that in the virgin SR matrix, as described above [86]. This is due to the increase in networking and additional polymer-filler interactions that provide higher thermal stability at higher temperatures.

Electrical Properties
The electrical properties of the carbon nanostructure (CNS)-filled SR composites are shown in Figure 13. An exponential increase in electrical conductivity occurred when the CNS loading was increased from 0 (unfilled SR) to 0.05 wt%. The electrical conductivity was further increased to 0.11 S/cm with increasing CNS loading up to 1.5 wt%, which is far higher than that of the unfilled SR matrix. A drastic improvement in the electrical conductivity was observed at a particular wt% of CNS, which is known as the filler percolation threshold. The filler percolation threshold for CNS was as low as 0.05 wt%, where the electrical conductivity increased from 10⁻¹⁴ to 10⁻⁵ Siemens/centimeter (S/cm) [53]. The continuous and interconnected filler network of the CNS results in a low electrical percolation threshold. The aspect ratio of CNS (>3000) is also assumed to play a vital role in forming conducting networks at lower CNS contents, as demonstrated in Figure 13 [53]. The percolation threshold indicates that a minimum amount of CNS is required for the CNS contact points to become interconnected, that is, a minimum average distance between the CNS particles is established to create a conductive network within the SR matrix.
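Near such a threshold, the conductivity of particulate composites is commonly described by the classical percolation power law σ = σ0(Φ − Φc)^t. The sketch below is only a generic illustration of how Φc and t can be estimated from conductivity-versus-loading data by a coarse grid search; the data points are synthetic and are not taken from [53].

```python
# Illustrative sketch (not the analysis of [53]): above the percolation threshold,
# composite conductivity is often described by sigma = sigma0 * (phi - phi_c)**t.
# Here phi_c, sigma0 and t are fitted to synthetic points by a simple grid search.

import math

# Synthetic (hypothetical) data: filler loading in wt% and conductivity in S/cm.
data = [(0.10, 1e-6), (0.25, 3e-5), (0.50, 4e-4), (1.00, 5e-3), (1.50, 1.1e-1)]

def sse(phi_c, sigma0, t):
    """Sum of squared errors in log-conductivity for points above phi_c."""
    err = 0.0
    for phi, sigma in data:
        if phi <= phi_c:
            continue
        pred = sigma0 * (phi - phi_c) ** t
        err += (math.log10(pred) - math.log10(sigma)) ** 2
    return err

best = None
for phi_c in (0.02, 0.05, 0.08):
    for t in (1.5, 2.0, 2.5, 3.0):
        for exp10 in range(-3, 2):
            sigma0 = 10.0 ** exp10
            e = sse(phi_c, sigma0, t)
            if best is None or e < best[0]:
                best = (e, phi_c, sigma0, t)

print("best fit: phi_c={:.2f} wt%, sigma0={:.0e} S/cm, t={:.1f}".format(*best[1:]))
```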
Moreover, Yang et al. [52] derived the relationship between the electrical conductivity and filler (CNT or GE) volume fraction in an SR matrix. The authors showed that as the GE and CNT contents in the SR matrix increased, the electrical conductivity increased exponentially. The results revealed a synergistic effect in the CNT-GE hybrid, which demonstrated higher electrical conductivity than that of GE and CNT as single components [52]. The percolation thresholds were 1.97 wt% (CNT), 1.27 wt% (CNT-GE hybrid), and 0.92 wt% (self-assembled CNT-GE hybrid) [52]. These values are slightly higher than those reported for the CNS/SR composites in Figure 13 [53]. Song et al. [4] determined that the percolation threshold for the SR/CCB-P-CNT composite is 0.30 phr. However, this value is lower than that reported by Yang et al. [52] and higher than those reported in Figure 13 [53]. In addition, the electrical conductivity of the composite reported by Song et al. [4] was found to be affected by vulcanization of the SR matrix. This is attributed to the shrinkage between SR molecular chains, which leads to the destruction of the conductive network formed in the SR matrix and a decrease in electrical conductivity [4]. After vulcanization, the electrical percolation shifts from 0.39 to 0.55 phr [4]. In another study by Hu et al. [81], electrical measurements showed that the volume resistivity decreased with the addition of GF and CNTs. In this study, the percolation threshold was approximately 2 wt% for graphene and 5 wt% for CNTs. These values are quite high compared to those reported by Song et al. [4], Yang et al. [52], and in Figure 13 [53]. The lower percolation threshold for GF than for CNTs reported by Hu et al. [81] can be attributed to the uniform distribution of GF particles compared to the aggregated CNT particles in the SR matrix. These aggregated CNTs have a lower tendency to form long-range percolative networks, and thus, a higher amount of CNTs is required. Lee et al. [34] studied resistance as a function of tensile strain for CNT and CNT-GR hybrid fillers. The authors found that the resistance at fracture strain for 3 phr filler was 0.4 kΩ for CNTs, which is much lower than that for the CNT-GR hybrid (59.4 kΩ) [34].

Piezoresistive Response of CNS/SR Composites under Cyclic Strain
Figure 14a-d show the piezoresistive response and durability of specimens under cyclic strain with increasing amplitude.
The relative resistance can be correlated with the applied stress-strain cycles (stretch-release cycles), as the relative resistance increases under stretching and decreases as the stress is released. Under cyclic strain, the system generated a residual strain in each subsequent loading cycle. This residual strain and the accumulation of resistance can be attributed to the viscous nature of the SR matrix. Hence, in addition to the self-sensing capability of these systems, the residual strain and residual resistance can be considered. Arif et al. [53] also showed that when the strain is zero in the unloading part of the cyclic process, the resistance does not return to its initial value, leading to residual resistance and residual strain in the SR composite. This is due to the permanent destruction of a few interactions in the conductive pathways made by filler particles in the SR matrix under multi-hysteresis cyclic loadings [53]. The permanent damage to the conductive networks of the SR composites can be assessed by measuring the volume resistivity, based on the correlation between the resistivity and the inter-particle distances of the conductive filler particles [53]. Figure 14e shows a strain contour map of the residual strain obtained after the release of different loading cycles. The distribution of strain can be observed at the macroscopic level in the contour map. Using the residual strain and residual resistance, one can predict the zones of damage and failure initiation. For example, macroscopic strain localization could indicate higher damage because of the local CNS network arrangement [53].

Strain Sensors
The stretchability and strain sensitivity of GE/SR composites have been recorded. For this purpose, cyclic strain was applied and various strain-sensitive measurements, such as changes in resistance upon cyclic strain and durability, were carried out [1]. Figure 15a shows the response of the change in resistance as a function of increasing GE content in the SR matrix. The change in resistance increased gradually with increasing strain, but the process of change was non-monotonic. The "shoulder peak" phenomenon was observed for these GE-based SR composites. Moreover, it was found that the second peak increased gradually with increasing cyclic loading and stabilized after a few cycles. The deconstruction and reconstruction of conductive GE networks, which stabilized after a few cycles, were believed to cause such a response. Figure 15b shows the change in resistance as a function of increasing strain from 5 to 30%. It can be noted that the resistance change was higher at the higher strain amplitude (30%), which was assumed to be due to the higher discontinuity and increasing interparticle distance in the conductive GR networks at larger strains [1]. In addition to the strain magnitude, the strain rate also affects strain-sensitive properties.
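A generic post-processing sketch of the quantities discussed here is given below: the relative resistance change ΔR/R0 along a stretch-release cycle and the residual resistance left when the strain returns to zero. The resistance and strain traces are synthetic, idealized values used only to show the bookkeeping; they are not data from [1] or [53].

```python
# Generic post-processing sketch (synthetic data, not from [1] or [53]):
# extracting the relative resistance change dR/R0 during stretch-release
# cycles and the residual resistance left when the strain returns to zero.

def relative_change(resistance, r0):
    return [(r - r0) / r0 for r in resistance]

def residual_resistance(strain, resistance, r0, tol=1e-6):
    """Relative residual resistance at the points where strain returns to ~0."""
    return [(r - r0) / r0 for eps, r in zip(strain, resistance) if abs(eps) < tol]

if __name__ == "__main__":
    # Two idealized stretch-release cycles (strain in %, resistance in kOhm).
    strain     = [0, 10, 20, 30, 20, 10, 0, 10, 20, 30, 20, 10, 0]
    resistance = [1.00, 1.15, 1.40, 1.80, 1.45, 1.20, 1.05,
                  1.18, 1.44, 1.82, 1.48, 1.22, 1.08]
    r0 = resistance[0]
    print("dR/R0 trace:", [round(x, 3) for x in relative_change(resistance, r0)])
    print("residual dR/R0 at zero strain:",
          [round(x, 3) for x in residual_resistance(strain, resistance, r0)])
```

In this synthetic trace the residual resistance grows slightly from cycle to cycle, mirroring the accumulation attributed above to the viscous nature of the SR matrix.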
Yang et al. [1] studied the relative resistance as a function of cyclic strain for CNTs, a CNT-GE hybrid, and a self-assembled CNT-GE hybrid. The authors found that the relative resistance was higher for the first cycle and then stabilized in subsequent cycles, which is related to the viscoelasticity of the SR matrix. Similar behavior was reported by the authors in Figure 15a.
Figure 15. Change in resistance upon applied cyclic strain: (a) change in resistance at increasing GE content in SR matrix; (b) change in resistance at increasing strain from 5 to 30%; (c) change in resistance at different strain rates increasing from 10 to 50 mm/min; (d) durability cycle for the composites at 30% strain and strain rate of 10 mm/min; (e) application of the strain sensor for monitoring on a rubber seal; (f) cyclic response of the strain sensor under different cycling strains [1].
Figure 15c shows the change in resistance as a function of the strain rate increasing from 10 to 50 mm/min. The results show that with an increase in the strain rate, the change in resistance increases. The results also highlight that the larger resistance change at higher strain rates corresponds to higher strain sensitivity. This could be due to the higher stress applied to the material induced by the higher strain rate. Another reason could be that the destruction of larger GE networks at higher strain rates led to a higher change in resistance. Figure 15d shows the durability cycle of the GE/SR composites over 500 cycles. It was found that as the number of cycles increased, the change in resistance initially decreased and then stabilized. This can be attributed to the fresh breakdown and reformation of the GE networks, which stabilized after a certain number of cycles. These features indicate that the GE/SR composites have good stability and durability. In addition, based on these measurements, the GE/SR composites possess good fatigue properties and a higher ability to restore stress. Kumar et al. [3,106] also studied the durability of strain sensors for CNT-, CNT-GR-, and CB-based composites. From the results, they found that the resistance loss was negligible, indicating high electromechanical stability for the CNT-, CNT-GR-, and CB-based composites used as strain sensors [3,106]. Figure 15e shows the use of a strain sensor based on a GR/SR composite firmly bound to a rubber seal for monitoring. Both compressive displacement and compressive load are important factors in monitoring the rubber seal. The strain sensor based on the GE/SR composite could monitor both these states and generate electrical signals under different compressive loads. Figure 15f shows that the relative resistance of the strain sensor based on the GR/SR composite changed gradually when the rubber seal underwent compressive displacement. Generally, the relative resistance increases when the rubber seal undergoes compressive displacement and decreases when the rubber seal recovers [1]. Moreover, the resistance of the flexible strain sensor was very stable under strain cycling, indicating that the GE/SR composite sensor can be successfully used in sensing strain on a rubber seal.
Deformation recovery is an important performance indicator for strain sensors. The deformation recovery rate demonstrated by Song et al. [4] showed that the composites based on the SR matrix possessed a high recovery rate. For instance, the deformation recovery rate was 99.8% at 20% strain, which decreased to 97.1% for strains from 50 to 200%. These results indicate that the SR/CCB-P-CNT composite exhibits good deformation recovery performance as a strain sensor [4]. Moreover, Song et al. [4] reported an SR/CCB-P-CNT composite with excellent mechanical properties, high strain sensitivity, and good electrical properties that could be used to detect human motions. The authors showed a change in resistance while bending and releasing a finger during drinking, demonstrating a highly sensitive strain sensing application [4]. For strain sensors, the gauge factor, which is defined as the change in the relative resistance against the applied strain, is important in addressing their application. Yang et al. [1] estimated the gauge factor for GR/SR composites with different GE contents. The gauge factors were 143 for 2.3 wt%, 76.7 for 3 wt%, 50.7 for 4 wt%, and 30.3 for 5 wt%. It is interesting to note that the strain sensing range increased from 35 to 170% as the GE content increased from 2.3 to 5 wt% [1]. Kumar et al. [3,106] studied the gauge factors for strain sensors based on CNTs, a CNT-GR hybrid, and CB. The authors found that the RTV-SR strain sensors based on CB and the CNT-GR hybrid had higher gauge factors of 7.8 and 32.4, respectively, compared to that for CNTs (1.3) [3,106]. The authors further studied the transient response to applied strain for CNT- and CB-based composites. From the measurements, they found that CB- and CNT-GR-hybrid-based strain sensors yielded better responses than those from CNT-based composites used as strain sensors [3,106]. These results are consistent with those obtained for the gauge factor. Moreover, both composites showed a decrease in resistance during the strain-holding period. This is attributed to the stress relaxation of the rubber matrix [3].
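Since the gauge factor is defined here as the relative resistance change divided by the applied strain, it can be computed directly from a resistance-strain pair, as in the small sketch below. The resistance values are invented for illustration; only the definition GF = (ΔR/R0)/ε is taken from the text.

```python
# Sketch of the gauge factor definition used above: GF = (dR/R0) / strain.
# The numbers below are illustrative and only show the arithmetic; they are
# not taken from the cited measurements.

def gauge_factor(r0_ohm, r_ohm, strain):
    """Gauge factor from the relative resistance change at a given strain."""
    return ((r_ohm - r0_ohm) / r0_ohm) / strain

if __name__ == "__main__":
    # Example: a sensor whose resistance rises from 10 kOhm to 15 kOhm at 35% strain
    # has GF = 0.5 / 0.35 = 1.43.
    print(round(gauge_factor(10e3, 15e3, 0.35), 2))
    # Conversely, a GF of 143 evaluated at 35% strain would correspond to a
    # relative resistance change of dR/R0 = 143 * 0.35 = 50.05.
    print(round(143 * 0.35, 2))
```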
Figure 16a,b show the mechanism behind the shielding effectiveness for periodic and randomly distributed microwire/GE fibers. It is well known that when an electromagnetic wave penetrates the specimen under investigation, the wave exhibits several reflections depending on the structure it encounters on the way. As the wave interacts several times with the structure, it subsequently loses energy [112]. The incident microwave energy exhibited two loss phenomena. The first phenomenon was related to the energy absorption of the electrical components of the electromagnetic field, and the second phenomenon was related to the absorption of the ferromagnetic resonance. The other magnetic losses could be due to the magnetic hysteresis property of the wire. The shielding component of the graphene fibers is mainly due to dielectric losses. The formation of conductive pathways by graphene fibers results in reduced relaxation times and improved dielectric losses [112]. In addition, the defects and residual oxygen-containing functionalities in the graphene fibers also improve the polarization losses. Figure 16c,d show the shielding effectiveness of periodic arrays and randomly dispersed microwire or graphene fibers and their combinations. In the MMMGGG array with the highest SE, most of the waves were absorbed by both the magnetic and dielectric properties of the subsequently arranged microwires. The remaining waves could be partly absorbed by the GGG fillers or transmitted through them (Figure 16a). In this arrangement, the microwires gathered to form a thick absorbing layer, and the graphene fibers of the next layer of the composite behaved as a microwave-reflecting layer. Such a layer arrangement yields an "absorb-reflect-absorb" mechanism. However, in the case of MGMGMG arrays, the absorption of the incident wave as it propagates is less efficient due to the presence of graphene fibers between the microwires, which absorb only the electric part of the electromagnetic wave. The aspect ratio of graphene fibers plays an important role in the SE of the composites. A larger aspect ratio will greatly decrease the demagnetizing field and increase the field efficiency and loss due to absorption. A detailed discussion of the mechanism of electromagnetic wave propagation and SE phenomena is described by Xu et al. [112].

Conclusions
The present review highlights that the use of carbon-based nanofillers such as CNTs, GR, and CB in SR-based composites can significantly improve their mechanical, electrical, thermal, and tribological properties. These improved properties have significance for various applications, such as strain sensors. CNT-based SR composites exhibit outstanding properties, and CNTs are the best fillers for different applications. This review summarizes the most recent advancements of carbon-based nanofillers in the SR matrix for various applications, bridging the gaps in the existing literature in terms of the different properties and applications reviewed. It is clear from our analysis that CNTs are outstanding nanofillers among the various carbon-based nanofillers, especially in the SR matrix. Target applications such as strain sensors were found to depend on the electrical conductivity, mechanical stiffness, and stretchability of the SR-based composites. The superior properties of CNTs are due to their favorable morphology, high aspect ratio, and high surface area.

Current Trend and Future Perspectives
The demand for carbon-based nanofillers has surged over the last one to two decades, especially in rubber composites such as silicone rubber. This increase in demand is due to their outstanding mechanical, electrical, thermal, and tribological properties. In the last century, carbon black was traditionally used as a filler, especially in tires, until the discovery of carbon nanotubes in 1991 by Iijima [113,114]. Over the last decade, a trend of increasing publications on CNT-based polymer composites has been witnessed. This increase in interest in CNT-based polymer or rubber composites is due to the achievement of outstanding physico-chemical properties at very low loadings of below 3 phr. CNTs were extensively used as fillers until graphene was synthesized by Geim et al. [115].
The major limitation of using CNTs in rubber composites is the poor barrier properties of CNT-based composites. However, graphene has been quite promising in improving barrier properties due to its sheet-like morphology. In addition to CNTs and graphene, nano carbon black has been used in place of traditional carbon black. The advantage of using nano carbon black is its ability to provide optimum and desired properties at a lower content in polymer composites (below 20 phr) than traditional carbon black filler (around 60 phr). Thus, the difficulties faced in reinforcing with traditional carbon black, such as altered viscoelastic properties, can be overcome by using nano carbon black. The future perspective for rubber composites based on silicone rubber and carbon nanofillers is bright and promising. Researchers will undoubtedly continue searching for new possibilities of using these new materials until further advancements or new discoveries are achieved. However, challenges such as the bulk synthesis of graphene are still a matter of concern for using it at a large scale. To some extent, CNTs have achieved large-scale production and availability. In the same way, graphene forms such as multi-layer graphene and graphene nanoplatelets continue to be useful alternatives until monolayer graphene synthesis is achieved at a large scale [116]. These graphene forms are still promising as new materials to be used as nanofillers for rubber composites. Silicone rubber as a host matrix for rubber composites has been promising for various applications, such as strain sensors. There are a number of advantages of using silicone rubber, such as its softness, easy processing, high aging and thermal resistance, high durability, and good flexibility, as desired for strain sensors. Silicone rubber does have poor fracture strain, but this limitation is not significant, and it remains the material of choice for flexible devices such as strain sensors.

Conflicts of Interest: The authors declare that there is no conflict of interest.
v3-fos-license
2017-10-11T00:25:11.463Z
2007-03-01T00:00:00.000
24757539
{ "extfieldsofstudy": [ "Environmental Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.scielo.br/j/aabc/a/J7XpKjgXbsqPpqnSTcJ3CWg/?format=pdf&lang=en", "pdf_hash": "937aad5c5131b9cf22e3ad5ac4e372d54158b8d4", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:206", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "bd3d49fd11770721f6fd2a549d288df8e13dbc4b", "year": 2007 }
pes2o/s2orc
Groundwater resources in the State of São Paulo (Brazil): the application of indicators

Indicators, for groundwater resources, have mostly been employed to define the present status and the degradation tendency, regarding both quantity (under- or overexploitation) and quality (natural and anthropic contamination). This work presents the application of indicators in order to draw a picture of the groundwater resources situation in the 22 Water Resource Management Units (WRMU) of the State of São Paulo. The seven Indicators (I1 to I7) applied provide a general overview of groundwater dependence (I1, I2), availability (I3, I4), and quality (I5, I6, I7). Considering public supply (Indicator 1), one observes that 9 WRMUs show high (>50% of the population supplied by groundwater), 6, intermediate (49-25%), and 7, low (<24%) dependence on groundwater. Indicators 3 and 4 show that the resource still presents a great potential for further abstractions in most of the WRMUs, although there is evidence of overexploitation in the Upper Tietê, Turvo/Grande, and Pardo basins, and low availability in the Upper Tietê, Piracicaba/Capivari/Jundiai, and Turvo/Grande. Indicator 5 (aquifer natural vulnerability) denotes that the WRMUs 2, 4, 8, 13, 14 and 18-22 (part of the recharge area of the Guarani Aquifer System) need more attention, mainly where large contaminant loads are present. Indicator 6 shows the generally excellent natural quality of groundwater, although it also denotes that 3 WRMUs need special consideration due to chromium and fluoride contamination. Indicator 7 demonstrates a close relationship between groundwater contamination occurrence and the density/type of land occupation.

INTRODUCTION

With a population of 37 million inhabitants (93% in urban areas), a territory of 248,209 km², and the concentration of 36% of the country's GDP, São Paulo is the most populous and economically important State of Brazil. Its dependence on groundwater is demonstrated by the fact that of its 645 municipalities, 70% are totally or partially supplied by this resource. Although groundwater resources perform such an important role, little has been done in order to protect them. Limited knowledge about recharge of the aquifers, aim at (1) describing the situation, and (2) identifying the potentiality and constraints related to quality (natural and anthropic contamination) and quantity of the aquifers in those areas. They can be used as important tools of communication with the policy-makers and the public in general and, when associated with time series and reliable aquifer conceptual models, may be used to forecast likely future scenarios. The application of these indicators was useful for drawing a picture of the groundwater resources situation in its 22 Water Resources Management Units (WRMUs). WRMUs are operational units with a tripartite (State government, municipal governments and civil society) administration (water authorities), and correspond geographically to the main hydrographic basins of the State of São Paulo.
CHARACTERIZATION OF THE AQUIFER SYSTEMS OF THE STATE OF SÃO PAULO

The State of São Paulo is constituted of two hydrogeologic provinces where the aquifer systems are inserted, namely: (1) the Paraná Volcano-Sedimentary Basin, which encompasses the Bauru, Serra Geral, Guarani and Tubarão aquifer systems; and (2) the Eastern Massif of the Southeast, which encompasses the Precambrian, Taubaté, São Paulo and Shoreline aquifers. A short description of these aquifers can be found in Table I, and their occurrence in the State in Figure 1.

RECHARGE AND AQUIFER POTENTIALITY

Renewable groundwater resource corresponds to the "exploitable groundwater reserve", as used elsewhere, and is herein defined as the difference between the recharge and the discharge that maintains the minimum baseflow in the rivers. In other words, it consists of the maximum discharge that can be withdrawn in a watershed so as not to cause negative impacts on the subsurface and surface water bodies. The estimates of the reserves, for each WRMU, can be found in the Situation Reports of the water resources of the State of São Paulo (digital technical report, unpublished data) and are depicted in Table II. When considering these values, it is important to consider some limitations, such as: the methodology of calculation was not the same for all 22 WRMUs, precluding a direct comparison among them; and the recharge calculation did not take into account the urbanization effects, such as the impermeabilization and the losses of the supply and sanitation systems (excepting the case of WRMU 6, namely, the Upper Tietê Basin). The latter is significant in some units; for instance, in the Upper Tietê Basin it is estimated that the non-natural recharge reaches up to 13 m³/s, which is 31% larger than the exploitation that has been practiced in the basin. When one considers the current knowledge of the aquifer systems, some inconsistencies regarding the calculated exploitable resources (Table II) become evident. For instance, the excellent yields achieved in the Guarani Aquifer are not compatible with the calculated resources, which are a consequence of the underestimate of its recharge. The calculations should consider that the recharge has probably been raised by the systematic and extensive pumping, at least in some municipalities (Sracek and Hirata 2002). On the other hand, the largest estimated resources of WRMU 11 (Ribeira de Iguape/South Shoreline) seem to disagree with an aquifer system that consists mainly of crystalline Pre-Cambrian rocks (gneisses, granites and metasedimentary rocks), which do not allow a large rate of infiltration.

GROUNDWATER EXPLOITATION

The groundwater resources play a major role in the public water supply in the State of São Paulo. In about 50% of the municipalities, the majority concentrated in the Northwestern portion of the State, groundwater constitutes 75 to 100% of the water supply (Figure 2). The evaluation of the total exploited discharges for each municipality, which were compiled from the SEADE Foundation database (http://www.seade.gov.br, accessed on 15/10/04), demonstrates that the public supply system has exploited up to 18.3 m³/s. In the North, Central and Western portions of the State, the abstraction comes mainly from the Bauru Aquifer System (mainly Adamantina and Caiuá aquifers) and, for the largest cities, from the Guarani Aquifer System, where the well depths may be greater than 500 m (Table III).
Some known facts, with regard to private exploitation, are: the totality of the industries installed in the Metropolitan Region of Campinas (in WRMU 5) own wells; the total abstraction in the Upper Tietê Basin (WRMU 6), withdrawn by an estimated 7000 private wells, reaches 8 m³/s (for the year 2000); and even in the WRMUs that are constituted of crystalline Pre-Cambrian rocks, extensive abstraction for industrial supply and for autonomous household use is practiced. Despite these facts, little knowledge is available with regard to the groundwater volumes exploited by private wells. Furthermore, in contrast with the surface waters, the demands, according to different types of use, are more difficult to estimate. 7779 wells are registered in the databank of the DAEE (Department of Water and Electric Energy of the State of São Paulo); however, it is estimated that the State of São Paulo territory contains around 30000 currently active boreholes (G. Rocha, unpublished data), and several tens of thousands of dug wells and mini-wells (small diameter, shallow, and inexpensive). This situation clearly demonstrates the lack of control over the exploitation of groundwater by the government institutions. Despite this limitation, estimates of the groundwater exploited volumes, for each of the 22 WRMUs, are presented in the Situation Reports of these units (Table III); the sum of these estimates results in 41.8 m³/s. From this, around 18.3 m³/s (SEADE 2004) are used for public supply. In Table III, one can observe that the Upper Tietê (7.9 m³/s), Turvo/Grande (5.5 m³/s), Mogi-Guaçu (4.8 m³/s), Paraíba do Sul (3.6 m³/s), and Middle Paranapanema (probably 3.2 m³/s) show the greatest values of total abstraction. The abstractions in the other units are smaller than 1.5 m³/s.

GROUNDWATER NATURAL QUALITY

Groundwater in São Paulo State is, in general, of excellent quality, being potable and without restrictions for most uses. For the public supply, the only necessary Chromium is mostly observed in water of some deeper and heavily exploited wells of the Adamantina Aquifer in the São José dos Dourados, Turvo/Grande and Lower Pardo/Grande WRMUs. It is geographically limited to the Northwestern portion of the State of São Paulo (R. Hirata and G. Rodolfi, unpublished data, M.L.N. Almodovar, unpublished data, CETESB 2004). This is explained by the fact that the source of the sediments of the Adamantina Formation in this region was originated in the Triângulo Mineiro (MG), where minerals of chromium (e.g., chromite and some types of garnet) were available. Fluoride affects mainly the Guarani (Botucatu and Piramboia formations), Serra Geral, and, locally, Tubarão (Itararé Formation) aquifer systems. The origin of the fluoride is not yet clearly understood, although it is generally accepted that it is associated with major faults (Perroni et al. 1985). It occurs in one or more areas of the Pontal do Paranapanema, Tietê/Batalha, Middle Paranapanema and Lower Tietê WRMUs, although it generally affects only less than 5% of the whole area of each unit.

GROUNDWATER INDICATORS

The indicators allow the definition of the present status or trends for a specific quality or parameter of a defined area and also the comparison of different regions (UNESCO/IAEA/IAH/UNECE 2004). The most common use of indicators is describing the state of the resource, although regular measurements of them provide time series that can be used either for predicting future trends or responses to the management. They, therefore, act as an important communication tool for policy-makers and the public in general, and also permit evaluating the effectiveness of specific policy actions and subsidizing the development of new actions. An indicator value can also be compared to a reference condition and so it can be used as a tool for assessment.
Indicators, for groundwater resources, have mostly been employed in order to define the present status, on a regional scale, and the degradation tendency, with regard to both quantity (under- or overexploitation) and quality (natural and anthropic contamination). The formulation of the indicators herein presented was based partially on UNESCO/IAEA/IAH/UNECE (2004) and Vrba et al. (2005). As described in the following sections, specific combinations of the seven indicators provide a general picture of the WRMU with regard to three aspects, namely, (1) dependence, (2) availability, and (3) quality of groundwater (Table IV). Figure 3 synoptically represents these three aspects using simple and intuitive "smiling faces". As is depicted in Table V, each of the 22 WRMUs was classified in three categories of increasing attention, namely, observation, attention, and alert, according to its situation with regard to each Indicator.

INDICATORS OF GROUNDWATER DEPENDENCE

The formulae related to the indicators of groundwater dependence are illustrated in Table IV. Indicator 1 (I1) is concerned with the role of groundwater for public supply. Categories low, intermediate and high correspond to: less than 25%, 25 to 50%, and larger than 50%, respectively. 9 WRMUs show strong dependence on groundwater (supply of more than 50% of the population); 6, intermediate (49 to 25%); and 7, low (<24%) (Table V, Figure 2). The municipalities of the Western and Central portions of the State of São Paulo, whose population corresponds to 16.4% of the State, are strongly supported by groundwater abstraction. The most dependent basin is the Tietê/Batalha (91%), followed by Aguapeí (88%), Turvo Grande (78%), Pardo (69%), São José dos Dourados (66%), Peixe (61%), Tietê/Jacaré (61%), Pontal do Paranapanema (56%) and Lower Tietê (52%). All of these are in category high, while the other basins, which present a degree of dependency below 50%, are in categories intermediate and low (Table V). Indicator 2 (I2) expresses the degree of groundwater participation in water supply for all uses, that is, the abstraction of groundwater compared to the total (ground- plus surface water) abstraction. In the State of São Paulo, groundwater contributes 11% of the total water used. The majority of the WRMUs are in category low (I2 < 25%), three in intermediate (25% ≤ I2 < 50%), and one in high (I2 ≥ 50%). The major contributions of groundwater, according to the Situation Reports data, are found in São José dos Dourados (50%), Turvo/Grande (32%), Tietê/Jacaré (29%) and Middle Paranapanema (26%) (Table IV).
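A minimal sketch of how the dependence indicators and their categories described above could be evaluated for a single WRMU is given below. The low/intermediate/high thresholds follow the text; the population and abstraction figures are illustrative placeholders, not the official Situation Report data.

```python
# Sketch of the dependence indicators described above (thresholds from the text;
# the input values are illustrative, not official data). I1 = % of the population
# supplied by groundwater; I2 = groundwater abstraction as a percentage of the
# total (ground- plus surface water) abstraction.

def category(value_pct):
    """Low / intermediate / high classes used for I1 and I2."""
    if value_pct < 25:
        return "low"
    if value_pct < 50:
        return "intermediate"
    return "high"

def indicator_1(pop_supplied_by_groundwater, total_population):
    return 100.0 * pop_supplied_by_groundwater / total_population

def indicator_2(groundwater_abstraction_m3s, total_abstraction_m3s):
    return 100.0 * groundwater_abstraction_m3s / total_abstraction_m3s

if __name__ == "__main__":
    i1 = indicator_1(pop_supplied_by_groundwater=455_000, total_population=500_000)
    i2 = indicator_2(groundwater_abstraction_m3s=5.5, total_abstraction_m3s=17.0)
    print(f"I1 = {i1:.0f}% ({category(i1)}), I2 = {i2:.0f}% ({category(i2)})")
```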
INDICATORS OF GROUNDWATER AVAILABILITY

Indicator 3 (I3) (Table IV) relates the renewable groundwater resources, defined in the section "Recharge and aquifer potentiality", to the total population. The categories low, intermediate and high, for this Indicator, correspond to larger than 1500 L/inhab/day, 500 to 1500 L/inhab/day, and less than 500 L/inhab/day, respectively (Table V). In the State of São Paulo, an average discharge of 787 L/inhab/day is estimated. As one could expect, the Upper Tietê Basin (WRMU 6), with 93 L/inhab/day, is the most critical, followed by Piracicaba/Capivari/Jundiai (482 L/inhab/day) and Tietê/Sorocaba (433 L/inhab/day). Significant drawdowns of the potentiometric surfaces are reported for the Upper Tietê Basin (Hirata et al. 2002, Hirata and Ferreira 2001) and also for the Turvo/Grande (São José do Rio Preto municipality) and Pardo (Ribeirão Preto municipality) basins (São Paulo 2004a, b). Nevertheless, in many of the WRMUs, for instance Ribeira de Iguape (13973 L/inhab/day), Litoral Sul (3186 L/inhab/day), Upper Paranapanema (3186 L/inhab/day), Lower Pardo/Grande (3050 L/inhab/day), and Middle Paranapanema (2891 L/inhab/day), the availability is high, mainly due to the relatively low number of inhabitants in these regions. The present study did not take into account the potentiality of the Guarani Aquifer in its confined portion. This aquifer, as concluded by previous investigations, is the great groundwater reservoir of the State of São Paulo. Indicator 4 (I4) expresses how much water has been abstracted with regard to the renewable groundwater resources. Excepting the Turvo Grande (52%), Pardo (44%), Upper Tietê (41%), Mogi-Guaçu and Tietê/Jacaré (28%) basins, where the use is quite intense, the rest of them show values less than 20% (category low), the majority being less than 10% (Table V). For the Piracicaba/Capivari/Jundiai Basin, with acknowledged intense use of groundwater, the value of I4 is only 4%; however, I3, which takes into account the population of the basin, denotes that its situation is problematic (Table V). Investigations of potentiality, carried out by private consultants in order to meet the need of licensing large discharges, have demonstrated that, for small areas, the renewable groundwater resources are very limited in the Piracicaba/Capivari/Jundiai Basin. This situation demonstrates that the major problem of this basin is the great density of abstractions, due to the concentration of human occupation, on aquifers (Tubarão and Precambrian systems) whose discharges are limited by their hydraulic conductivity and storativity. In the Upper Tietê Basin, a recent study (Hirata et al. 2002) has concluded that the piezometric levels of the aquifers have been lowered and, consequently, there have been reserve losses, due to uncontrolled exploitation of groundwater. Nevertheless, I4 also reveals that, outside of the urbanized areas, this resource could be exploited 100% more than it currently is.
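The availability indicators can be sketched in the same spirit. I3 converts a WRMU's renewable groundwater resource (in m³/s) into L/inhab/day, and I4 expresses the current abstraction as a percentage of that renewable resource; the category thresholds for I3 follow the text, while the input figures below are hypothetical.

```python
# Sketch of the availability indicators described above. The WRMU figures used
# here are hypothetical placeholders, not the published estimates.

SECONDS_PER_DAY = 86_400
LITRES_PER_M3 = 1_000

def indicator_3(renewable_m3s, population):
    """Renewable groundwater resource per inhabitant, in L/inhab/day."""
    return renewable_m3s * SECONDS_PER_DAY * LITRES_PER_M3 / population

def i3_category(litres_per_inhab_day):
    """Categories as in the text: 'high' flags the most critical (scarcest) units."""
    if litres_per_inhab_day < 500:
        return "high"
    if litres_per_inhab_day <= 1500:
        return "intermediate"
    return "low"

def indicator_4(abstraction_m3s, renewable_m3s):
    """Share of the renewable resource already being abstracted, in %."""
    return 100.0 * abstraction_m3s / renewable_m3s

if __name__ == "__main__":
    renewable, population, abstraction = 20.0, 18_000_000, 8.0  # hypothetical WRMU
    i3 = indicator_3(renewable, population)
    i4 = indicator_4(abstraction, renewable)
    print(f"I3 = {i3:.0f} L/inhab/day (category {i3_category(i3)}), I4 = {i4:.0f}%")
```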
INDICATORS OF GROUNDWATER QUALITY

Indicator 5 (I5) (Table IV) considers the relative extension, in each WRMU, of areas of different vulnerabilities. Vulnerability is an intrinsic characteristic of an aquifer, and is defined as the susceptibility of the saturated zone of the aquifer to becoming contaminated, according to current potability parameters, by an anthropic activity (Foster and Hirata 1988). I5 does not consider the interaction of the vulnerability with the potential contaminant loads, for the reason that there is not an up-to-date evaluation of their distribution. The GOD method (Foster and Hirata 1988) was applied for the mapping of the vulnerability of the aquifer systems of the State of São Paulo at the 1:500,000 scale (Hirata et al. 1997). I5 points out that the most vulnerable WRMUs correspond to Pardo, Tietê/Jacaré, Lower Tietê, Aguapeí, Pontal do Paranapanema, Paraíba do Sul, Peixe, São José dos Dourados, Sapucai/Grande and Upper Paranapanema. One important area of high vulnerability corresponds to the recharge zones of the Guarani Aquifer System, especially in the WRMUs of Pardo, Mogi-Guaçu, and Upper Paranapanema (Table IV). Therefore, detailed studies of existent contamination and careful evaluations, when considering the installation of future activities, should be carried out in these regions. The vulnerability of the WRMUs totally located in crystalline terrains is not defined. Indicator 6 (I6) (Table IV) denotes the total area where the natural quality of groundwater is not in accordance with drinking water standards. In the State of São Paulo, the most common elements, related to the natural solubilization of minerals of the host rock by the percolation of groundwater, are fluoride and total chromium, as toxic components, and iron and manganese, as aesthetic parameters. In the present study, only the first two are considered. According to I6, the natural quality of the groundwater of the State is, in general, excellent; however, contamination by fluoride is found in the Paranapanema, Tietê/Batalha, and Middle Paranapanema basins; by chromium, in São José dos Dourados, Turvo/Grande, and Lower Pardo/Grande; and by both components in Lower Tietê. The greater values of I6 were found in São José dos Dourados (20%), Turvo/Grande (19%), Lower Tietê (17%) and Piracicaba/Capivari/Jundiai (11%), which were classified as category high (Table V). Since the actual extension of the areas where the contamination occurs is not known, the calculation of this Indicator took into account the total area of the municipality where the contamination was detected. Indicator 7 (I7) relates the number of contaminated groundwater sites to the total area of the WRMU. The main problem related to this indicator is the lack of information. A government program for detecting groundwater contamination sites in the State of São Paulo is new, and up to now, few of them have been studied in detail. The inventory of contaminated areas, elaborated by CETESB, responsible for the State environmental control, reports, up to November of 2004, the existence of 1366 confirmed cases of contamination (among tens of thousands of potentially contaminant sources), of which 931 were caused by fuel stations, 237 by industries, 61 by solid waste disposal, 92 by trade associated activities (including storing and handling of hazardous products), and 15 by accidents of unknown origin (http://www.cetesb.sp.gov.br, accessed on 05/05/05). The majority of the contaminated sites is located in urban areas of the Upper Tietê Watershed (725 cases), followed by the Piracicaba/Capivari/Jundiai Basin (182 cases).
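For completeness, the two quality indicators that rely on simple ratios can be sketched as follows; the areas and number of contaminated sites used here are hypothetical and serve only to show the arithmetic.

```python
# Sketch of the quality indicators described above (inputs are hypothetical).
# I6 = share of the WRMU area where natural groundwater quality exceeds
# drinking-water standards; I7 = density of confirmed contaminated sites.

def indicator_6(area_not_potable_km2, total_area_km2):
    return 100.0 * area_not_potable_km2 / total_area_km2

def indicator_7(contaminated_sites, total_area_km2):
    """Confirmed contaminated sites per 1000 km2 of WRMU area."""
    return 1000.0 * contaminated_sites / total_area_km2

if __name__ == "__main__":
    # Hypothetical WRMU of 20,000 km2 with 3,800 km2 of affected municipalities
    # and 180 confirmed contamination cases.
    print(f"I6 = {indicator_6(3800, 20000):.0f}%")
    print(f"I7 = {indicator_7(180, 20000):.1f} sites per 1000 km2")
```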
CONCLUSIONS

The indicators herein proposed, together with the available information for each WRMU, are suitable to evaluate the current situation of the groundwater in the State of São Paulo. Three different combinations of these indicators provide a general picture of three aspects, namely, current (1) dependence, (2) availability and (3) quality of groundwater. The dependence on groundwater in the State of São Paulo is remarkable and can be evaluated by indicators 1 and 2. With regard to the public supply (Indicator 1), one observes that 9 WRMUs show high (more than 50% of the population is supplied by groundwater), 6 intermediate (49 to 25%), and 7 low (less than 24%) dependence on groundwater. The largest demand is geographically located in the North, Central and Western portions of the State. On the other hand, considering the supply of groundwater for any purpose (Indicator 2), the dependence on groundwater is expressively smaller: only one WRMU (Middle Paranapanema) is in the category high, with three and 18 WRMUs in categories intermediate and low, respectively. The groundwater availability is assessed by indicators 3 and 4. Indicator 4 points out that the resource still presents a great potential for further abstractions in most of the WRMUs. However, the Upper Tietê, followed by the Turvo/Grande and Pardo basins, shows evidence of overexploitation and needs special attention. In the specific case of the Upper Tietê Basin, followed by Piracicaba/Capivari/Jundiai and Turvo/Grande, the volume of water divided by the population (Indicator 3) clearly denotes the low availability. The quality of groundwater is demonstrated by indicators 5, 6 and 7. In WRMUs 2, 4, 8, 13, 14 and 18 to 22, more than 10% of the territory is of high vulnerability, which denotes that they need to be more carefully considered when large potential contaminant loads are present or planned to be installed. Some of these basins contain part of the recharge area of the Guarani Aquifer System, the most productive aquifer in Brazil. Indicator 6 points out the generally excellent natural quality of groundwater, although it also denotes that some WRMUs (São José dos Dourados, Turvo/Grande, and Piracicaba/Capivari/Jundiai) need special consideration with regard to chromium and fluoride concentrations. The majority of the cases of contamination caused by human activities is concentrated in urban areas. Although good-quality groundwater resources in the State of São Paulo are fairly abundant, there are some specific areas where the WRMUs are currently facing problems such as overexploitation (strong drawdown in particular urban areas), low availability (related to population concentration) and contamination (natural and anthropic). In this way, the indicators, which can be easily understood by policy-makers, represent an important tool for identifying areas that should be either prioritized for detailed studies or addressed in a preventive way.
Fig. 1 - Main aquifer systems in the state of São Paulo.
Fig. 3 - Example of material for public awareness using groundwater indicators.
TABLE II Area, precipitation, renewable groundwater resources for each Water Resource Management Unit (WRMU) of the State of São Paulo.
TABLE III Groundwater use for the State of São Paulo (SEADE 2004, modified).
v3-fos-license
2020-07-30T02:04:29.478Z
2020-07-24T00:00:00.000
221761002
{ "extfieldsofstudy": [ "History" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIV-M-1-2020/1081/2020/isprs-archives-XLIV-M-1-2020-1081-2020.pdf", "pdf_hash": "2c9746d80b49f7796051730d4bf21d11e887c848", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:207", "s2fieldsofstudy": [ "History" ], "sha1": "1ced38ff1cae812b40a45f1415131618923b7a83", "year": 2020 }
pes2o/s2orc
The use of traditional mud-based masonry in the restoration of the iron age site of salŪt (Oman). A way towards mutual preservation : The archaeological record of the Sultanate of Oman speaks of the use of mudbricks (adobes) and mud plaster as key building materials over a long chronological range from the Early Bronze Age (late 4th / 3rd millennium BC) to the Late Iron Age at least (first centuries BC). Traditional earthen architecture perpetuated this scenario until modern times when the discovery of oil brought along deep transformations in the local economy and way of living. This long-lasting tradition has provided the necessary means to cope with the problem of mudbrick structures conservation on the prominent archaeological site of Salūt, in central Oman, where substantial mudbrick walls were discovered, dating to the second half of the second millennium BC and beyond. In fact, exploiting the life-long experience in mud-based masonry of a local mason turned out to be the best (and arguably only) way of consolidating and protecting the ancient structures. This strategy not only is definitely a sustainable one, as only readily accessible and largely available natural materials were employed, but it also helps to revive a locally rooted skill that seriously risks being forgotten due to the lack of interest in INTRODUCTION The prominent Iron Age (c. 1300-300 BC) site of Salūt, in central Oman, has been the focus of the research of the Italian Mission to Oman (IMTO) of the University of Pisa from 2004 to early 2019. Since the early years of this progressively expanding project, conservation and possible reconstruction of the ancient monuments have been among the main aims, with the specific goal of developing a presentation of the site to the wider public and to foster locals' appreciation and involvement. While initial work mainly dealt with dry stone walls, the restoration and conservation of mudbrick structures became more and more essential as wider portions of the site were unearthed and revealed mudbrick floors and walls preserved to remarkable heights. A multidisciplinary approach, primarily including stringent collaboration between architects and archaeologists, has been instrumental in achieving satisfactory results. Since this project aims to follow UNESCO guidelines an in-depth study and discussion of the original nature, materials and dimensions of the discovered structures is, in fact, essential. This approach provided the basis for implementing a whole series of new technologies for digital documentation and elaboration that have developed with unprecedented rapidity over the last few years, namely including digital photography, 2D photogrammetry and structure-from-motion 3D reconstructions that often used aerial footage as raw data. The restoration process could thus build upon this comprehensive work of documentation and understanding of the ancient architectural remains. Fundamental to our idea of restoration was the use of widely and readily available materials that need not be sourced far from the site and are, therefore, highly sustainable both economically and in terms of resource consumption; not to mention their correspondence with their ancient counterparts. Moreover, the nature of the work itself resulted in being best for the application of the traditional, mudbased masonry, the survival of which is strongly endangered by the gradual loss of interest of the young generations. 
The ancient oasis of Salūt and the archaeological background for earthen construction in SE Arabia Mud-based construction (including mudbricks and mud-based plaster and binder), together with stone building, has represented the key building technique in South East (SE) Arabia since at least the late 4th/very early 3rd millennium BC and up until the second half of the 20th century CE when the economy and daily life of the Arabian and Gulf countries was revolutionised by the discovery of oil. Located in the heart of the Oman Peninsula, the ancient oasis of Salūt bears witness to 5000 years of human occupation at the least and hosts evidence related to the main periods of SE Arabian history (Degli Esposti, 2015;. As such, it offers the possibility of examining the evolution of earthen architecture throughout this long time frame. The first period of intense occupation at Salūt can be dated back to c. 2500 BC (Early Bronze Age), when three monumental so-called 'tower' sitescharacteristic of the periodwere established in the plain. The architecture of these sites entailed the use of large to megalithic blocks for the main and outer walls, while mudbrick architecture was used (in some cases) for inner partitions. In other areas of SE Arabia, where stone is not available for construction, even the outer structures of these monuments were erected with the use of substantial quantities of mudbricks. After a period of reduced human settlement (Middle Bronze Age, c. 2000-1300, the Early Iron Age (c. 1300-300 BC) saw a remarkable demographic increase in the region and new settlements appeared in large numbers. At Salūt, after a few centuries of apparent abandonment, the establishment of a prominent Early Iron Age site on and around one small hill in the middle of the plain involved the extensive use of mudbrick features, which were the focus of the restoration programme discussed here. Walls were built either of mudbricks above a stone base or entirely of mudbricks, bound with clay and originally plastered with mud-plaster. Floors were also often made with mudbricks. Mudbricks and brick fragments were also used in the buildup of the supporting structure for the huge fortification that formed the elevated part of the site (Degli Condoluci 2018). Earthen architecture continued to be in use in the region even after the end of the Early Iron Age and has represented the main construction technique until modern times when cement became widely used. Several traditional villages are still visible, some still at least partially inhabited, that document this long-lasting tradition. In the Salūt oasis, post-Early Iron Age occupation is mainly borne out by a Late Iron necropolis comprising underground chamber tombs with stone and mudbrick walls. Later, during the Islamic period, testimony of earthen architecture is only provided by graves, while domestic features are lost. MULTI-MEDIAL DOCUMENTATION AND VIRTUAL RECONSTRUCTION AS COMPANION TOOLS FOR THE RESTORATION PROCESS In the last few years significant improvements in architectural and topographic surveys have taken place allowing the combined use of the classic topographic ground-point technique with large-scale photography for the creation of 3D models. Specifically, the great versatility in the use of point clouds made possible by the use of drones and professional cameras permits modelling a range of subjects spanning entire areas to small objects 'in situ', i.e., easily shifting from the macro to the micro-scale. 
By exploiting these opportunities, the excavation areas at Salūt were surveyed daily through wide-area and detailed flights, adding to the aerial footage, and thus obtaining the results of field-level photographic documentation. It is clear that this kind of approach has significantly improved the efficiency of the archaeological workflow, as excavation can proceed quickly without waiting for the long sessions of traditional field topography and manual drawing. Direct work on the field remains, however, essential for providing the coordinates of the ground-points. Furthermore, through the use of digital modelling tools such as Digital Terrain Model (DTM) and Digital Elevation Model (DEM), it was possible to remove redundant data from the terrain models, thus highlighting the structures that would conversely be difficult to visualize in their complexity even after accurate field survey. Terrain modelling also allowed for a reliable assessment of the slopes affected by the water flows, facilitating the study for the construction of drainage channels with a low visual impact, compatible with the aspect of the site. Together with the use of point clouds-based modelling, video footage was identified as a useful tool for creating a database aimed at the preservation of the technical know-how implied in the hand-made realisation of mudbrick and plaster, now at risk of disappearing. In fact, by implementing a sustainable approach to the conservation of mudbrick walls and floors, over the years we have gained an in-depth view of the local, traditional mud processing techniques for the making of the plaster to protect the walls. To this end, we accurately documented the work of our local mason by recording some interviews and shooting videos of every step of the production process, taking care to detail the use of the different tools together with the local terminology associated with them. Once re-organized and edited, this material could also support the dissemination of the site. Figure 5. The different tools used throughout the mudbrick production process and their transliterated (local) Arab name. An accurate documentation is essential for the planning of conservation work and for the implementation of the guidelines for subsequent maintenance. Moreover it becomes an integral part of the site and a tool for understanding its past, its present and the future needs. THE CONSERVATION STRATEGY FOR SALŪT Restoration activities have been carried out on both stone walls and mudbrick walls, obviously following different and specific procedures, while adhering to the basic tenets of the modern conservation, such as the UNESCO and ICOMOS, guidelines: -Promoting minimum intervention: summed up by the maxim 'do as much as necessary and as little as possible'. -Ensuring the reversibility of interventions: being able to reestablish the previous condition avoiding irreversible interventions. -Making the intervention recognizable and respectful: as similar as possible to the original, without running into a fake original (Petzet, 2004). For mudbrick elements, specific procedures have been adopted to consolidate masonries, floors, foundations or simply to remake surfaces, using local traditional techniques and materials suitable and found on-site. All the operations were materially carried out exclusively by Mr. Massaoud Al-Khīarī, an old Omani craftsman, expert in traditional mudbricks and mudplaster working, with the coordination of the architectural conservation team. 
All interventions on the site were scheduled according to work phases. In the first phase, a general view of the nature of the area was acquired and the condition of the structures brought to light during the previous excavation campaigns was surveyed in order to assess the current state of degradation and identify critical restoration and conservation issues (Bizzarri, 2015). Decay and damage As is well known, earth is one of the most widely used construction materials in the world, but it is also one of the most vulnerable, and the heterogeneity of earthen materials and construction systems makes it difficult to classify general decay processes and their related treatments (Rainer, 2008). The second phase of the restoration work entailed understanding the causes of deterioration, with the aim of producing a decay map, a necessary step before a comprehensive intervention programme and a conservation plan for the site. The causes of deterioration of these structures can be classified as intrinsic when they are associated with the materials' composition or with the construction technique, and extrinsic when external factors such as water, wind and other environmental and contextual factors play a role. The most common types of deterioration observed on the mudbrick structures of Salūt seem to be related to wind and rain erosion (extrinsic factors). Aeolian erosion is further facilitated by the absence of roofs or shelters and, likewise, unanticipated heavy rains can fall upon the poorly protected structures, thus accelerating decay. Generally, such damage tends to occur at the top of the wall (Doat et al., 1983) and over the entire surface, where erosion occurs in the form of non-structural damage such as detachment, disaggregation, flaking and cracking. At the bottom of the wall, where water penetration/infiltration and/or rising damp create basal erosion, the inefficiency of the protective coating can cause structural damage leading to upper wall displacement, leaning and collapse (Rainer, 2008). Water from flash-flooding is certainly the major factor causing deterioration at the base of the walls, while capillary rise and rain prompt erosion along the top of the walls, thus creating deep ridges and gullies. The study of the site and the analysis of the different causes of degradation led to two ways to minimize erosion and protect the earthen structures: -Modifying the slopes of the site in order to get rid of standing water and minimize surface runoff. -Insulating the structures against the extrinsic factors by protecting the exposed surfaces with materials such as soil, mudbricks and mud-plaster (Dehkordi et al., 2008). First of all, the issue of rainwater drainage was tackled by identifying the naturally formed channels through which water tends to drain. These channels were then more neatly shaped and filled with drainage material, allowing the water to flow downstream and, at the same time, not altering the visual appearance of the site (Bizzarri, 2015). Conservation and traditional methods As regards the conservation of mudbrick structures, two different cases were identified: 1. Collapsed mudbrick masonry, where only the foundation row survived, generally found in a fair state of preservation. 2. Mudbrick masonry intact over its full original height, even if not in a good state of conservation.
In order to achieve the conservation of the discovered structures and to provide a clear explanation of the site to the wider public, restoration also included partial or total reconstructions. The first step was the set-up of a mudbrick production area near the site, also including the preparation of the mud-and-straw mortar. The latter was continuously mixed and kept wet so as to be ready to be brought on the intervention place when needed. In fact, suitable mud, mixed with straw, needs to be stirred for a couple of days; it is then brought on-site and mixed again with additional water to achieve the right consistency. The mixture is then spread, coat after coat, onto the mudbrick walls. The sun dries the plaster in two/three days, giving it a solid structure and a light brown colour (Bizzarri, 2015). Mudbricks were prepared in advance to be used in integrations or reconstructions and the 'soft' nature of the mudbricks has allowed the mason to cut them to the required, smaller dimensions as the work proceeded, in order to: -Ensure physical compatibility between the original and the restored section and sufficient thickness to protect the original section from erosion (Dehkordi et al., 2008). -Increase the surface of bricklaying. -Make the intervention 'recognizable and respectful'. The restoration of the mudbrick walls and floors The restoration of the collapsed mudbrick walls followed these steps: 1. Preliminary cleaning activity, manual removal of vegetation and soil. 2. Manual removal of unstable and collapsed mudbrick portions to provide a solid basis for reconstruction. 3. Cleaning of the base and placing of a geotextile layer to mark the separation between the old part and the reconstructed one. 4. Rebuilding of the missing part of the walls with new mudbricks made on-site. 5. Plastering of the surfaces with mud-and-straw layers to protect the underneath structure. In particular, for steps 3 and 4 the layer of geotextile was covered with a layer of compressed earth, followed by layers of mudbricks and compressed earth up to the required height and the top of the wall was covered with a mud-and-straw plaster layer sloped at the efficient incline to shed water. In order for the mudbrick masonries to remain intact over all of their height, the restoration process included: removal of the old and deteriorated mud plaster, cleaning of the surface of the mudbrick structure, filling of possible lacunas with small and medium stones set in the same mud used for plastering, and spreading of new mudand-straw plaster, coat after coat, on the wall. The same materials and methods were used for the mudbrick floors. First, the surface of the floor was levelled using compressed earth. This was covered with a layer of geotextile topped by an additional layer of bricks. A thin layer of soil was then used to fill the gaps between the mudbricks and to create a slight slope necessary to divert surface water, when possible. Despite mudbrick structures being recognized as very difficult to preserve, at Salūt the apt strategy to handle this problem was set up involving traditional skills deeply rooted in local house building, using local and sustainable materials and plastering the walls with fresh mud prepared near the site, thus using the same sandy clay used for the original mudbricks and mud plaster. This procedure has proven to be extremely efficient, particularly when connected with the definition of a longer-term maintenance plan for the site. 
In fact, the lack of regular maintenance is one of the main factors that accelerate the decay of earthen structures. However, thanks to this conservation strategy, restored structures suffered no significant damage from the heavy rains that occurred at the site in recent years. EMPIRICAL TESTS ON PLASTER MIX: ACHIEVING AN ADEQUATE INTERPRETATION FOR THE CONSERVATION OF THE EARTHEN STRUCTURAL HERITAGE OF THE SITE Plastering had to follow the protocol established by the IMTO, which comprises some constraints: -The new interventions should be clearly distinguishable from the original parts of the structures. -Traditional technologies should be employed, thus using locally available resources. -Results should reduce the need for maintenance works. To meet these purposes, a series of plastering tests were conducted, aiming to: -Achieve a good match with the composition of the original material. -Obtain an evapo-transpiration capacity similar to that of the original material. -Obtain a strength similar to (or in any case not higher than) that of the original material. The ultimate goal of the research was to obtain a mud-based mix suitable for the production of plaster and mortar that could be stabilized with a lower quantity of straw than the mix produced by our mason, in order to minimise its aesthetic impact on the finishing. The absence of a dedicated budget, time constraints and the limited diversity of locally available materials affected the number and quality of the tested combinations. The soil was sampled from different locations on the site with the aim of providing useful data to identify different compositions and to verify its suitability as a construction material. Three macroscopically different soils were sampled and cleaned of organic elements: T1 from the northeastern area of the Iron Age settlement; T2 from the western area; and T3 from the lower slope of the hills standing northeast of the Iron Age site, where Bronze Age tombs were being excavated. Two kinds of soil were also sampled from the area surrounding the archaeological sites: 'soil S', which sedimentation analysis (Figure 1) indicated was composed of 100% fine sand, with a reddish-golden colour; and 'soil G', composed of 52% gravel, 32% sand and 16% loam, with a grey colour. The following preliminary analyses were carried out on samples T1, T2 and T3: -Visual examination: appearance, colour, size of the components. -Tactile examination: testing the behaviour of the soil to check whether it is sandy (rough and abrasive to the touch), silty (fine and easy to pulverise), or clayey (difficult to break, fine and sticky). -Hand washing test: the more clayey the soil is, the more difficult it is to clean off one's hands after handling it. -Cohesion and resistance test: forming a sphere of 6 cm diameter that, once dried, is dropped to the ground from a known height. -Consistency test: forming a 2,5 cm diameter 'cigar' with moist soil and letting it hang over an edge little by little. If the cigar breaks between 10 and 15 cm (and in any case above 5 cm), the earth is suitable for building. If breakage occurs below 5 cm, the earth is more sandy; conversely, if it breaks above 15 cm the soil is too clayey ('fat') and the addition of a temper is necessary (Figure 7). -Shrinkage and crack test: laying a layer of soil on a flat surface and checking the number and extent of the cracks that form during drying.
Stone surfaces of about 20x15 cm were chosen for this test and an earth layer 2,5 -3 cm thick was laid down above them and kept wet over the first two-three hours by frequently spraying water. Samples were then covered with a plastic sheet for onetwo days to face the hot temperature on the site (it must be noted that this last operation is clearly not possible when actually using the plaster for construction, while repeated pouring of water is commonly carried out by our mason). The tests indicate that T1 comprises loam as the main component plus a low percentage of clay. All the tests confirm a certain degree of cohesiveness due to its water permeability. For this reason, it is subject to swelling and shrinkage. A bad performance in the consistency test confirmed the insufficient content of binder (i.e., clay) in its composition. Overall, this soil is not suitable for construction without any addition of corrective components. Tests on soil T2 indicate it is composed mainly of clay with a low loam content. Although similar to T1 it is subject to swelling and shrinkage, due to a high degree of cohesiveness linked to water permeability. The consistency test confirmed the prevailing percentage of binder in its composition, which makes this soil 'fat' and not suitable as a construction material without the addition of any sort of temper. Finally, soil T3 showed it comprised a high percentage of loamsand with a low percentage of clay. All the tests show that this soil is also subject to shrinkage, although less extensive than the other samples. However, this soil also needs the addition of balancing components in order to become suitable as a construction material. The following step was the 'correction' of the analysed soil composition with the addition of natural tempering material available on the site. Different mixtures were tested in order to understand if performance improvements could be achieved by balancing the components. The tests were made using 27x57 cm mudbricks as standard surface, scrubbed clean and moistened before each application to prevent water from being drained out of the plaster layer into the brick. The layers were forcefully applied by hand in order to increase adherence of the plaster to the brick surface. Even in this case, test layers were sprayed with water once or twice during the first hours after being laid down and covered with a plastic sheet for one-two days. The results of these tests were not optimal but highlighted the necessity to increase the sand component in sample T2, which can conversely be used as a binder due to its high clay content. Further tests were made to improve the quality of earthen plaster by balancing the lacking (sandy) components. In fact, better performance of mud-based plaster and mortar is achievable by controlling the quantity of the sand fraction in the soil, for example keeping a ratio of no less than three parts of sand to one of clay, for example. This helps reduce cracks without compromising cohesion (Ruskulis, 2009). New samples were prepared: -Sample 1b, made mixing 1 part T2+3 parts S. Once dried, cracks could be completely closed with a simple, manual surface friction treatment with water. On part of the sample a second, very fluid coat was applied so as to experiment different a finishing. This sample showed a weak cohesion and a poor resistance to washing; -Sample 3b, made mixing 1 part T2+3 parts G. Cracks were completely closed with the secondary manual treatment with water. 
This sample showed good cohesion and reasonable resistance to washing; -Sample 4b, made mixing 1 part T2 + 1 part S + 1 part G and tempered with a small amount of straw and gravel (0 mm < ϕ < 20 mm), in a way that their presence was not visually predominant on the surface finishing but could nonetheless help avoid cracks while increasing cohesion and resistance to washing away. Cracks were completely closed with the secondary manual treatment with water. This sample showed good cohesion and reasonable resistance to washing, together with an ideal surface finishing and colour. For such reasons it was selected for further tests. Additional improvement of the plaster's stability could be achieved by adding small quantities of other binders, as in the following test samples: -Sample 5b, made mixing 1 part T2 + 1 part S + 1 part G and tempered with one small trowel of grey Portland cement, gravel (ϕ < 2 mm) and a little straw. Cracks were not completely closed with the secondary hand treatment with water. This sample did not show good cohesion or resistance to washing. Moreover, the colour of the finishing turned too greyish to be suitable for our purpose; -Sample 6b, made mixing 1 part T2 + 1 part S and tempered with 1 part of industrially produced sarooj, gravel (ϕ < 2 mm) and a little straw. The sample showed deep cracks with remarkable detachments and, unexpectedly, it did not show particularly good cohesion or resistance to washing, at least after some days; -Sample 7b, made mixing 1 part S tempered with 1 part of sarooj, gravel (ϕ < 2 mm) and a little straw. The sample presented cracks with small detachments and did not show good resistance to washing. It was nevertheless selected for further tests, with the addition of a gravel temper. The final step was implementing the tests on the large, vertical surface of a recently built mudbrick wall. Traditionally, mud plasters are often applied in one coat both internally and externally. If applied in two coats, the first can contain more clay, even if this leads to the development of more cracks, while the second is sandier and is applied in a thinner layer. This second coat will close the cracks in the first one, provided the surface has been lightly wetted before plastering. The more sustainable method at Salūt appeared to be the use of two coats, considering the availability of clay soil on the site and the scarcity of sand. First, the wall surface was cleaned and moistened. The first plaster coat was forcefully applied by hand with a thickness of 20 mm, in order to create a naturally wrinkled surface to provide good bonding for the second coat. It was important that the plaster was sprayed with water two or three times a day during the first days to reduce cracking. Plastic sheets were also used to shade the wall.
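As a purely illustrative aside (not part of the documented procedure), the "no less than three parts of sand to one of clay" guideline cited below can be checked for a candidate blend with a short sketch. The fractions used for soils S and G follow the sedimentation results reported above; the breakdown assumed here for the clayey soil T2 is hypothetical, since the text only describes it qualitatively.

```python
# Illustrative sketch only: check a candidate plaster blend against the
# "at least 3 parts sand to 1 part clay" guideline mentioned in the text.
SOILS = {
    "T2": {"gravel": 0.00, "sand": 0.10, "loam": 0.20, "clay": 0.70},  # assumed breakdown
    "S":  {"gravel": 0.00, "sand": 1.00, "loam": 0.00, "clay": 0.00},  # from sedimentation test
    "G":  {"gravel": 0.52, "sand": 0.32, "loam": 0.16, "clay": 0.00},  # from sedimentation test
}

def blend(parts):
    """Weighted component fractions of a mix given parts by volume, e.g. {'T2': 1, 'S': 1, 'G': 1}."""
    total = sum(parts.values())
    return {
        comp: sum(parts[soil] * SOILS[soil][comp] for soil in parts) / total
        for comp in ("gravel", "sand", "loam", "clay")
    }

mix = blend({"T2": 1, "S": 1, "G": 1})            # the 1:1:1 base used in sample 4b
print(round(mix["sand"] / mix["clay"], 2))        # ~2.0 with the assumed T2 breakdown,
                                                  # i.e. still short of the 3:1 guideline
```

With the assumed T2 composition, the 1:1:1 blend stays below the 3:1 sand-to-clay target, which is consistent with the observed need for straw and gravel tempers to control cracking.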
Based on the results of the tests described above, the following mix was used for the wider-area plaster tests: -Sample A, coat 1: (Figure 10, top left): 1T2+ 1S+ 1G(0<ϕ<2mm)+ 1/8 straw approx.+ 1/8 gravel (2<ϕ<20mm) approx; This was compared with a sample with the same composition as the one normally used by our mason in the restoration works: -Sample B, coat 1: made of 1 T2+ 1/2 straw approx. For the second coat, a mix with a majority of sand content was used and only applied when the first coat was dry, with a thickness of 10-15 mm and then smoothed by hand to get an even finish. Also, in this case, the plaster was accurately kept wet for a few days and shaded to reduce cracking, especially for samples that tested the addition of binders such as sarooj and chalk. In fact, samples made by experimenting the use of sarooj, chalk and cow dung were made to understand their waterproofing contribution and they were selected because these are traditionally used easily available materials. For all these samples small cracks were fixed with one surface friction treatment once the plasters were dry. All the applied techniques and the tested recipes for the rendering and plastering on the wall gave satisfying results both in terms of finishing and performance. Such tests confirm the fact that improving the mud-based plaster mixes by controlling the quantity and the quality of the sand fraction in the local soil and adding specific tempers, provides a perfectly suitable product for the protective coating of earthen heritage structures. Moreover, these mixes fulfil the aims of using locally available materials and respect the physical, hygroscopic and aesthetic characteristics of the original materials while using and perpetuating the traditional techniques. For such reasons, this protocol represents a valid alternative to the use of the simple clay and straw mix, previously used in the restoration works at the site. THE IMPORTANCE OF TRADITIONAL TECHNIQUES: RECOVERY AND DIFFUSION Earthen architecture is one of the most original and powerful expressions of man's ability to create a built environment with readily available resources. Attention to cultural roots, conservation and care of the built heritage are constitutive elements for the harmonious development of modern Omani society and are also the prerequisites for future life. In creating the balance between past and future it is possible to allow this unique cultural landscape to regain its meaning and serve its inhabitants to their advantage. The loss of oral history Even in the twenty-first century, despite the development of modern educational methods, oral narrations should continue to be an important tool in handing down the know-how of traditional construction techniques, architecture and material culture. Without doubt, oral history has to be considered a primary source for historical information. It is, therefore, urgent to learn (and store the information) from old Omani craftsmen, the only holders of the knowledge related to these traditional mudbrick-making techniques, before this wisdom fades away without any new apprentices following in their wake. Against this background, the comprehensive, multi-media documentation work of all the phases and techniques involved in traditional mud-based architecture that was mentioned above takes on further relevance. 
In Oman, traditional tales, and with them the theory underlying material culture, were until modern times passed on from person to person and, generally, taught directly by old craftsmen to younger ones, who would become the co-creators of the conservation messages to be passed on (Ogega, 2011). However, since the 1970s, the civil and cultural revolution (the 'Omani Renaissance'), led after years of obscurantist isolation by Sultan Qābūs bin Saʿīd Āl Saʿīd, has allowed the formation of what is called 'modern Oman', promoting a policy of modernisation and tolerance that led the Sultanate to have one of the highest development rates in the world. For millennia, the socio-economic model based on inland oases constituted the backbone of Omani society, as shown by evidence dating as early as the 3rd millennium BC. The above-mentioned transformations in everyday culture, especially in the internal areas of the country where settlement was tightly connected with the existence of such oases as an indivisible combination of housing and work, have caused the partial or total abandonment of historic villages and, consequently, the irreparable loss of the need to maintain traditional houses made of mudbricks and protected by mud-and-straw plaster (Mershen, 2010). Given this scenario, the younger generations are no longer encouraged to learn the traditional building techniques, even going so far as to refuse their teaching, as masonry is seen as a profession that is no longer dignified. CONCLUSION This paper illustrates the process that led to a defined conservation strategy for Salūt, a key site for Southeast Arabian prehistory, through a multidisciplinary approach, primarily via collaboration between architects and archaeologists. This process started with an accurate photogrammetric documentation phase and, step by step, arrived at an awareness of the use and improvement of traditional techniques for providing the best intervention solutions and defining a feasible plan of systematic maintenance. Indeed, preventive maintenance, along with the procedures adopted during excavation, is a key factor in any large-scale conservation plan. For this reason, one of the most important aspects of the project has been the empirical experiments on plaster mixes, to be tested over a reasonable period of use before being adopted on a wider scale. This will certainly be one of the future aims. The reintroduction of traditional earthen techniques, exploiting the life-long experience in mud-based masonry of a local mason, turned out to be the best way to revive knowledge of this technique, with its associated materials and local terminology. This conservation strategy, firstly, is highly sustainable and, secondly, by recording interviews and shooting videos of every step of the work of a local mason, it encourages younger Omanis to study and re-appropriate a traditional heritage that is rapidly disappearing. This material can also be used in the future to disseminate public information on the site. Reviving traditions is a key strategy for site management and community involvement. Thus, this comprehensive restoration programme has given positive results in several directions and could provide guidelines for promoting and defining the recovery of archaeological sites across the whole Sultanate.
v3-fos-license
2017-08-03T01:02:20.505Z
2017-03-02T00:00:00.000
19218821
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://gsejournal.biomedcentral.com/track/pdf/10.1186/s12711-017-0302-9", "pdf_hash": "c5e865bd239d1d2a830c5c5f0751ad2a99e97beb", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:208", "s2fieldsofstudy": [ "Mathematics" ], "sha1": "c5e865bd239d1d2a830c5c5f0751ad2a99e97beb", "year": 2017 }
pes2o/s2orc
Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance–covariance matrix Background An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance–covariance matrix. However, obtaining the prediction error variance–covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance–covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance–covariance matrix of estimated contemporary group fixed effects. Results In this paper, we show that a correction to the variance–covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance–covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance–covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Conclusions Our method allows for the calculation of a connectedness measure based on the prediction error variance–covariance matrix by calculating only the variance–covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced. Electronic supplementary material The online version of this article (doi:10.1186/s12711-017-0302-9) contains supplementary material, which is available to authorized users. Background A goal of genetic evaluation is to predict genetic merit, while optimising accuracy and minimising bias. Ideally, a breeder of seed stock should be able to compare all individuals in an evaluation irrespective of contemporary group. This is problematic when there is little or no genetic connectedness between groups, unless there is a belief that the model assumptions, specifically assumptions concerning genetic relationships between animals, completely describe the population in question, which is not the case in general. Estimation and reporting of genetic connectedness are important as there are, taking the example of the New Zealand sheep industry, hundreds of flocks evaluated over disparate environments and within each, there are many more contemporary groups. There is sharing of genetic material (rams) between groups and individual seedstock breeders and a centrally co-ordinated progeny test to increase genetic connectedness [1], but many flocks or groups of flocks likely lack genetic connectedness to allow comparison, therefore, in New Zealand, genetic connectedness is reported to seed stock (rams) breeders [2]. In the work of Foulley et al. [3,4] and Laloë et al. [5,6], genetic connectedness is regarded as a measure of predictability, where predictability is the random effect extension of estimability [7]. 
More recently, this was the approach to connectedness taken by Kerr et al. [8]. An estimable function [9,10] is defined in the context of a fixed effect model. In particular, a function is said to be estimable if vectors a and k exist such that E(a ′ y) = k ′ β . For random effects, all linear combinations can be predicted, regardless of their distribution [3], even if they are not estimable when treated as fixed effects. To get around this, connectedness was defined as the loss of information due to a lack of orthogonality [4] measured by using the Kullback-Leibler divergence. It was shown in Laloë [5] that for a linear mixed model, the expected information is a function of the ratio of the posterior and prior variance for u, alternatively known as the prediction error variance-covariance matrix (PEV) and the relationship matrix, respectively. They also showed that the expected information could be re-arranged to give a co-efficient of determination (CD) statistic [5,6]. To reduce the computational cost of this measure, simulation and the repeated use of iterative solvers were proposed [11]. Alternative measures of connectedness have been designed either to ease interpretability or minimise computational cost [12][13][14]. Usually these measures attempt to measure the level of genetic linkage between contemporary groups. They often also allow for the possibility that the model is incorrectly specified, such as omitting genetic groups. They include methods based on PEV, the variance-covariance matrix of estimated fixed effects Var(β), the covariance structure fitted for the random effects (the relationship matrix), or a combination of these. Those based on PEV include the ratio in determinants between full and reduced models [4], differences in PEV of contrasts [12] and correlations of random effect contrasts [15]. Methods based on the variance-covariance matrix of estimated fixed effects include variance of differences between estimated fixed effects (VED) [12], and correlations between estimated fixed effects referred to as connectedness rating (CR) [16]. The fixed effect usually considered is contemporary group (such as flock by year or herd by year). Methods based on the relationship matrix include genetic drift variance [17] and direct genetic links [18,19]. The focus here is on measures of connectedness that are functions of the PEV or the variance covariance matrix of estimated fixed effects, the links between them and the changes observed as the fitted effect structure is changed. The inclusion of genotype data was also considered to assess the impact changes in the relationship matrix have on the relationship between PEV and the variance-covariance matrix of estimated fixed effects and on the connectedness measure being considered. The first of the measures that we investigated was the PEV of contemporary group differences (PEVD ij ) [12]. This is calculated from Z, the incidence matrix indicating which animals have records, x ij , a vector of contrasts comparing two groups i and j and Var(û − u), the prediction error variance-covariance matrix of random effects. If the groups i and j being compared are contemporary groups such that x ′ ij u is the difference in the mean random effect between group i and j, PEVD can be simplified to a function of the prediction error variance-covariance matrix of random effects averaged by contemporary group. The coefficient of determination (CD(x ij )) [5,6] is also calculated from Z, x ij and Var(û − u) but it also includes Var(u). 
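To make the simplification of PEVD concrete, a minimal sketch is given below (in Python/NumPy rather than the R used later in the paper). It assumes the standard contrast-based form in which the PEVD between groups i and j is read directly off the prediction error matrix averaged by contemporary group; the 3 × 3 matrix shown is made up purely for demonstration.

```python
import numpy as np

# Group-averaged prediction error variance-covariance matrix (made-up values,
# 3 contemporary groups) for illustration only.
pev_mean = np.array([
    [0.40, 0.10, 0.02],
    [0.10, 0.35, 0.03],
    [0.02, 0.03, 0.50],
])

def pevd(pev_mean: np.ndarray, i: int, j: int) -> float:
    """Assumed simplification: PEVD_ij = PEVMean_ii + PEVMean_jj - 2 * PEVMean_ij."""
    return pev_mean[i, i] + pev_mean[j, j] - 2.0 * pev_mean[i, j]

print(pevd(pev_mean, 0, 1))  # 0.55
print(pevd(pev_mean, 0, 2))  # 0.86
```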
Flock correlation (r) [15] is calculated from the elements of the prediction error variance-covariance matrix of random effects averaged by contemporary group, as the averaged prediction error covariance between groups i and j divided by the square root of the product of the corresponding averaged prediction error variances. For the variance of differences in management unit effects (VED), Kennedy and Trus [12] used the variances and covariances of estimated contemporary group fixed effects, VED_ij = Var(β_i) + Var(β_j) − 2Cov(β_i, β_j), where β_i is the estimated effect for contemporary group i and β_j is the estimated effect for contemporary group j. The basis for using VED is that Var(β) is an approximation of Var(û − u) [12]. In this scenario, VED should estimate the PEV of contemporary group differences, PEVD. As a connectedness rating (CR), [16] used the variances and covariances of estimated contemporary group fixed effects, CR_ij = Cov(β_i, β_j)/√(Var(β_i) Var(β_j)), where β_i is the estimated effect for contemporary group i and β_j is the estimated effect for contemporary group j. Using the same argument as for VED, CR approximates the flock correlation. The aim of this paper is to give an exact measure of Var(û − u), averaged by contemporary group, using functions of the variance-covariance matrix of the estimated fixed effects. We also demonstrate that, under certain circumstances, the approximations provide poor estimates of this quantity and hence are poor predictors of genetic connectedness. For the remainder of this paper, Var(û − u) will be referred to as PEV and the corresponding matrix averaged by contemporary group as PEVMean. Data The data available, collected by New Zealand seed stock (ram) breeders and previously used in Holmes et al. [20], consisted of 40,837 animals with live-weight recorded at eight months of age. These animals were born between 2011 and 2013. Together with ancestors, 84,802 animals with pedigree information were obtained from the database of the New Zealand genetic evaluation system for sheep, Sheep Improvement Limited (SIL) [2]. A total of 269 animals were genotyped using the 50K Illumina SNP chip and of these, 21 had live-weight records. A total of 31,615 animals without genotype information were descendants of a genotyped animal. As these data were previously collected by commercial seed stock breeders, special animal ethics authorisation was not required. Models For modelling purposes, we considered the following variables as fixed effects. The contemporary group variable was the flock-sex-contemporary group combination, as is standard for growth traits in SIL. There were 202 flock-sex-contemporary groups in the dataset. The combination of birth and rearing rank (four levels) and age of dam (three levels) were treated as categorical variables. Date of birth was treated as a continuous covariate and defined as the difference (in days) between the animal's date of birth and the average date of birth in its flock and year combination. Weaning weight was fitted as a continuous covariate. Three models were fitted. Model 1 fitted flock-sex-contemporary group combination as the only fixed effect. Model 2 fitted flock-sex-contemporary group combination, date of birth, and birth rearing rank as fixed effects. Model 3 fitted all available fixed effects. The animal genetic effect was fitted in all models as a random effect. Two variations on the variance-covariance matrix of the random animal effect were considered. These were A and H. Matrix A used only the pedigree information available to construct the variance-covariance matrix. The method of Meuwissen and Luo [21] was used to construct the inverse of A required for the mixed-model equations. Matrix H used genotype and pedigree information to construct the variance-covariance matrix.
The genomic component of the variance-covariance matrix, G, was constructed using the first method of VanRaden [22] and the inverse of H was constructed using the method outlined in Aguilar et al. [23]. The variance components were estimated for Model 3 using A to model the covariance structure of the animal effect in ASReml [24]. Estimates of the variance components were σ²_g = 1.81 and σ²_e = 7.43, resulting in a heritability of 0.20. Standard errors for the variance components were 0.13 and 0.11, respectively. The variance components were then fixed at these values for all other models, regardless of whether the variance-covariance matrix of the random effect was A or H. Functions of the fixed effects considered Three functions of the variance-covariance matrix of estimated fixed effects were compared to the directly calculated PEVMean. Function 1 is the approximation Var(β_1), where β_1 is the vector of contemporary group fixed effects. The elements of this function were used to calculate CR [16] and VED [12]. Function 2 is the function of the variance-covariance matrix of estimated fixed effects that gives PEVMean for a model with only one fixed effect fitted. Function 3 is the function of the variance-covariance matrix of estimated fixed effects that gives PEVMean for a model with multiple fixed effects fitted. The derivations and notations for function 2 and function 3 are in the "Appendix". Correction factors used in function 2 and function 3 Both function 2 and function 3 are matrix additions to function 1, Var(β_1). Therefore, the extra calculations required can be regarded as correction factors to obtain PEVMean. In function 2, we subtracted σ²_e(X'_1X_1)^{-1} from function 1, where (X'_1X_1)^{-1} is a diagonal matrix with entry ii equal to 1/n_i, where n_i is the number of observations in contemporary group i. Therefore, σ²_e(X'_1X_1)^{-1} is the correction factor for the number of records. Due to the inverse relationship with contemporary group size, this correction is more pronounced for small contemporary groups. Function 3 is obtained by adding a further correction term, derived in the "Appendix", to function 2; this addition is therefore the correction that accounts for the inclusion of other fixed effects in the model. Calculation of connectedness measures and their comparison The fixed effect variance-covariance matrix Var(β) and PEV were extracted from the inverse of the mixed model equations. PEVMean was calculated from PEV. From this, the PEV of contemporary group differences (PEVD) and the flock correlation were calculated. From Var(β_1), VED and CR were calculated. All calculations used R [25]. The three functions described earlier were compared using correlations between the elements of PEVMean and the corresponding elements of the function in question. Diagonal elements were considered separately from off-diagonal elements. As mentioned in the "Background" section, CR is the analogue to the flock correlation and VED is the analogue to PEVD under the assumption that Var(β_1) approximates PEVMean. Therefore, correlations between the flock correlation and CR and between VED and PEVD were calculated to assess whether variance-of-difference or correlation functions of Var(β_1) gave a more accurate approximation to the corresponding functions of PEVMean than the individual elements of Var(β_1) did for the individual elements of PEVMean. Both Pearson and Spearman correlations were considered for all examples to assess whether a linear relationship or just the relative rank was maintained.
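The following sketch (in Python/NumPy rather than the R used for the actual analysis) illustrates how functions 2 and 3 and the four connectedness statistics could be assembled from the fixed-effect blocks of the inverse mixed model equations. The two cross-covariance terms in the function 3 correction are written here as an assumption, consistent with expressing PEVMean as Var(β_1 + Mβ_2) − σ²_e(X'_1X_1)^{-1} and with the reduced form quoted later in the text when Cov(β_2, β_1) = 0; the exact expression is derived in the paper's Appendix and should be checked there.

```python
import numpy as np

def pev_mean_from_fixed_effects(var_b1, var_b2, cov_b1_b2, X1, X2, sigma2_e):
    """Sketch of functions 2 and 3: approximate/exact PEVMean from fixed-effect blocks.

    var_b1:    Var(beta_1 hat), contemporary group block (p1 x p1)
    var_b2:    Var(beta_2 hat), other fixed effects block (p2 x p2), or None
    cov_b1_b2: Cov(beta_1 hat, beta_2 hat) block (p1 x p2), or None
    X1, X2:    fixed effect incidence matrices; sigma2_e: residual variance.
    """
    n_inv = np.linalg.inv(X1.T @ X1)            # diagonal, entries 1/n_i
    func2 = var_b1 - sigma2_e * n_inv           # correction for number of records
    if X2 is None:                              # single fixed effect: function 2 is exact
        return func2
    M = n_inv @ X1.T @ X2
    # Correction for the other fixed effects: the first term is the reduced form
    # quoted in the text; the cross-covariance terms are our assumption.
    correction = M @ var_b2 @ M.T + cov_b1_b2 @ M.T + M @ cov_b1_b2.T
    return func2 + correction

def ved(var_b1, i, j):
    return var_b1[i, i] + var_b1[j, j] - 2.0 * var_b1[i, j]

def cr(var_b1, i, j):
    return var_b1[i, j] / np.sqrt(var_b1[i, i] * var_b1[j, j])

def pevd(pev_mean, i, j):
    return pev_mean[i, i] + pev_mean[j, j] - 2.0 * pev_mean[i, j]

def flock_correlation(pev_mean, i, j):
    return pev_mean[i, j] / np.sqrt(pev_mean[i, i] * pev_mean[j, j])
```

VED and CR operate on Var(β_1) only, while PEVD and the flock correlation operate on PEVMean, which mirrors the comparisons reported in the Results.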
Model 1: Flock-sex-contemporary group interaction is the only fixed effect fitted Correlations between the elements of PEVMean and the elements of function 1 and function 2 are in Table 1.
Table 1 - Pearson and Spearman correlations of PEVMean with functions 1, 2 and 3 for three models and two relationship matrices (A and H). Measure 3 is not applicable for Model 1. Correlations marked with a * round to 1 as opposed to being exactly 1.
For function 1, correlations were high for diagonal elements (Pearson: 0.994 for A, 0.994 for H. Spearman: 0.932 for A, 0.928 for H), regardless of whether A or H was used as the variance-covariance matrix of the animal random effect. The off-diagonal elements of PEVMean and Var(β_1) were exactly equivalent. As expected from the derivations earlier, function 2 produced an exact one to one correspondence with PEVMean. A high correlation between the elements of PEVMean and the elements of function 1 and function 2 was observed because the correction to function 1 that is required to obtain PEVMean, when only one fixed effect is fitted, is the correction for the number of records. As mentioned earlier, the correction factor for the number of records was a diagonal matrix and the off-diagonal elements of Var(β) were unchanged when converting to PEVMean. The diagonal elements of PEVMean will be less than Var(β) (Fig. 1), in particular for contemporary groups with few records. This also means that CR consistently gave lower values than the flock correlation. The basis for using VED was that Var(β) approximated PEVMean. By the same logic, CR should also approximate the flock correlation. Correlations of CR with the flock correlation and of VED with PEVD are in Table 2. Pearson correlations of CR with the flock correlation were lower than the correlation between the elements of function 1 and PEVMean, which are in Table 1. Spearman correlations of CR with the flock correlation were higher. Correlations between VED and PEVD were high, but Pearson correlations were higher than Spearman correlations. This was as expected based on the high correlations for both the diagonals and off-diagonals. However, the values of VED were in a higher range than PEVD due to the inflation of diagonal elements of Var(β) compared to PEVMean. The inflation of VED compared to PEVD, due to not applying the correction factor for the number of records, was most pronounced for small contemporary groups.
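To make this size dependence concrete, a small numerical illustration is given below, using the estimated residual variance σ²_e = 7.43 from the Methods and made-up contemporary group sizes. In Model 1, where function 2 is exact and only the diagonals are corrected, the gap between VED and PEVD for a pair of groups is σ²_e(1/n_i + 1/n_j), which shrinks rapidly as group size grows.

```python
# Made-up group sizes; sigma2_e taken from the variance component estimates above.
sigma2_e = 7.43
for n_i, n_j in [(5, 5), (50, 50), (500, 500)]:
    record_correction = sigma2_e * (1.0 / n_i + 1.0 / n_j)   # VED_ij - PEVD_ij in Model 1
    print(n_i, n_j, round(record_correction, 3))
# (5, 5)     -> 2.972  (large inflation of VED relative to PEVD)
# (50, 50)   -> 0.297
# (500, 500) -> 0.030
```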
This indicates that the magnitude of the correction factor to account for the other fixed effects was negligible relative to the magnitude of function 2. The correction factor lowered the off-diagonal elements of Var(β 1 ) uniformly. For both diagonal and off-diagonal elements, the relative impact of including the correction for other fixed effects in the model was therefore higher for elements with a lower absolute value. Inclusion of other fixed effects lowered the correlation between CR and the flock correlation and between VED and PEVD compared to model 1. CR usually gave lower values than the flock correlation. Exceptions were due to both the diagonal and off-diagonal elements of Var(β 1 ) that overestimated the corresponding element of PEVMean. Correlations between VED and PEVD were high; with Pearson correlations higher than Spearman correlations. As in Model 1, VED had a higher range than PEVD. Model 3: Contemporary group, age of dam, date of birth, birth rearing rank and flock × sex interaction fitted Correlations between the elements of PEVMean and function 1 were high for diagonal elements but lower for off-diagonal elements. Inclusion of additional fixed effects means that, as in Model 2, elements of function 2 did not give an exact correspondence to the elements of PEVMean. Correlations of the diagonal elements of function 2 with the diagonal elements of PEVMean increased compared to function 1, while the off-diagonal elements were unchanged because the correction factor for the number of records applies to diagonals only. As expected from the derivations obtained above, function 3 produced an exact one to one correspondence with PEVMean. The correction factor to account for the other fixed effects in the model was typically about 35 times larger than in Model 2. As a result, diagonal elements of function 2 were increased compared to diagonal elements of PEVMean (Fig. 3). For the off-diagonal elements, the correction factor accounting for other fixed effects in the model was uniform when the off-diagonal element of PEVMean moved away from zero. There was more variation in the correction factor when the off-diagonal element of PEVMean was near zero. Inflation seen in off-diagonal elements of function 1 compared to offdiagonal elements of PEVMean was due primarily to not correcting for other fixed effects rather than not correcting for the number of records. CR generally gave larger estimates than the flock correlation and over-estimation was most pronounced when off-diagonal elements of PEVMean and hence the flock correlation were near zero. Inclusion of weaning weight and age of dam in the model decreased the correlations of CR with the flock correlation compared to Models 1 and 2 ( Table 2). In particular, flock correlations that approach 0 in this model may have a high CR. The reasons for this will be elaborated in the "Discussion" section. The largest difference between CR and the flock correlation was between contemporary groups 98 and 107 when A was used (flock correlation = 0.022, CR = 0.818), and between contemporary groups 147 and 152 when H was used (flock correlation = 0.056, CR = 0.803). The correlation between VED and PEVD remained high in Model 3. Impact of using H compared to A to model the variancecovariance of the animal random effect The use of H instead of A did not significantly change the Pearson correlation of PEVMean with the approximations functions 1 and 2, except for the off-diagonals in Model 3 (Table 1). 
Similarly, it did not result in large differences in the Pearson correlations between CR and the flock correlation or between VED and PEVD, except between CR and the flock correlation in Model 3 ( Table 2). The use of H increased the Spearman correlations for off-diagonal elements of PEVMean with functions 1 and 2 (Table 1) and of CR with the flock correlation ( Table 2) for Models 2 and 3. Additional file 1: Figure S1 shows the impact of using H as opposed to A, which was to increase PEVMean, particularly when the value of PEVMean using A was near zero. This was particularly obvious for the off-diagonals. The result was an increase in the flock correlation and CR compared to the equivalent model in which A was fitted. Patterns in the correction factor accounting for the inclusion of other fixed effects in the model The relationship between the correction factor and the PEVMean for the two models (Models 2 and 3), for which the correction factor was relevant is in Fig. 4. The correction factor was similar for both the diagonal and off-diagonal elements. There was no relationship between the value of the correction factor and the value of PEVMean, except for an increase in variability in the correction factor when the element of PEVMean was near zero. The correction factor was approximately 35 times larger in Model 3 than in Model 2, as indicated by traces of the correction factor. The low degree of variation in the correction factor for other fixed effects suggested that the dataset that we used was approximately balanced across contemporary groups. Connectedness rating The flock correlation was compared to CR (Fig. 5). As mentioned, CR underestimated the flock correlation in Model 1 for all pairs of contemporary groups and for most pairs in Model 2. Conversely, CR overestimated the flock correlation for most pairs in Model 3. In Model 2 and especially in Model 3, there was a collection of contemporary group pairs for which the flock correlation was near zero (completely disconnected), while the corresponding CR estimate was much higher than zero. This was due to the correction factor for the other fitted fixed effects, which was similar for both the diagonal and offdiagonal elements, and had the largest impact on very small covariances and hence correlations. The divergence between CR and the flock correlation when the flock correlation was near zero was also a function of contemporary group size. Since the variances were inversely dependent on the number of records in the contemporary group, the most pronounced differences between CR and flock correlation occurred between contemporary groups that were not linked and had a large number of records. Additional file 2: Figure S2 shows the relationship between the harmonic mean 2 Variance of estimated differences of management units (VED) Unlike CR compared to the flock correlation, VED showed a stronger relationship with PEVD (Fig. 5). However, for all three models, there were certain pairs of contemporary groups that had similar VED, but substantially different PEVD. This variation increased PEVD and was probably due to VED not correcting for the number of records in each contemporary group because VED, PEVD and the correction factor for the number of records were all inversely dependent on the number of records in the contemporary groups in question. 
Table 3 shows that VED corrected for the number of records was equivalent to PEVD in Model 1, as expected, while the corrected VED showed a near one-to-one relationship with PEVD for both Models 2 and 3. The almost exact one-to-one relationship between corrected VED and PEVD for Models 2 and 3 was due to the correction factor for the other fixed effects being fairly uniform and thus cancelling out in the calculation of variances of differences, of which both VED and PEVD are examples.

Sensitivity to the presence of other fixed effects in the model fitted

In the example used by Kennedy and Trus [12], a correlation of 0.995 was found between Var(β̂_1) and the mean PEV. However, they only considered a model where contemporary group was the only fixed effect. For the three models that we fitted, the correlation between the variance-covariance matrix of estimated contemporary group fixed effects and the prediction error variance-covariance matrix of contemporary group averages was sensitive to the inclusion of other fixed effects in the model. This sensitivity depended on the correction factor for the other fixed effects included in the model.

Situations where it is unnecessary to use the correction factor for other fixed effects included in the model

If we assume that the incidence matrices for the contemporary group effect X_1 and the other fixed effects X_2 are orthogonal, then X_1'X_2 = 0. In this scenario, the correction factor for the other fixed effects included in the model becomes zero and the calculation of PEVMean from the variance-covariance matrix of estimated fixed effects can be done as if contemporary group were the only fixed effect. An individual element ij of the matrix X_1'X_2 is the number of observations of effect j in contemporary group level i if the other effect is a factor, and is the sum of the covariate values for effect j in contemporary group level i if effect j is continuous. In practice, X_1'X_2 = 0 would be limited to the situation where the other fixed effects considered in the model are continuous, centred on zero and balanced across all levels of the contemporary group effect, i.e. the mean of the other variables is zero for all contemporary group levels.

Situations where parts of the correction factor for the other fixed effects in the model can be ignored

If all the columns of the other fixed effects present in the model lie in the null space of X_1'Var(y)^{-1}, where X_1 is the incidence matrix of contemporary group effects and Var(y) = Z Var(u) Z' + σ^2_e I is the variance-covariance matrix of the observations, then Var(β̂_2, β̂_1) = 0 and the correction factor for the other fitted fixed effects reduces to (X_1'X_1)^{-1} X_1'X_2 Var(β̂_2) X_2'X_1 (X_1'X_1)^{-1}. The variance-covariance matrix of estimated contemporary group effects Var(β̂_1) is then unchanged when moving from the reduced model (in which only contemporary group is fitted) to a full model in which the other fixed effects are also fitted. To measure how close the model considered could come to such a state, the covariance ratio [26] was considered. The covariance ratio is the ratio of determinants of Var(β̂) between a full and a reduced model; it is therefore similar to the γ statistic proposed by Foulley et al. [3]. In our particular case, we considered the covariance ratio of contemporary group effects between a full and a reduced model. If the correction factor reduced to (X_1'X_1)^{-1} X_1'X_2 Var(β̂_2) X_2'X_1 (X_1'X_1)^{-1}, the covariance ratio was equal to 1.
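Both conditions above are easy to check numerically. The R sketch below is my own illustration with hypothetical object names: it first verifies the orthogonality condition X_1'X_2 = 0 for a covariate centred within every contemporary group, and then computes the covariance ratio as the ratio of determinants of Var(β̂_1) between a full and a reduced model.

# Orthogonality check: a continuous covariate centred within each contemporary group
cg <- factor(rep(1:3, each = 4))                     # toy contemporary group labels
X1 <- model.matrix(~ cg - 1)                         # incidence matrix of contemporary groups
x  <- rnorm(length(cg))
X2 <- matrix(ave(x, cg, FUN = function(v) v - mean(v)), ncol = 1)
crossprod(X1, X2)                                    # within-group sums of the covariate: all ~0,
                                                     # so the correction factor vanishes

# Covariance ratio: ratio of determinants of Var(beta1_hat) between the full model
# (contemporary group plus other fixed effects) and the reduced model (contemporary
# group only); var_b1_full and var_b1_reduced are assumed to be those p1 x p1 blocks
covariance_ratio <- function(var_b1_full, var_b1_reduced) {
  ld_full    <- determinant(var_b1_full,    logarithm = TRUE)$modulus
  ld_reduced <- determinant(var_b1_reduced, logarithm = TRUE)$modulus
  exp(as.numeric(ld_full - ld_reduced))              # values near 1: Var(beta1_hat) barely affected
}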
A covariance ratio that diverges from 1 indicates that Var(β̂_1) is influenced by the addition of more fixed effects. The covariance ratios of the three models fitted are in Table 4. The covariance ratio for Model 1 compared to Model 2 (0.406 when A was used and 0.452 when H was used) was relatively close to one, while for Model 1 compared to Model 3 it was not (0.005 when A was used, 0.006 when H was used).

Correction factor for other fixed effects in the model when those effects are balanced across contemporary groups

When all other effects in the model are balanced across contemporary groups, defined as having equal means (if continuous) or occurring for the same proportion of observations (if factors) for all contemporary groups, the elements in each row of the matrix (X_1'X_1)^{-1} X_1'X_2 are the same. Therefore, (X_1'X_1)^{-1} X_1'X_2 = 1r', where 1 and r are column vectors of length p_1 and p_2, respectively, and p_1, p_2 are the number of contemporary group and non-contemporary group effect levels in the model. As a consequence,

(X_1'X_1)^{-1} X_1'X_2 Var(β̂_2) X_2'X_1 (X_1'X_1)^{-1} = 1 r' Var(β̂_2) r 1' = c 11',

where c is the constant r'Var(β̂_2)r and 11' is a p_1 × p_1 matrix of ones. In this situation, the relationship between VED and PEVD simplifies to the same result as when contemporary group is the only fixed effect fitted.

Table 3 Simple linear regression between VED corrected for the number of records and PEVD for three models and two relationship matrices (A and H). Numbers marked with * only round to, and are not exactly, 0 or 1.

Sensitivity to the mean of continuous covariates fitted in the model

To obtain the relationship between PEVMean and the variance-covariance matrix of estimated contemporary group fixed effects, the intercept must be absorbed into the contemporary group effects. The variance of the intercept depends on the mean of the variables included in the model [9]. By absorbing the intercept into the contemporary groups, Var(β̂_1) becomes dependent on the means of the other variables included in the model. Since PEVMean itself is invariant to rescaling of continuous fixed effects, the impact of the correction factor for the other fixed effects in the model is itself influenced by the means of the other effects. This can be illustrated by fitting a fourth model. Model 4 is equivalent to Model 3 except that the weaning weight covariate is standardised to have a mean of 0 and a standard deviation of 1. The zero mean for weaning weight minimises the influence of the weaning weight covariate on Var(β̂_1). While PEVMean was unchanged when moving from Model 3 to Model 4, Additional file 3: Figure S3 shows that the correction factor for the other fixed effects in the model was reduced. It also reduced, but did not eliminate, the overestimation of the flock correlation when using CR, particularly when the flock correlation was near zero.

Link to postulated mixed model r^2 and correction factor for the inclusion of other fixed effects

To measure the impact of including fixed effects other than contemporary group in the model, we considered the coefficient of determination (r^2). Unlike the general linear model, linear mixed models do not have a commonly agreed r^2 statistic. We considered two methods to measure r^2 for the fixed effect component of the model. The first was the marginal r^2 [27], calculated as r^2_m = Var(ŷ) / (Var(ŷ) + σ^2_u + σ^2_e), where ŷ are the predicted values for the observations without the random effects, and σ^2_u and σ^2_e are the animal and residual variances. The second method was r^2_β [28].
This is calculated as a function of the Wald F statistic, β̂' V(β̂)^{-1} β̂, with ν = n − p degrees of freedom, where n was the number of observations and p was the number of fixed effects to be estimated. While we did find the r^2 statistics useful for indicating improvement in model fit, we did not find any relationship with the correction factor. Therefore, r^2 statistics like those considered should not be used as a diagnostic of the impact that the inclusion of additional fixed effects in the model has on the correction factor.

A diagnostic to assess the need to include the correction factor

The value of the correction factor for calculating PEVMean from Var(β̂) can be assessed as the trace of the matrix of the correction factor for the other fixed effects included in the model. Specifically, the trace was considered as a diagnostic to determine whether it is appropriate to use just Var(β̂_1) − σ^2_e (X_1'X_1)^{-1} as an approximation to PEVMean. The trace of the correction factor can be written as 2Tr(Var(β̂_2, β̂_1) (X_1'X_1)^{-1} X_1'X_2) + Tr(Var(β̂_2) X_2'X_1 (X_1'X_1)^{-1} (X_1'X_1)^{-1} X_1'X_2). This formulation was less computationally demanding when the number of contemporary group fixed effect levels was greater than the number of other fixed effect levels. Traces that were further from zero indicated that the correction factor had a greater impact on the calculation of PEVMean. Table 5 provides the traces of the correction factor for the inclusion of other fixed effects in the model. For Model 3, the trace is approximately 35 times greater than for Model 2, which suggests that ignoring the other fixed effects in Model 3 results in a poor approximation of PEVMean.

Utility of the method

Solving blocks of the mixed model equations

The exact PEVMean given by function 3 requires the calculation of the variance-covariance matrix for all estimated fixed effects in the model. This can be done directly by calculating (X' Var(y)^{-1} X)^{-1}, where Var(y) = Z Var(u) Z' + σ^2_e I. This is computationally demanding since the direct inversion of Var(y) requires n^2(n + 1)/2 operations, where n is the number of observations. An alternative method is to find the block of the inverse of the mixed model equations corresponding to the fixed effects. Mathur et al. [16] wrote a program that calculated these blocks for CR. Many software programs have built-in functions that can be used to solve equations of the form AX = B, where A and B are known matrices; examples include the solve() function in R [25]. Using this method to find Var(β̂), A would be the coefficient matrix of the mixed model equations and B would be the first p columns of the identity matrix, where p is the number of fixed effects to be estimated in the model. The elements of PEVMean can then be calculated from Var(β̂). However, this method would also calculate Var(β̂, û − u) in addition to Var(β̂).

Calculating PEVMean from Var(β̂)

After Var(β̂) is obtained, the number of operations required to obtain the components that go into function 3 is as follows. To avoid re-calculation of the same matrix, we assume that these steps are done in the order outlined in Table 6. Since X'X is required to form the mixed model equations, X_1'X_2 is assumed to have no cost. In the number of operations, p_1 represents the number of contemporary group fixed effects and p_2 represents the number of other fixed effects estimated. In the models we considered, p_1 >> p_2. This means that the number of operations required to obtain PEVMean after Var(β̂) is obtained is of order p_1^2.
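The two computational steps just described, extracting the fixed-effect block of the inverse mixed model equations with a linear solve and then screening the size of the correction with the trace diagnostic, can be sketched in R as follows. This is an illustration under assumed names (C for the coefficient matrix of the mixed model equations built with the residual variance factored out, s2e for σ^2_e, X1 and X2 for the incidence matrices, p1 and p2 for their numbers of levels), not the authors' implementation.

p  <- p1 + p2                                    # total number of fixed effect levels
B  <- diag(nrow(C))[, 1:p, drop = FALSE]         # first p columns of the identity matrix
Cinv_fixed <- solve(C, B)                        # first p columns of the inverse of the MME
var_beta   <- s2e * Cinv_fixed[1:p, 1:p]         # Var(beta_hat); the remaining rows correspond
                                                 # to Var(beta_hat, u_hat - u), which is computed as a by-product

var_b1  <- var_beta[1:p1, 1:p1]
var_b2  <- var_beta[(p1 + 1):p, (p1 + 1):p, drop = FALSE]
var_b21 <- var_beta[(p1 + 1):p, 1:p1, drop = FALSE]          # Var(beta2_hat, beta1_hat)

W <- diag(1 / colSums(X1)) %*% crossprod(X1, X2)             # (X1'X1)^{-1} X1'X2; X1'X1 is diagonal

# Trace diagnostic: values far from zero mean that Var(beta1_hat) - s2e * (X1'X1)^{-1}
# alone is a poor approximation of PEVMean
trace_cf <- 2 * sum(diag(var_b21 %*% W)) + sum(diag(var_b2 %*% t(W) %*% W))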
Conclusions

For single-trait models in which only one random effect is fitted, a function of the variance-covariance matrix of all fixed effects fitted can be used to calculate the prediction error variance-covariance matrix averaged by contemporary group. Depending on the other fixed effects included, the use of just the elements of the variance-covariance matrix of the estimated contemporary group fixed effects can give suboptimal estimates of connectedness. This is particularly the case when correlation-based measures, such as CR, are used. These inaccuracies can be reduced by centring any continuous variables included in the model to have a mean of zero. When difference-based measures such as PEVD are used, the need to consider the other fitted fixed effects is eliminated when those effects are balanced across the contemporary group effect levels. Nevertheless, there was always a notable improvement in the approximation of PEVMean by subtracting σ^2_e (X_1'X_1)^{-1} from Var(β̂_1). The proposed formula for calculating PEVMean from Var(β̂) can also be used to calculate the flock correlation, the prediction error variance of differences, and the PEV component of the coefficient of determination for contrasts between contemporary groups, by calculating only the block of the inverse of the mixed model equations corresponding to the fixed effects rather than the full prediction error variance-covariance matrix of the random effects. By being able to calculate PEVMean exactly from functions of Var(β̂), a more accurate assessment of connectedness can be obtained in livestock genetic evaluation compared to traditional fixed-effect based measures such as connectedness rating and VED, without the computational cost of PEV-based measures. A future goal of research is to provide tractable ways to calculate this for industry evaluations, which may include millions of animals. In addition, tens of thousands of these animals will typically have genotype data, and in the future this number will increase and hence will require a re-evaluation of the connectedness measures used in the New Zealand sheep industry. Better measures of genetic connectedness between groups will allow seed stock breeders to make better decisions on the appropriateness of comparing animals in evaluations, which will, in an industry such as the New Zealand sheep industry, lead to increased genetic gain.

Table 6 Operations required to calculate the correction factor
Step | Component | Number of operations
3 | Multiplying (X_1'X_1)^{-1} on both sides of step 2 | 2p_1^2
4 | X_1'X_2 Var(β̂_2, β̂_1) | p_1^2 p_2
5 | Multiplying (X_1'X_1)^{-1} on the left side of step 4 | p_1^2
6 | Addition to obtain the correction factor for other fixed effects |
Total calculations: 2p_1 + 6p_1^2 + p_1 p_2 (2p_1 + p_2)

The exact relationship between PEV and the variance of estimated fixed effects Var(β̂) was found by taking the variance on both sides of Eq. (1) and applying the results for the variances from the mixed model equations [10].

Formula for function 2

If contemporary group is the only fixed effect included, X'X is a diagonal matrix with entry (X'X)_ii corresponding to the number of observations in contemporary group i. The entries of X'Z form an incidence matrix indicating which contemporary group a particular animal belongs to. In this setting, the matrix (X'X)^{-1} X'Z is the linear transformation from u to ū, where ū is the vector of breeding values averaged by contemporary group. This simplifies Eq. 4 as follows:

(X'X)^{-1} Var(X'Z(û − u)) (X'X)^{-1} = (X'X)^{-1} (Var(X'X β̂) − σ^2_e X'X) (X'X)^{-1}
Var((X'X)^{-1} X'Z(û − u)) = Var(β̂) − σ^2_e (X'X)^{-1}

so that PEVMean = Var(β̂) − σ^2_e (X'X)^{-1} when contemporary group is the only fixed effect.

Formula for function 3

If contemporary group is not the only fixed effect included, the incidence matrix X is split into two parts: X_1 is the incidence matrix for contemporary groups and X_2 is the incidence matrix for the fixed effects other than contemporary group. In this setting, the matrix (X_1'X_1)^{-1} X_1'Z is the linear transformation from u to ū with respect to contemporary groups. To derive function 3, Eq. 4 was re-written partitioning X as described and similarly partitioning β̂ into β̂_1 and β̂_2, which are the vectors of estimated contemporary group and non-contemporary group fixed effects, respectively.
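As a concrete check of these derivations, the short R script below builds a small single-trait animal model, computes the exact PEVMean from the random-effect PEV block of the inverse mixed model equations, and compares it with function 3 computed from the fixed-effect block. Everything here (the toy data, the sign convention used for the correction term, and all object names) is my own illustration rather than the authors' code.

set.seed(1)
n  <- 30
cg <- factor(sample(1:3, n, replace = TRUE))        # contemporary groups
X1 <- model.matrix(~ cg - 1)
X2 <- matrix(rnorm(n), ncol = 1)                    # one additional continuous fixed effect
X  <- cbind(X1, X2)
Z  <- diag(n)                                       # one record per animal
A  <- diag(n)                                       # unrelated animals, for simplicity
s2u <- 0.4; s2e <- 0.6; lambda <- s2e / s2u

# Mixed model equations with the residual variance factored out
C <- rbind(cbind(crossprod(X),    crossprod(X, Z)),
           cbind(crossprod(Z, X), crossprod(Z) + solve(A) * lambda))
Cinv <- solve(C) * s2e
p1 <- ncol(X1); p2 <- ncol(X2); p <- p1 + p2

var_beta <- Cinv[1:p, 1:p]                          # Var(beta_hat)
pev_u    <- Cinv[(p + 1):(p + n), (p + 1):(p + n)]  # Var(u_hat - u)

K <- diag(1 / colSums(X1)) %*% t(X1)                # averages breeding values by contemporary group
pev_mean_exact <- K %*% pev_u %*% t(K)              # PEVMean from the random-effect block

# Function 3 from the fixed-effect block
W <- diag(1 / colSums(X1)) %*% crossprod(X1, X2)    # (X1'X1)^{-1} X1'X2
var_b1  <- var_beta[1:p1, 1:p1]
var_b2  <- var_beta[(p1 + 1):p, (p1 + 1):p, drop = FALSE]
var_b21 <- var_beta[(p1 + 1):p, 1:p1, drop = FALSE]
fun3 <- var_b1 - s2e * diag(1 / colSums(X1)) +
        W %*% var_b21 + t(var_b21) %*% t(W) + W %*% var_b2 %*% t(W)

max(abs(pev_mean_exact - fun3))                     # expected to be ~0 under this construction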
v3-fos-license
2020-12-24T09:12:49.868Z
2020-12-08T00:00:00.000
234582056
{ "extfieldsofstudy": [ "Sociology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=106018", "pdf_hash": "4b46a8f3e5bd2971854804da666f06ccec584845", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:209", "s2fieldsofstudy": [ "Education", "Sociology" ], "sha1": "2b5d8875412c34f9dd287b9a38c7456a80315daa", "year": 2020 }
pes2o/s2orc
A Study on Cross-Cultural Adaptation of International Students in China from Confucius Institute at the University of Khartoum Since China and Sudan have different cultural backgrounds, international students from Sudan certainly will have trouble in learning. How can they improve their cross-cultural adaptation? By taking the international students in China from Confucius Institute at the University of Khartoum (CIUOFK), and by means of questionnaire survey and interviews, this paper finds that difficulties they have to face in cross-cultural adaptation include living environment, the weather, dietary habits and means of transportation. And then this paper has an in-depth exploration of reasons behind these difficulties from four aspects, i.e. living environment, academic environment, interpersonal communication and mental adaptation. In the end, the author suggests that the students should be eased at both universities and individual levels, and also puts forward some pertinent suggestions in terms of those common difficulties for the students. Introduction In recent years, the number of international students in China is rising steadily, and the number of Sudanese students has also increased year by year. According to the data from CIUOFK, 137 students from CIUOFK established by Northwest Normal University (NWNU) have been admitted by major colleges and universities in China and won the Confucius Institutes Scholarships, of which 16 stu-dents are accepted by Northwest Normal University. This also allows CIUOFK of NWNU to once again become the Confucius Institute with the largest number of Confucius Institute Scholarships winners in the world after 2017. In recent years, CIUOFK has constantly strengthened its education and teaching capacity building and strived to its teaching methods and means so that its teaching quality and level has been steadily improved and it has also cultivated a group of outstanding students to study and exchange in China. In 1960, Oberg raised a concept of "cultural shock", meaning the cultural inadaptation a person encountered from one environment to another environment. For Sudanese students studying in China, differences in customs, values and language culture will make them feel uncomfortable, and serious situations even may cause such cultural conflicts as interpersonal conflict and inadaptation to learning environment. In this Paper, a survey is made for Sudanese students from many colleges and universities in China to discuss their social and cultural adaptation, mental adaptation and any other factors and research effects of cultural conflict on their study in China including study and lives, further giving advice to help them better adapt to China's culture. The researches in the filed of cross-cultural adaptation are fruitful up to now, but most focus on the research of Western scholars. Previous studies have mostly concentrated upon the adaptation problems of students from developing countries to study in developed countries, but there is a lack of cross-cultural adaptation studies for international students in China. After 2000, as the number of foreign students studying in China has been increasing constantly, researches in this field are also increasing accordingly. Meanwhile, still few researches on African countries have been conducted. 
Therefore, this Paper takes Sudanese students as its research object, probes into their problems in living environment, mental state and study and also gives specific advice, hoping to provide a reference for Sudanese students to better adapt to Chinese culture. Research Background The concept of cultural adaptation was first put forward in 1936 by American anthropologist Robert Redfield and others. Since the beginning of the last century, the phenomenon of cross-cultural adaptation has attracted attention in western countries, especially on the issue of immigration. Then the attention was turned to international students' cultural adaptation, and abundant research results were accumulated in this aspect. John. Schumann's cultural adaptation theory was proposed to explore the rule of foreign language learning from two aspects including social environment factor and learners' personal psychological factor and overcome disadvantages affecting foreign language learners during their study. Berry discussed in detail the connotation of cultural adaptation, modes of cultural adaptation theories and enlightenment to national education. In 1960, Oberg raised the concept of culture shock, that is, inadaptation to environment in a new culture. Representative studies on international students' emotional adaptation changes from time dimension are listed as Table 1: Table 1. Major representative scholars of cross-cultural adaptation and their views. Scholars Views Lysgaard U-shaped curve hypothesis: it pointed out that the cross-cultural mental state of international students presents a U-shaped curve. That is to say, the mental state of students who have stayed in the US for 6 -18 months is worse than that of those who have stayed in the US for less than 6 months or more than 18 months. Oberg, 1960 Four-stage theory: stages of honeymoon, crisis, recovery and adaptation. Ward, 1998Ward, 1996 From four time points measurement of Japanese students in New Zealand, it was observed that their depression disorder is worst when they first come to New Zealand. While in the other three time periods, the changes in depression are not apparent. The result cannot prove the U-shaped curve hypothesis. From time point measurement of Malaysian and Singaporean students in New Zealand, it was observed that students' mental changes resemble an inverted U-shape, that is to say, those who have been in New Zealand for a month or over a year have the strongest mental shift. Gullahorn, 1963 W-curve: honeymoon period; struggle period; dispute period; adaptation period; re-dispute period-still uncertain of some complicated problems; early period before returning-looking forward to returning with joy; shock after returning-a feeling of estrangement when contacting with people and in daily life. Related researches in China mainly include: Some scholars have discussed the cross-cultural adaptation of international students from different regions and countries (Fan & Hu, 2010;Xing, 2013;Anokin, 2016;El-Said Abbas Ibrahim, 2017;Lin, 2018). Some scholars have also conducted a survey on international students from a certain university. For instance, Xu and Hu (2017) have analyzed and discussed the motivation of studying abroad, eating habits and social interact by adopting the questionnaire method and taking the undergraduate international students from Nanchang Hangkong University as their respondents. 
They believe that international students should have a better understanding of Chinese culture and customs, and also emphasize that schools should organize activities that enable international students to understand and adapt to Chinese culture. Pu, Zhang and Peng (2013) have investigated the cross-cultural adaptability of international students from Northwest University, finding that the language barriers cause problems of interacting with other people. Ji (2011) has studied 14 Ukrainian students in China, expounded cultural differences between China and Ukraine from the perspectives of face-saving value, sense of time and means of expression and further proposed teaching strategies. They are of the opinion that great importance should be attached to cultivate multiple culture consciousness and take the initiative to learn about Chinese culture. Some scholars also put forward detailed suggestions. Zhang and Chen (2013) have proposed countermeasures from two aspects. First is from the level of schools. Colleges and universities should encourage students to participate in club activities and arrange training courses to let them understand Chinese culture. Second is from the international students level. They should make clear their goal of studying in China and respect Chinese law. Wang (2010) has summarized 35 ways of acculturation commonly used by international students in China. Researches of Xue (2014) indicate that, during the crisis stage and initial adaptation stage of U-curve and W-curve, cross-cultural adaptation guidance and social support are quite important for international students. Shi (2014) deems that cross-cultural adaptation should be positively associated with participation in local social and cultural life and acquisition of social support. Overall Design of Questionnaire Questionnaires the author designed are oriented to the international students in China from CIO. And a questionnaire is divided into four parts as below: living environment adaptation, academic environment adaptation, interpersonal communication and mental adaptation. This survey has collected 74 effective questionnaires. Gender The survey respondents include 50 males and 24 females, of which the number of males is the double of that of females. This is because Sudanese society is conservative and conventional, most girls get married at the age of 23 or so, and most parents are unwilling to allow their daughters to study abroad alone. Age Among the respondents under survey, there are 2 students under 18, 36 students between 18 and 25, 35 students between 26 -35 and only 1 student above 35, of which the respondents between 18 and 35 account for the largest number. This means most students come to China as soon as they graduate from high schools or universities. Student Type From Table 2, we can see that the number of undergraduates and postgraduates is the largest, as these two kinds of students have more opportunities to gain scholarships. The other types of students are mostly short-term advanced students who study various majors in China's colleges and universities. Among Duration of study in China For the students under survey, the time they've spent in China also differs. 22 students have been in China for less than 2 years, 25 students for 3 -5 years, 21 students for 5 -8 years, and 6 students for over 8 years. Among them, undergraduates account for the largest number, and doctors or post doctors account for the least number. 
Proficiency in Chinese From Table 3, we can see that about 65% students have passed the exam of HSK Level V or above, which show that Sudanese international students have proficiency in Chinese. Although they've learnt basic Chinese in Sudan, most of their teachers are Chinese people, and they also have a hanger for knowledge. Hence, they have a solid foundation of Chinese. Gudykunst & Kim (2007) believes that the living environment adaptation problem comes first in the cultural adaptation. Such climate changes caused by different geographical locations and differences in food culture, schools' accommodation management systems and public service systems have affected the cross-cultural adaptability of international students in China to some different extent. Learning Environment Adaptation Sudanese students attach much importance to their study and school management and they expect to study in schools which meet their requirements like many other foreign students. Many foreign students complain that their specialized courses are boring but they face high pressure and are assigned much homework at the same time. All these factors will have an influence on their academic performance. Sudanese students are more dependent on teachers, and only through the guidance of teachers can they make progress. However, they find that the situation in China is different. Chinese teachers only give students general guidance to students, and students' study mostly rely on their own efforts. This may also affect Sudanese students' attitude toward study. Interpersonal Communication Adaptation During international students' life of studying in abroad, it is inevitable to have cross-cultural communication with people of different cultural backgrounds. They are faced with a new cultural background completely different from their own country after they come to China. They will certainly run into cross-cultural communication problems as well as cross-cultural communication barriers. Sudanese students in China study in different cities. Most of them are in the in the north, especially in Gansu Province with the largest number that is mainly concentrated in NWNU and Lanzhou University. Instead, few Sudanese students choose to study in southern China. Each part of China has its own dialects such as Hakka language, Cantonese and Southern Fujian Dialect. These dialects may exert negative influences on Sudanese students' Chinese learning and make their Chinese slipping. Mental Adaptation Some foreign students will feel uneasy, anxious and even become wakeful after they first come to a new country. The mental adaptability of international students in China has a lot to do with the length of their stay in China: those who have been in China for a short time have relatively more depressive symptoms. The international students in China mainly have such psychological barriers as loneliness, self-imposed isolation, impatience, irritability, anxiety and depression, and they are prone to adopt negative reactions including fantasy and evasion in the face of stress events. Many Sudanese students in China have mental problems such as anxiety and restlessness during their early days in China, but they become accustomed to their lives in China as they stay here for a long time. Therefore, those Sudanese students who have lived in China for over 4 years have totally adapted to lives in China. Living Habits Sudanese students have different opinions on the life in China. 
Some are fairly satisfied with their life in China, some are not quite satisfied and some others are not still not sure. Sudanese students who have just come to China are not quite comfortable with China's living habits, and the influencing factors main include local climate, eating habits and social interaction, etc. From Table 4, over half of Sudanese students are not pleased with new environment, and feel hard to adapt to the life of cities where they are because of language barrier after they come to China. In an unfamiliar environment, they often feel lonely and lost because they have no relatives and friends around, so they will also feel a sense of anxiety. Dietary Habits Food has always been the biggest difficulty that international students have faced. Most Sudanese students have a belief in Islam, so they only eat halal food. They call speeches, behaviors and food conforming to Islam law as "HALAL" and those that don't conform to as "HARAM". Muslim food culture has two important features: legitimacy and pleasance. Legitimacy here has two meanings. The first is that food must be permitted by the Koran and are made as stipulated by the Islam law. The second is that food should also conform to social public law. Pleasance here means that each step for food making from procurement of raw materials to serving to the table should be clean, healthy, pollution free and nonhazardous. As a result, the biggest problem that Sudanese students in China face is to find halal food. The staple food of Sudan is Arabic flatbread and bread. They like to eat hand pilaf and can only eat halal meat. They are used to eating with their right hand and are prohibited drinking all kinds of drinks. These eating habits are strikingly different from ones in China. Table 5 and Table 6 show their viewpoints on Chinese food. Due to the huge difference in eating habits, many Sudanese students cannot get used to Chinese food and often have stomachaches. Table 6 illustrates Sudanese students like eating halal food and they generally eat meals in restaurants for Lanzhou stretched noodles or other Arabic restaurants. But there are also some students consider that the halal food in China are not to their liking. In the survey, 9 students feel the halal food tastes bad. Climate The climate in Sudan differs in each part. The arid desert has high temperatures sometimes up to 50˚C and it has an average annual temperature of 21˚C. Cities in northern China are cold and dry, while coastal cities in the south are warmer and humid. Thus most Sudanese students can adapt to the climate in the cities where they live, and some students have weak adaptabilities. The three international students the author interviews from Beijing Language and Culture University and NWNU consider that the climate in their cities is much cold so that they are likely to get cold and ill. Table 7 reveals the respondents' evaluation on climate and that 47% Sudanese students cannot adapt to the climate of their cities. Means of Transportation As shown in Table 8, 45.95% students are very dissatisfied with transportation in China, indicating that almost half the students hold that it is inconvenient to travel. 25.68% of them deem that the transportation in China is convenient. That is because the cities where they are have small population density, the frequency of using transportation is not high, and traffic jams rarely occur. But in big cities, there is a high population density and a high travel demand, so traffic jams occur frequently. 
Especially in certain periods, transportation is very inconvenient. For example, during rush hours on working days, it is inevitable to be crowded by bus, subway or taxi. And during such traditional festivals as the Spring Festival, the Mid-Autumn Festival and the National Day, it is extremely hard to buy a high-speed rail ticket. Many people had to choose a slower way like long-distance buses as they cannot scramble for the tickets. Campus Environment Factor Sudan's schools are very different from Chinese schools. Although each school has dormitories, the public facilities are not so good. Sudanese students pay more attention to campus environment after coming to China. Table 9 represents their opinions, i.e. nearly half the students are dissatisfied with the campus environment of their schools. According to a further investigation, it is found that they are mainly not satisfied with the conditions of their dormitories. Class Adaptation Factor Sudanese students are very dependent on teachers and they need the guidance of teachers when doing everything. Each time an activity is held, they will need their mentors, or they will not rehearse on their own. Sudanese student generally don't like too much homework, and homework has been always one of their troubles. However, homework is very common for Chinese students. From Table 10, it can be seen that Sudanese students in China don't feel liking going out. Over half of them prefer to stay in the dormitories, and they neither go out to amuse themselves nor work outside. Based on the author's further understanding, many Sudanese students feel that they have high pressure and much homework, so they opt to stay in dormitories for learning. Teaching Quality Factor FFrom Table 11, it can be seen that more than half of Sudanese students are satisfied with the teaching quality of their schools. But they feel stressed out after coming to China and deem that Chinese is very difficult in listening, speaking, reading and writing. This is mainly because Sudanese students are heavily dependent on their teachers. Before studying in China, they lack initiative and motivation in learning and depend much on their teachers. However, teaching methods in China are a bit different, and teachers expect students to do by themselves and give them more space in learning. Therefore, Sudanese students will feel and stressed after coming to China. Management of International Students In recent years, the number of foreign students has increased year by year, and the demand for related services has increased as well. But related service management agencies and personnel have not increased accordingly. Most international students feel discontented with management systems of colleges and universities. From Table 12, it can be seen that over half of Sudanese students think that school management systems in China are very rigid. For instance, too many absences will affect their exam and the students may be even expelled from school. Besides, students are not allowed to return late to dormitories that will close after 11 o'clock. Though these regulations are stipulated for safety of international students, some of them will regard them as a restriction of personal freedom and cannot understand. On the contrary, some schools are not strict enough to manage international students in China, so it becomes common for many international students to be late for or absent from classes. 
It is known that China has no speciality as international students management, and administrative staff for internal students management are currently trained discontinuously. So there is a lack of professional management talents. Influencing Factor in Interpersonal Communication Adaptation The difficulty of making friends for Sudanese students is proportion to the time they've spent in China. Those who have come to China for a long time are much more willing to join the circle of Chinese friends. In addition, men students are more likely to make friends from the perspective of gender distribution. From Table 13, it can be seen that Sudanese students contact more with foreign students, and the number of students who interact with Chinese students is also large. However, they seldom associate with their natives since they reckon that it is beneficial to learn Chinese. Therefore, a harmonious relationship between Sudanese students and the locals contributes to their cross-cultural adaptation in China. Communicative Culture Difference The international students are curious about China before coming to China. But there will be also a cultural shock and physical and mental inadaptation in a new environment as values and ideas inherent in their own parent environment will have a certain impact on them. This causes that the international students will opt to avoid or neglect when coming across different cultures. For Sudanese students, cultural difference has affected their lives in China. Their understanding of the new culture is limited to the cognitive level, and there will be many unpredictable situations in real life. From Table 14, it can be seen that most of Sudanese students adopt a positive attitude to deal with cultural conflicts. The Chinese cultural education they've received in Sudan are entirely divergent from the real life. The cultural knowledge they've learned from classes is relatively basic and shallow, but the cultural connotation in real life needs to be continuously explored and felt. Personal Characters Due to personality, age, adaptability and other factors, it will be difficult for foreign students to have interpersonal communication. The students who are outgoing and open-minded have more opportunities, while those who are introverted and reticent will have less opportunities. From Table 15, it can be seen that the biggest problem Sudanese students confront after coming to China is interpersonal communication barrier. According to the author's findings, most Sudanese students under 20 are introverted and unwilling to go out and take part in activities. They also don't feel like actively contacting with others. With the addition of limited Chinese proficiency, the difficulties in this area are even greater. Influencing Factor in Mental Adaptation Mental adaptation refers to personal adaptability in cognition, emotion, attitude and mental health when one is in a new environment. Many international students have lived in their native language for a long time, and some of their experiences have left a deep mark in their hearts and cannot be changed. In a new environment, both values and way of thinking will produce a certain shock to them as well as psychological impact. Gender Sudan is a relatively conservative nation, and Islam provides that men and women should keep a certain distance. Many students will observe China's customs, especially for boys. Men students will have more chances to make friends and go out. Women students are not. 
They have to wear headscarves when going out, and their relationship with classmates is normal as Islam stipulates that men and women can only be together in a legal relationship. As a result, Sudanese girls are more conservative than Sudanese boys whose parents are not against their studying abroad. Age Factor The older international students who come to study in China, the stronger their psychological adaptability, while the younger they are, the weaker their psychological adaptability. The survey reveals that, most of Sudanese students in China are young people who come to China after graduating from high school. There are no older students. The author has interviewed 3 Sudanese students under 20 from South China University of Technology. They often think of their families and friends, and sometimes suffer from insomnia at night. Sometimes they will want to go home. This shows that international students' adaptability has a lot to do with their age. conducive to both employment and business startups in the future. At present, many Chinese enterprises invest in Sudan, mainly in the oil and agricultural fields. Therefore, Sudanese students learn Chinese mainly for the purpose of a better job selection in the future. Time Factor As they stay longer in China, the international students will become more informed of surroundings. The cross-cultural adaptability of international students who have been in China for 3 to 4 years has been remarkably improved than that of students who have been in China for 1 to 2 years. Their mental adaptability will be influenced by how long they stay in China. The Sudanese students who have just come to China will feel anxious, scared, and miss their families, while those who have been in China for a long time have adapted to the living environment of the cities where they live, so their anxiety will also decrease accordingly. International students will have few difficulty in adapting to other cultures if they stay in China for a long time. Therefore, all the schools should organize rich and colorful culture experience activities for the new comers from foreign countries so as to help them enhance their mental adaptability. Principles of Persuasion Principles of persuasion require teachers to be patient and persuasive when educating students and start with raising students' awareness, mobilize students' initiative to make them positive. Principles of persuasion have two aspects included. One is to find out the root of problems, and the other is to give guidance to them so as to help them form good habits. These two aspects complement each other, and neglecting neither of them will achieve the purpose of our education. Principle of Respect When going to a new country, foreign students should be strict with themselves and respect the laws and regulations of that country. They must also observe the principle of respect and respect their teachers and schoolmates in classes to build favorable teacher-student relationship. Moreover, the international students should also respect China's etiquette and traditional festivals including the Spring Festival, Qingming Festival and the Dragon Boat Festival as well as customs of each festival. Principle of Understanding The principle of understanding is an indispensable one for international students in life and they also have to follow this principle. 
China has a vast land and abundant resources and a profound culture, and also each place has its own unique customs and habits, so the international students need to keep to the principle of understanding in both class and real life and improve their own quality. Countermeasures for Persuasion In order to improve the cross-cultural adaptability of international students in China, relevant training should be given to them before they come to China. Thus their adaptability will be strengthened, and then they can be able to better solve the problem of cross-cultural adaptation after coming to China. On the Level of Colleges and Universities Many Sudanese students have affirmed the efforts their schools have made. For instance, there are no classes scheduled on Friday afternoon so that Muslim students worship on that day without affecting their studies; schools help new freshmen with troubles in eating, shelter and means of transportation; schools also hold culture week activities to invite the students to display their own cultures so that those from different countries can better understand their own customs. In addition, colleges and universities can also implement some other specific measures: hold cross-cultural activities or set up relevant courses for new international students; organize exchange activities between Chinese students and foreign students to help foreign students enhance their interpersonal communication capability; add language tutoring courses for freshmen and regularly offer some cultural tutoring classes to help them build up confidence and increase interest. Cultural tutoring courses is beneficial to have a deep understanding of Chinese culture, cultivate international students' communicative ability, and decrease misunderstanding and obstacles brough about because of cultural difference. On an Individual Level On an individual level, foreign students should develop favorable learning and living habits in their daily life, and work out some short-term, medium-term and long-term learning goals to realize them. They also have to improve their will-power, overcome difficulties and promote their own quality. Many Sudanese students have not accepted any training on cross-cultural adaptation before coming to China, so their psychological adaptability is weak. Therefore, foreign students had better read some works and textbooks on cross culture and watch some relevant videos before coming to China, which will contribute to improving their cross-cultural adaptability. Others 2) How do you feel when you first come to China?
v3-fos-license
2017-04-19T03:15:15.524Z
2013-04-08T00:00:00.000
11280391
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0058350&type=printable", "pdf_hash": "586f34b1944f2796fb3b2bad46b566a6091a1c5e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:210", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "586f34b1944f2796fb3b2bad46b566a6091a1c5e", "year": 2013 }
pes2o/s2orc
Evaluating Genome-Wide Association Study-Identified Breast Cancer Risk Variants in African-American Women Genome-wide association studies (GWAS), conducted mostly in European or Asian descendants, have identified approximately 67 genetic susceptibility loci for breast cancer. Given the large differences in genetic architecture between the African-ancestry genome and genomes of Asians and Europeans, it is important to investigate these loci in African-ancestry populations. We evaluated index SNPs in all 67 breast cancer susceptibility loci identified to date in our study including up to 3,300 African-American women (1,231 cases and 2,069 controls), recruited in the Southern Community Cohort Study (SCCS) and the Nashville Breast Health Study (NBHS). Seven SNPs were statistically significant (P≤0.05) with the risk of overall breast cancer in the same direction as previously reported: rs10069690 (5p15/TERT), rs999737 (14q24/RAD51L1), rs13387042 (2q35/TNP1), rs1219648 (10q26/FGFR2), rs8170 (19p13/BABAM1), rs17817449 (16q12/FTO), and rs13329835 (16q23/DYL2). A marginally significant association (P<0.10) was found for three additional SNPs: rs1045485 (2q33/CASP8), rs4849887 (2q14/INHBB), and rs4808801 (19p13/ELL). Three additional SNPs, including rs1011970 (9p21/CDKN2A/2B), rs941764 (14q32/CCDC88C), and rs17529111 (6q14/FAM46A), showed a significant association in analyses conducted by breast cancer subtype. The risk of breast cancer was elevated with an increasing number of risk variants, as measured by quintile of the genetic risk score, from 1.00 (reference), to 1.75 (1.30–2.37), 1.56 (1.15–2.11), 2.02 (1.50–2.74) and 2.63 (1.96–3.52), respectively, (P = 7.8×10–10). Results from this study highlight the need for large genetic studies in AAs to identify risk variants impacting this population. Introduction Breast cancer is one of the most common malignancies diagnosed among women worldwide, including women of African descendent. African American (AA) women experience a disproportionate burden of breast cancer. Age-adjusted mortality rate of this cancer is more than 40% higher in AAs than in Europeanancestry populations. AA women tend to be diagnosed with breast cancer at a younger age and with more aggressive types of the disease, such as ER-(estrogen receptor negative) and ER2/PR2/ HER2-(estrogen receptor negative, progesterone receptor negative, HER2 expression negative) breast cancer. Within AAs, women having a higher African ancestry level, estimated by ancestry informative markers (AIMs), have been shown to have an increased likelihood of ER2/PR-versus ER+/PR+ breast cancers [1]. However, such association was not observed in the Women's Contraceptive and Reproductive Experiences (CARE) study [2]. To date, four high-penetrance genes (BRCA1, BRCA2, TP53, and PTEN) and four moderate-penetrance genes (CHEK2, ATM, BRIP1, and PALB2) have been discovered for breast cancer [3]. Candidate gene studies have largely failed to identify lowpenetrance loci which can be robustly replicated in other studies [3]. Genome-wide association studies (GWAS) have emerged as the most widely used approach to identify genetic variants for complex diseases [4]. Since 2007, 67 common genetic susceptibility loci have been discovered, including 25 from several earlier GWAS [5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21] and 42 from a recent international Collaborative Oncological Gene-Environment Study (COGS) [22]. 
However, except the 5p15/TERT locus which was discovered among AA women [20], all other risk variants initially were identified in studies conducted in European or Asian descendants. Given the considerable differences in genetic architecture, including allele frequencies, linkage disequilibrium (LD) structure, and genetic diversity between the African-ancestry genome and genomes of Asian-and European-ancestry populations [23], it is important to investigate whether GWAS-identified variants are associated with breast cancer risk in African-ancestry populations. This investiga-tion not only assesses the generalizability of initial GWAS findings, but also provides valuable data to guide fine-mapping efforts in the search for causal variants. In this current study, we evaluated risk variants in all 67 breast cancer loci identified to date in an AA population of 1,231 cases and 2,069 controls. Materials and Methods This study uses resources from the Southern Community Cohort Study (SCCS) and the Nashville Breast Health Study (NBHS). The NBHS is a population-based case-control study [10]. Incident breast cancer cases were identified through the Tennessee State Cancer Registry and a network of major hospitals that provide medical care for patients with breast cancer. Controls were identified mostly via random-digit dialing of households in the same geographic area as cases and frequency-matched to cases on age (5-year group). All participants were phone-interviewed to obtain information related to personal and family medical history, and other lifestyle factors. A total of 437 cases and 252 controls from the NBHS who provided exfoliated buccal cell samples were included in the project. The SCCS is a prospective cohort study initiated in 2002 investigating racial disparities in the risk of cancer and other chronic diseases [24]. SCCS includes approximately 86,000 participants with two-thirds being AAs. In the SCCS, participants completed a comprehensive, in-person, baseline interview or completed a study questionnaire covering various aspects of health conditions, behavioral factors, personal and family medical history, and other lifestyle factors. In the SCCS, 679 breast cancer cases (217 incident and 462 prevalent cases) were included in the project. They were selected from those who were diagnosed with breast cancer and provided a blood or buccal cell sample. In the SCCS, controls (n = 680) were selected randomly from those who were cancer-free and frequencymatched to cases in a 1:1 ratio on age at enrollment (61 year), recruitment method, and sample type (blood/buccal cell). Ethics Statement Written, informed consent was obtained from all participants prior to interview, and the study protocols have been approved by Institutional Review Boards at Vanderbilt University (for NBHS and SCCS) and Meharry Medical College (for SCCS). Genotyping for samples described above was conducted using the protocol for the COGS Project as described elsewhere [22]. In brief, a custom Illumina Infinium BeadChip which contains 211,155 SNPs was genotyped. Individuals were excluded for any of the following reasons: 1) genotypically not female, 2) call rate ,95%, 3) low or high heterozygosity (P,10 -6 ), 4) genotyping not concordant with previous data, 5) duplicates or ''cryptic'' duplicates, 6) first-degree relative, 7) ethnic outliers based on a subset of 37,000 uncorrelated markers which passed QC. 
SNPs were excluded for any of the following reasons: 1) call rate ,95%, 2) deviated from HWE in controls at P,10 -7 , 3) genotyping discrepancies in more than 2% of duplicate samples. Data cleaning was conducted within the whole COGS Project. After QC, a total of 199,961 SNPs for 1,116 cases and 932 controls were included in the dataset. We then performed principal component analyses (PCA) using a set of 4,613 uncorrelated SNPs (neighboring distance .500 kb, MAF .0.2, r 2 ,0.1, and call rate .99%). Five additional participants (three cases and two controls) were excluded due to .6 s away from the means of PCA1 and PCA2. In total, 1,113 cases and 930 controls from the SCCS and NBHS were successfully genotyped by the COGS SNP array. In our previous project, nine GWAS-identified SNPs were genotyped using Taqman/Sequenom in 810 cases and 1,784 controls from the NBHS and SCCS [25]. Among them, 118 cases and 1,139 controls were not included in the study using COGS SNP array. Data for these nine SNPs, including rs13387042, rs10941679, rs889312, rs2046210, rs13281615, rs1219648, rs2981582, rs3817198, and rs3803662, were combined from the data genotyped through Taqman/Sequenom [25] and data newly obtained using COGS SNP array. In total, 1,231 cases and 2,069 controls from the SCCS (743 cases and 1,797 controls) and NBHS (488 cases and 272 controls) with genotype data available were included in the final analyses. Statistical Analysis Individual African ancestry level was estimated from 612 AIMs included in the COGS SNP array using the program frappe (http://med.stanford.edu/tanglab/software/frappe.html), which implements an Estimation-Maximization algorithm for simultaneously inferring each individual's ancestry proportion and allele frequencies in the ancestral populations [26]. Associations between individual SNP and breast cancer risk were assessed using odds ratios (ORs) and 95% confidence intervals (CIs) derived from logistic regression models and adjusted for age and study site. In the present study, data are available for ER, PR and Her2 among 564, 555, and 250 breast cancer women, respectively. Subgroup analyses were conducted within breast cancer subtypes including ER+, ER2, and ER2/PR2/HER22. Principal component analyses (PCA) were conducted based on 4,613 uncorrelated SNPs using EIGENSTRAT [27]. The first ten principal components were included in the logistic regression model to test association for the SNPs in the present study. To evaluate the combined effect of SNPs on breast cancer risk, we created a weighted genetic risk score (GRS) for each study participant by multiplying the number of risk alleles (0/1/2) of each SNP by the weight (log scale of the per-allele OR derived from the current study) for that SNP, and then summing them together. Since data for a complete set of SNPs were only available for the 1,110 cases and 929 controls genotyped using COGS SNP array, GRS analysis was conducted among these subjects. We constructed a GRS using SNPs that showed a statistically significant association with breast cancer risk in this study. In the present study, ten SNPs were associated with breast cancer (P,0.1) with direction of association consistent with previous reports. SNP rs1219648 was not included in the COGS SNP array and only 66% women of the COGS subjects had data available for this SNP, genotyped in our previous project through Sequenom (23). SNP rs13387042 was genotyped in both by the COGS SNP array and Sequenom (23). 
This SNP, however, did not reach P,0.1 in the samples analyzed using COGS SNP array and thus was not included in the GRS analyses. Thus, these two SNPs, rs1219648 and rs13387042, were excluded in GRS analyses. The remaining eight SNPs, rs10069690, rs999737, rs8170, rs17817449, rs13329835, rs1045485, rs4849887, and rs4808801 were included in the GRS analyses. All statistical analyses were conducted in SAS, version 9.3, with the use of two-tailed tests. Results The distributions of demographic characteristics and known breast cancer risk factors for cases and controls are shown in Table 1. Cases were more likely to have a family history of breast cancer, an earlier age at menarche, fewer live births, older age at first live birth and high body mass index. African Ancestry Level and Breast Cancer We did not find any statistically significant association between African ancestry level and breast cancer risk. On average, cases had 83.22% African ancestry, and controls had 83.86%. No significant association was observed between African ancestry level with breast cancer subtype, either. The African ancestry proportion was 82.50%, 82.74%, and 82.25% for ER+/PR+, ER2/ PR2, ER2/PR2/HER22 cases, respectively (Table S1). However, difference was observed between the two study cohorts with a higher African ancestry level in the SCCS than in the NBHS. On average, the African ancestry level is 85.23% and 80.14% in controls, and 84.50% and 81.22% in cases from the SCCS and NBHS, respectively. Evaluation of SNPs in 25 Previously Reported Loci Two SNPs have been discovered in the 10q21/ZNF365 locus: rs10995190 in Europeans [21] and rs10822013 in East Asians [17]. These two SNPs are not in LD based on data from HapMap Africans (r 2 = 0.001), and both of them were included in the present study. Similarly, in the locus 16q12/TOX3, SNPs rs4784227 and rs3803662 were discovered in Asians [14] and Europeans [7], respectively. SNP rs4784227 was not included in the COGS SNP array. SNP rs17271951, which was in strong LD with rs4784227 based on HapMap Africans (r 2 = 1.0), was used as a substitute. SNPs rs17271951 and rs3803662 are not in LD in HapMap Africans (r 2 = 0.03), and both of them were included in the final analyses. In the 10q26/FGFR2 locus, both rs1219648 and rs2981582 were discovered in Europeans but were in moderate LD in HapMap Africans (r 2 = 0.25). For the 6q25/TAB2 locus, rs9485370 was used to replace the originally reported SNP rs9485372 [28] which was not included in the COGS SNP array (r 2 = 1 in HapMap Africans). For the other 21 loci, one index SNP (reported in previous GWAS) per locus was selected. Therefore, in total, 28 SNPs in previously reported 25 loci were investigated in the present study. Five of them were deviated from Hardy-Weinberg test with P,0.05, including rs1045485, rs10995190, rs13281615, rs889312, and rs999737. Of the 28 SNPs, 19 SNPs had an OR of breast cancer risk in the same direction as initial reports. This is higher than expected under the null hypothesis (P = 0.04, binomial sign test). Five SNPs were nominally statistically significant (P#0.05) in the same direction as previously reported ( Table 2 and Table S2). SNP rs10069690 in the 5p15/TERT locus, previously discovered in African-ancestry population, showed a moderatestrong association in the present study with OR (95% CI) 1.19 (1.05-1.36) and P-value 0.007. 
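The weighted genetic risk score described above can be illustrated with a short script. The published analyses were carried out in SAS; the R sketch below is only an illustration with hypothetical object names (geno for the 0/1/2 risk-allele counts, weights for the per-allele log odds ratios of the GRS SNPs, and pheno for case status, age and study site).

# Weighted GRS: sum over SNPs of (risk-allele count) x (log per-allele OR)
grs <- as.matrix(geno[, names(weights)]) %*% weights

pheno$grs_quintile <- cut(grs,
                          breaks = quantile(grs, probs = seq(0, 1, 0.2)),
                          include.lowest = TRUE,
                          labels = paste0("Q", 1:5))

# Odds ratios by quintile (lowest quintile as reference), adjusted for age and study site
fit <- glm(case ~ grs_quintile + age + site, data = pheno, family = binomial())
exp(cbind(OR = coef(fit), confint.default(fit)))[grep("grs_quintile", names(coef(fit))), ]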
Notably, the MAF is very low (0.047 in controls) for SNP rs999737 at 14q24/RAD51L1; however, a strong association was observed for this SNP, with OR (95% CI) 1.59 (1.15-2.19) and P-value 0.005. Allelic ORs (95% CIs) for the other three SNPs were 1.17 (1.04-1.33) for rs13387042 (P = 0.011), 1.17 (1.04-1.33) for rs1219648 (P = 0.011), and 1.25 (1.07-1.47) for rs8170 (P = 0.006). The CASP8 SNP (rs1045485), identified previously through a candidate gene approach, has a low frequency (MAF = 0.065) in AAs. A marginally significant association was found for this SNP, with OR (95% CI) of 1.25 (0.96-1.62) and P-value 0.096. A significant association was observed for SNP rs4973768 at 3p24/SLC4A7 in this study, however, in the opposite direction from that previously reported. No association was observed with the other 21 SNPs. Among them, the minor allele of rs10771399 at 12p11/PTHLH is very rare in AAs, with MAF = 0.034 in controls. Evaluation of Associations by Breast Cancer Subtypes Association results between SNPs and risk of breast cancer by subtype, including ER+, ER−, and ER−/PR−/HER2−, are presented in Table 3. Only SNPs that showed a significant (P≤0.05) or marginally significant (P≤0.1) association with any of these three subtypes of breast cancer are presented. Three SNPs that were not associated with overall breast cancer showed nominal statistical significance (P≤0.05) in the analysis by subtype, in the same direction as previously reported. SNPs rs1011970 at 9p21/CDKN2A/2B and rs941764 at 14q32/CCDC88C were associated with ER+ breast cancer, with ORs (95% CI) 1.27 (1.05-1.54) (P = 0.014) and 1.26 (1.02-1.56) (P = 0.032), respectively. SNP rs17529111 at 6q14/FAM46A was associated with ER−/PR−/HER2− tumors, with OR (95% CI) of 1.97 (1.02-3.82) (P = 0.043). Differences in the strength of the association were also observed for three other SNPs across breast cancer subtypes. SNP rs17817449 at 16q12/FTO showed an association with ER+ tumors, with OR (95% CI) of 1.32 (1.09-1.60), but not with ER− or ER−/PR−/HER2− tumors. Both rs10069690 at 5p15/TERT and rs999737 at 14q24/RAD51L1 showed the strongest association for ER−/PR−/HER2− breast cancer, though associations were also observed for ER+ and ER− cancers. Genetic Risk Score (GRS) Analyses for Overall Breast Cancer Significant associations were observed between GRS and risk of breast cancer (Table 4). ORs (95% CIs) for overall breast cancer risk across increasing quintiles of GRS were 1.00 (reference), 1 Discussion In the present study, we investigated associations of 70 index SNPs in 67 breast cancer susceptibility loci identified to date in up to 1,231 cases and 2,069 controls of AA women. We found that seven SNPs were significantly associated (P<0.05) and three SNPs were marginally significantly associated (P<0.10) with overall breast cancer risk in the same direction of association as previously reported. Three additional SNPs showed a significant association (P<0.05) when stratified by breast cancer subtype. GRS analyses showed significant associations with the risk of overall breast cancer or breast cancer subtypes. In the present study population, on average, approximately 83% of genetic ancestry is of African origin, which is similar to the estimates in other studies [1,2,29,30]. Women in the SCCS had a higher African ancestry level than those in the NBHS.
In the NBHS, most women were recruited in Tennessee, while SCCS women were recruited in 12 southern states, including Alabama, Arkansas, Florida, Georgia, Kentucky, Louisiana, Tennessee, Mississippi, South Carolina, North Carolina, Virginia, and West Virginia. We did not find a significant difference in African ancestry level between breast cancer cases and controls or across breast cancer subtypes. These results are consistent with the recent finding in the CARE Study [2]. However, in another study, a significant association was observed between genetic ancestry and ER+, PR+, or localized breast tumors [1]. In the present study, data for ER, PR, and HER2 were available for only a portion of the subjects. Therefore, statistical power with regard to the breast cancer subtype analyses may be limited. Among the ten SNPs that showed a significant or marginally significant association with overall breast cancer risk in the present study, six have been investigated in previous studies of African-ancestry populations [20,25,[31][32][33][34][35][36]. SNP rs10069690 (5p15/TERT) was originally discovered in a GWAS of AAs [20], so it is expected that this SNP should be replicated in the present study. For SNP rs1219648 at 10q26/FGFR2, an association was observed in our previous study [25], the Carolina Breast Cancer Study (CBCS) [32], and the Women's Health Initiative (WHI) study [35], but not in the Women's Insights and Shared Experiences study [36]. SNP rs13387042 was replicated in our previous study [25] and in a consortium study of 3,016 cases and 2,745 controls [33], but not in the WHI study [35], the CBCS [32], nor another pooled study [34]. SNP rs8170 at 19p13/BABAM1 was only investigated in a pooled study of Africans and AAs, and no association was observed [34]. Both the CASP8 SNP (rs1045485) and the RAD51L SNP (rs999737) have a MAF <5% in African-ancestry populations. Therefore, it is not surprising that these two SNPs were not replicated in previous studies of African-ancestry populations [32][33][34]. The other four SNPs that showed associations with overall breast cancer in the present study, rs4849887, rs17817449, rs13329835 and rs4808801, were recently discovered [22] and have not been evaluated in previous studies of African-ancestry populations. We did not replicate associations for the other 60 reported SNPs with the risk of overall breast cancer. Inconsistent results were reported for some of them in previous studies of African-ancestry populations. For example, SNP rs3803662 at 16q12/TOX3 was replicated in the WHI [35], but not in the Black Women's Health Study (BWHS) [37], the CBCS [32], and others [25,34]. In addition, a significant association has been identified for this SNP in the opposite direction from that previously reported [9]. SNP rs2981582 (10q26/FGFR2) was significantly associated with breast cancer risk in two studies [32,33], but not in the other studies [25,35,36,38]. (Table 3. Association of breast cancer risk with selected GWAS-identified SNPs located in reported breast-cancer susceptibility loci in African Americans, by breast cancer subtypes.) The WHI study reported a significant association for rs10941679 at 5p12/MRPS30 [35], and one study showed an association for rs865686 at 9q31/KLF4 [33]; however, other studies did not replicate these two SNPs [25,34,39]. In general, our finding that approximately 90% of index SNPs were not replicated in AAs is consistent with results from previous studies in African-ancestry populations [25,[32][33][34][35][36][37][38][39].
Rebbeck et al [36] did not find any association for three investigated SNPs. Huo et al [34] evaluated 19 genetic loci and none of them were replicated. Five of the seven investigated SNPs in the CBCS [32], 21 of the 22 investigated SNPs in the WHI [35], and 14 of 19 SNPs in a consortium study [33] were not replicated. Because of the large difference in genetic architecture between African-ancestry and European/Asian ancestry populations, failure to replicate most of the reported SNPS in AAs is not surprising. Most, if not all, index SNPs identified in GWAS are associated with breast cancer risk through their strong LD with causal variants. African-ancestry populations have shorter LD and more genetic variations than European/Asian ancestry populations and may have different SNPs in LD with the causal variant. This may be the major reason why index SNPs are not replicated in African descendants. For example, in the BWHS, originally reported index SNPs rs10941679 and rs3803662 were not replicated, but other SNPs in these regions, rs16901937 and rs3104746, were associated with breast cancer [37,39]. It has been reported that other markers were identified in AAs to better capture the association signal than the index SNPs originally discovered in the 2q35/TNP1, 5q11/MAP3K1, 10q26/FGFR2, and 19p13/BABAM1 loci [33]. Second, allele frequencies for the index SNPs differ considerably across ethnic groups. Many index SNPs have lower MAF in AAs than in Europeans/Asians. Even if the effect size of the index SNP is the same across populations, larger sample size is required to detect association in AAs due to the lower MAF. Third, the vast majority of SNPs were originally discovered among European descendants, who have a much higher proportion of ER+ than ER-breast cancer. Because of this, most of the reported risk variants are, in general, more strongly associated with ER+ than ER-cancer [22]. African-ancestry women have a higher proportion of ER-breast cancer than European-ancestry women; this may be another reason for the non-replication in AAs. To our knowledge, this is the first study in AAs that has evaluated index SNPs in all breast cancer susceptibility loci identified to date. However, the sample size in our study is relatively small, especially when stratified by breast cancer subtype. Some of the null associations observed in this study could be due to inadequate statistical power. Meta-analysis by pooling together all existing data in the AA populations will increase the statistical power to evaluate the effects of these variants in AAs. The other limitation of this study is that we only investigated index SNP in each locus. Large-scale fine mapping studies are needed to identify genetic risk variants at these loci in African-ancestry populations. Such work will be very helpful to identify causal variants for breast cancer. In summary, in this African-ancestry population study, we replicated approximately 10% of index SNPs in 67 breast-cancer susceptibility loci. Heterogeneity was observed across the breast cancer subtype. These results show the complexity in applying GWAS findings to African-ancestry populations. Large-scale studies in AAs are needed to discover genetic risk variants which impact this population.
Effect of Body Mass Index on Breast Cancer during Premenopausal and Postmenopausal Periods: A Meta-Analysis Objective There is no universal consensus on the relationship between body mass index (BMI) and breast cancer. This meta-analysis was conducted to estimate the overall effect of overweight and obesity on breast cancer risk during pre- and post-menopausal period. Data Sources All major electronic databases were searched until April 2012 including Web of Knowledge, Medline, Scopus, and ScienceDirect. Furthermore, the reference lists and related scientific conference databases were searched. Review Methods All prospective cohort and case-control studies investigating the association between BMI and breast cancer were retrieved irrespective of publication date and language. Women were assessed irrespective of age, race and marital status. The exposure of interest was BMI. The primary outcome of interest was all kinds of breast cancers confirmed pathologically. Study quality was assessed using the checklist of STROBE. Study selection and data extraction were performed by two authors separately. The effect measure of choice was risk ratio (RRi) and rate ratio (RRa) for cohort studies and odds ratio (OR) in case-control studies. Results Of 9163 retrieved studies, 50 studies were included in meta-analysis including 15 cohort studies involving 2,104,203 subjects and 3,414,806 person-years and 35 case-control studies involving 71,216 subjects. There was an inverse but non-significant correlation between BMI and breast cancer risk during premenopausal period: OR = 0.93 (95% CI 0.86, 1.02); RRi = 0.97 (95% CI 0.82, 1.16); and RRa = 0.99 (95% CI 0.94, 1.05), but a direct and significant correlation during postmenopausal period: OR = 1.15 (95% CI 1.07, 1.24); RRi = 1.16 (95% CI 1.08, 1.25); and RRa = 0.98 (95% CI 0.88, 1.09). Conclusion The results of this meta-analysis showed that body mass index has no significant effect on the incidence of breast cancer during premenopausal period. On the other hand, overweight and obesity may have a minimal effect on breast cancer, although significant, but really small and not clinically so important. Introduction Breast cancer is the most common cancer in women both in the developed and the developing countries, comprising 16% of all female cancers. It is estimated that breast cancer led to 519,000 death in women in 2004 [1]. Although breast cancer is thought to be a common cancer in the developed countries, a majority (69%) of all breast cancer deaths occurs in developing world. Indeed, increase life expectancy, increase urbanization and adoption of western lifestyles have increased the incidence of breast cancer in the developing countries [1,2]. A recent study indicated that breast cancer is the leading cause of cancer and cancer related mortality in woman worldwide so that cause-specific mortality rate increases with age among postmenopausal women with hormone receptorpositive breast cancer [3]. The etiology of breast cancer is not well known. However, several risk factors have been suggested to have an influence on the development of this malignant tumor including genetic, hormonal, environmental, sociobiological and physiological factors [2]. Weight gain and obesity is another potential risk factor which may influence the incidence of breast cancer. There are numerous observational studies which have investigated the correlation between obesity and breast cancer. However the results are inconsistent. 
Some researchers believe that a body mass index greater than 30 may increase the risk of breast cancer in both pre- and postmenopausal periods [4][5][6][7], whereas others claim that obesity may reduce the risk of breast cancer during the premenopausal period but increase the risk during the postmenopausal period [8][9][10][11]. There is no universal consensus on the relationship between BMI and breast cancer. To date, a few meta-analyses have been conducted to estimate a summary measure of the effect size of overweight and obesity on breast cancer. However, these studies were limited to the English-language studies cited by Medline [12,13]. Thus, the present up-to-date meta-analysis was conducted to assess the results of both cohort and case-control studies addressing the correlation between BMI and breast cancer cited by all major international electronic databases, in order to estimate the overall effect of body mass index (BMI) on breast cancer risk. Searching We planned to include cohort and case-control studies addressing the association between body mass index and breast cancer. We developed a search strategy using and combining a set of keywords including breast cancer, body mass index, waist-hip ratio, obesity, overweight, body size, cohort studies, case-control studies, and observational studies. We searched all major electronic databases including Web of Knowledge (January 1945 to April 2012), Medline (January 1950 to April 2012), Scopus (January 1973 to April 2012), and ScienceDirect (January 1823 to April 2012). In order to find additional references, we scanned the reference lists of all retrieved studies. In addition, we contacted authors of retrieved studies for additional unpublished studies. Furthermore, related scientific conference databases were searched for unpublished data until April 2012. Criteria for including studies We included prospective cohort studies and case-control studies investigating the association between BMI and breast cancer irrespective of publication date and language. Retrospective cohort and matched case-control studies were excluded. We included apparently healthy women irrespective of age, race, and marital status. The exposure of interest was overweight and obesity assessed using BMI. The term 'BMI' is a commonly used index to classify overweight and obesity in adults and is defined as the weight in kilograms divided by the square of the height in meters (kg/m²). Based on the World Health Organization classification [14], BMI <18.5 is considered underweight, 18.5 ≤ BMI < 25 normal weight, 25 ≤ BMI < 30 overweight, and BMI ≥ 30 obese. The primary outcome of interest was breast cancer of any type confirmed pathologically. We planned to include all kinds of breast cancers irrespective of pathological characteristics and stage of the tumor. Data collection and validity assessment Two authors (ZC and ADI) read the retrieved publications separately in order to identify the studies that would meet the inclusion criteria of this review. The authors were not blinded to the authors' names of the publications, journals, and results. Any disagreements were resolved by adjudication with a third author (JP). The inter-author reliability based on the kappa statistic was 85%. Two authors (ZC and ADI) extracted the data from the included studies. The variables extracted for data analysis included study design, year and location of study conduct, sample size, number of outcomes, mean age, gender, and body mass index.
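For concreteness, the WHO cut-points quoted above translate into the following small helper; the function name and the example values are illustrative only.

```python
# WHO BMI categories (kg/m^2): <18.5 underweight, 18.5-<25 normal weight,
# 25-<30 overweight, >=30 obese.
def bmi_category(weight_kg: float, height_m: float) -> str:
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal weight"
    if bmi < 30:
        return "overweight"
    return "obese"

# bmi_category(70, 1.65)  -> "overweight" (BMI ≈ 25.7)
```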
The extracted data were entered into an electronic data sheet. In cases of missing data or need for clarification, study authors were contacted. We intended to assess the risk of bias of the included studies using the recommended STROBE checklist [15]. Two authors (ZC and ADI) assessed the studies independently. The items evaluated for judgment about cohort studies included (a) stating specific objectives of the study; (b) presenting key elements of the study design; (c) giving the eligibility criteria; (d) clearly defining the exposure (here, overweight and obesity); (e) clearly defining the outcome (here, breast cancer); and (f) explaining how loss to follow-up was addressed. The last item was evaluated only for cohort studies. Measures of exposure effect and data analysis The effect measures of choice for cohort studies were the risk ratio (RRi) and rate ratio (RRa), and that for case-control studies was the odds ratio (OR). RRi was defined as the ratio of the probability of disease in exposed people to the probability of disease in unexposed people in a cohort study. RRa was defined as the ratio of the disease rate per person-year (a statistical measure representing one person at risk of developing the disease during a period of one year) in exposed people to that in unexposed people in a cohort study. OR was defined as the odds of disease in the exposed population relative to the odds of disease in the unexposed population in a case-control study [16]. Meta-analysis was performed to obtain summary measures with 95% confidence intervals (CIs). Both Review Manager 5 [17] and Stata 11 (StataCorp, College Station, TX, USA) were employed for data analysis. Data were analyzed and the results were reported using random-effects models [18]. Heterogeneity and publication bias We explored statistical heterogeneity using the chi-squared (χ²) test at the 5% significance level (P<0.05). We quantified inconsistency across study results using the I² statistic [19]. We also estimated the between-study variance using the tau-squared (τ²) statistic [20]. We used funnel plots to investigate publication bias visually [20], as well as Begg's [21] and Egger's [22] tests to assess publication bias statistically. Description of studies We retrieved 9163 studies up to April 2012, including 8370 references through searching electronic databases, 241 references through conference databases, 546 references through checking reference lists, and six references through personal contact with studies' authors. Of the 9163 retrieved references, 2680 were excluded because of duplication, 6273 did not relate to the objective of this review, and 160 did not meet the eligibility criteria. Eventually, we included 50 studies in the meta-analysis, comprising 15 cohort studies [4][5][6]10,15,[23][24][25][26][27][28][29][30][31][32] involving 2,104,203 people and 3,414,806 person-years, and 35 case-control studies [8,9,…] involving 71,216 people. Some case-control and cohort studies evaluated breast cancer during the premenopausal period, some during the postmenopausal period, and some during both periods. Thus, some studies appear only once and others appear more than once in the forest plots. In total, however, 35 case-control and 15 cohort studies were included in the meta-analysis.
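The random-effects pooling and heterogeneity statistics described above can be illustrated with the following sketch of the DerSimonian-Laird estimator; the inputs are study-level log odds ratios and their standard errors, and any values passed in would be illustrative rather than taken from the included studies.

```python
# Random-effects meta-analysis: Cochran's Q, DerSimonian-Laird tau^2, I^2,
# and the pooled OR with a 95% CI.
import numpy as np
from scipy import stats

def pool_random_effects(log_or: np.ndarray, se: np.ndarray) -> dict:
    v = se ** 2
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed) ** 2)         # Cochran's Q
    df = len(log_or) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    mu = np.sum(w_star * log_or) / np.sum(w_star)
    se_mu = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return {"OR": np.exp(mu),
            "95% CI": (np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)),
            "I2 (%)": i2, "tau2": tau2, "P_heterogeneity": stats.chi2.sf(q, df)}
```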
Effect of exposure The effect of BMI on breast cancer risk during the pre- and postmenopausal periods was assessed using the odds ratio (OR) (Figures 1 and 2) in case-control studies and using the risk ratio (RRi) (Figures 3 and 4) and rate ratio (RRa) (not shown) in cohort studies. The results of both case-control and cohort studies showed that an increase in BMI during the premenopausal period reduced the risk of breast cancer: OR = 0.93 (95% CI 0.86, 1.02); RRi = 0.97 (95% CI 0.82, 1.16); and RRa = 0.99 (95% CI 0.94, 1.05). That means women who were overweight or obese during premenopausal ages were at lower risk of breast cancer compared to women with normal weight, although the observed inverse correlation was not statistically significant. The results of both case-control studies and cohort studies showed that overweight and obesity in the postmenopausal period slightly increased the risk of breast cancer: OR = 1.15 (95% CI 1.07, 1.24); RRi = 1.16 (95% CI 1.08, 1.25); and RRa = 0.98 (95% CI 0.88, 1.09). That means women who were overweight or obese during the postmenopausal period were at significantly higher risk of breast cancer. The effects of overweight and of obesity on breast cancer risk were also evaluated separately. According to the RRi and OR values, obese women had a lower risk of breast cancer compared to overweight women during the premenopausal period. However, the correlation was reversed during the postmenopausal period, so that obese women were at higher risk of breast cancer compared to overweight women, although the difference was not statistically significant. Heterogeneity and publication bias The between-study heterogeneity was assessed using the χ² test and the I² statistic. The results of the χ² test indicated that case-control studies were significantly heterogeneous (P<0.001). The I² statistic for the premenopausal period was 72% and that for the postmenopausal period was 80% (Figures 1 and 2). On the contrary, the results of cohort studies were homogeneous (P = 0.220). The I² statistic for the premenopausal period was 32.8% and for the postmenopausal period was 34.5% (Figures 3 and 4). Of the 42 case-control studies (Figure 1) that assessed the effect of BMI on breast cancer risk during the premenopausal period, 24 studies reported negative associations (9 of which were statistically significant) and 18 studies reported positive associations (7 of which were statistically significant). Of the 47 case-control studies (Figure 2) that investigated the effect of BMI on breast cancer during the postmenopausal period, 11 studies reported negative associations (5 of which were statistically significant) and 36 studies reported positive associations (14 of which were statistically significant). Of the eight cohort studies (Figure 3) that assessed the effect of BMI on breast cancer during the premenopausal period, six studies reported negative associations (one of which was statistically significant) and two studies reported positive associations (one of which was statistically significant). Of the 16 cohort studies (Figure 4) that investigated the effect of BMI on breast cancer during the postmenopausal period, no study reported a negative association while all 16 studies reported positive associations (9 of which were statistically significant). We assessed publication bias using the funnel plot as well as Begg's and Egger's tests. The studies scattered nearly symmetrically on both sides of the vertical line, reflecting an absence of publication bias.
The results of Begg's and Egger's tests for both OR and RRi estimated in the pre- and postmenopausal periods confirmed the absence of publication bias (Figure 5). Discussion The results of this meta-analysis revealed that BMI during the premenopausal period can decrease the risk of breast cancer by 0.07, although the association was not statistically significant. In contrast, an increase in BMI during the postmenopausal period can significantly increase the risk of breast cancer by 0.21. This evidence means that BMI is not a protective factor against breast cancer during the premenopausal period. However, BMI is a weak but significant risk factor for breast cancer during the postmenopausal period, although its effect is really small and not clinically important. (Table 1. Effect of body mass index on incidence of breast cancer by quality of the studies, menopausal period, and study design.) The stronger the association, the more likely it is that the relation is causal, while a weak association is more likely to be confounded, although a weak association does not rule out a causal connection [16]. Furthermore, an increase in BMI during the premenopausal period decreases the risk of breast cancer while increasing the risk during the postmenopausal period. This implies the presence of an interaction between BMI and menopausal period. In such a situation, the association should be assessed for each period separately, and it is not reasonable to pool the data to estimate an overall effect of BMI on breast cancer risk. Suzuki et al [13] conducted a similar meta-analysis in order to assess the effect of BMI on breast cancer risk. They retrieved 31 references, including nine cohort and 22 case-control studies indexed in Medline until December 2007. They reported that overweight during the premenopausal period would decrease the risk of breast cancer, OR = 0.80 (95% CI 0.70, 0.92), while it might increase the risk of cancer during the postmenopausal period, OR = 1.89 (95% CI 1.52, 2.36). The results of their meta-analysis were rather different from ours. One reason was that we searched and retrieved the relevant references from all major international databases, while Suzuki et al had searched only the Medline database, which might introduce selection bias into their results. Another meta-analysis on the same topic was conducted by Ryu et al [12]. They searched the Medline database up to 1999 and retrieved 12 case-control studies. They reported that overweight and obesity could increase the risk of breast cancer 1.56 times. However, they did not report the effect of body mass index on breast cancer during the pre- and postmenopausal periods separately. Furthermore, they had limited the search to the English-language literature indexed in Medline. This issue might also introduce selection bias into their results. There was evidence of heterogeneity (a small P value for the χ² test and a large I² statistic) among the results of the included studies. However, care must be taken in the interpretation of the statistical tests for heterogeneity. The χ² test has low power when the sample size is small. On the other hand, the test has high power to detect a small amount of heterogeneity that may be clinically unimportant when there are many studies in a meta-analysis [20]. Therefore, we can attribute a major part of the observed heterogeneity in the results to the number of studies (15 cohort and 35 case-control studies) included in the meta-analysis and the large sample size (2,104,203 participants in cohort studies and 71,216 participants in case-control studies).
Regardless of the effect of overweight and obesity on breast cancer, there are several well-documented risk factors for breast cancer. A familial history of breast cancer, reproductive factors associated with prolonged exposure to endogenous estrogens, such as early menarche, late menopause, late age at first childbirth are among the most important risk factors for breast cancer. Exogenous hormones such as oral contraceptive and hormone replacement therapy also exert a higher risk for breast cancer. Furthermore, alcohol use, and physical inactivity are among the modifiable risk factors for breast cancer [67]. There were a few limitations and potential biases in this metaanalysis. First, 15 studies seemed potentially eligible to be included in this meta-analysis but the full texts were not accessible. This issue may raise the possibility of selection bias. Second, we intended to assess the effect of other potential confounding variables such as onset of menarche, onset of menopause, smoking status, oral contraceptive consumption, and family history of breast cancer. However, these variables were not reported exactly in majority of the studies. Hence, we could not conduct subgroup analysis based on these variables. This issue may raise the possibility of the information bias. Despite its limitations, this meta-analysis could present strong evidence about the correlation between BMI and breast cancer by retrieving 9163 studies from all major databases and including 50 studies in the meta-analysis (15 cohort studies having 2,104,203 people and 3,414,806 personyears and 35 case-control studies involving 71,216 people). In addition, our work brought some new information about the relationship between BMI and breast cancer, including (a) consolidation of the data to obtain summary measure of odds ratio, risk ratio, and rate ratio estimates regarding the effect of BMI on breast cancer; (b) non-significant inverse correlation between overweight and obesity and the incidence of breast cancer during premenopausal period; (c); significant direct correlation between overweight and obesity and the incidence of breast cancer during postmenopausal period; (d) the impact of various variables on the correlation between BMI and breast cancer such as studies designs, period of menopause, various types of BMI, and quality of the studies. Conclusion The results of this meta-analysis showed that body mass index has no significant effect on the incidence of breast cancer during premenopausal period. On the other hand, overweight and obesity may have a minimal effect on breast cancer, although significant, but really small and not clinically so important.
Complementation of diverse HIV-1 Env defects through cooperative subunit interactions: a general property of the functional trimer Background The HIV-1 Env glycoprotein mediates virus entry by catalyzing direct fusion between the virion membrane and the target cell plasma membrane. Env is composed of two subunits: gp120, which binds to CD4 and the coreceptor, and gp41, which is triggered upon coreceptor binding to promote the membrane fusion reaction. Env on the surface of infected cells is a trimer consisting of three gp120/gp41 homo-dimeric protomers. An emerging question concerns cooperative interactions between the protomers in the trimer, and possible implications for Env function. Results We extended studies on cooperative subunit interactions within the HIV-1 Env trimer, using analysis of functional complementation between coexpressed inactive variants harboring different functional deficiencies. In assays of Env-mediated cell fusion, complementation was observed between variants with a wide range of defects in both the gp120 and gp41 subunits. The former included gp120 subunits mutated in the CD4 binding site or incapable of coreceptor interaction due either to mismatched specificity or V3 loop mutation. Defective gp41 variants included point mutations at different residues within the fusion peptide or heptad repeat regions, as well as constructs with modifications or deletions of the membrane proximal tryptophan-rich region or the transmembrane domain. Complementation required the defective variants to be coexpressed in the same cell. The observed complementation activities were highly dependent on the assay system. The most robust activities were obtained with a vaccinia virus-based expression and reporter gene activation assay for cell fusion. In an alternative system involving Env expression from integrated provirus, complementation was detected in cell fusion assays, but not in virus particle entry assays. Conclusion Our results indicate that Env function does not require every subunit in the trimer to be competent for all essential activities. Through cross-talk between subunits, the functional determinants on one defective protomer can cooperatively interact to trigger the functional determinants on an adjacent protomer(s) harboring a different defect, leading to fusion. Cooperative subunit interaction is a general feature of the Env trimer, based on complementation activities observed for a highly diverse range of functional defects. Background The envelope glycoprotein (Env) of human immunodeficiency virus type 1 (HIV-1) promotes virus entry by catalyzing direct fusion between the virion membrane and the target cell plasma membrane; similarly, Env-expressing cells can fuse with target cells to form multinucleated giant cells (syncytia). Env is synthesized as a gp160 precursor protein that assembles into homo-trimeric complexes in the endoplasmic reticulum. During transport through the secretory pathway, gp160 is cleaved in the trans-Golgi network by a furin-like protease(s) to yield the external gp120 subunit noncovalently associated with the gp41 transmembrane subunit (derived from the N-and Cregions of gp160, respectively) [1]. The functional Env spike on mature virions of HIV-1 and the related simian immunodeficiency virus consists of a homo-trimer of gp120/gp41 hetero-dimers [2]. 
Env-mediated fusion involves a strict division of labor between the two subunits: gp120 is responsible for sequential binding to specific target cell receptors, first to CD4 and then to the coreceptor (a specific chemokine receptor, typically CCR5 or CXCR4); receptor binding then triggers gp41 to promote membrane fusion. These steps involve a tightly orchestrated series of conformational changes in both Env subunits that drive the fusion process. The emerging understanding of the complexities of HIV Env/receptor interactions and the subsequent events leading to fusion/entry have been the central focus of numerous review articles over the past decade [3][4][5][6][7][8]. Xray crystallographic analyses of gp120 from HIV-1 [9] and the closely related simian immunodeficiency virus [10] have revealed that CD4 binding induces a profound rearrangement of the relatively disordered gp120 subunit to create a new surface consisting of four anti-parallel beta strands derived from discontinuous regions of the linear sequence; this highly conserved "bridging sheet", which is not present in the unliganded pre-CD4-bound state, is directly involved in binding to coreceptor [11] in conjunction with the third variable loop (V3) of gp120, which determines coreceptor specificity [12,13]. Binding of gp120 to coreceptor then triggers the fusogenic activity of gp41 in a process believed to involve insertion of the gp41 N-terminal fusion peptide (FP) into the target cell plasma membrane [14,15]. Detailed structural information is not yet available for the native state of gp41, but the structure of the final post-fusion state has been determined to be a trimer of hairpins in the form of a six-helix coiled-coil bundle [16][17][18]. A transient intermediate conformation is thought to exist in which the gp41 subunits adopt an extended triple-helix coiled-coil with the N-terminal FPs inserted into the target cell membrane. The heptad repeat (HR) segments near the external C-terminal region (HR2) then fold to insert in anti-parallel fashion into the grooves formed by the cluster of the three N-terminal heptad repeat (HR1) segments; the resulting formation of a 6-helix bundle brings the virion and target cell plasma membranes together, and provides the driving force for membrane fusion underlying HIV entry. The molecular complexity of the HIV entry process presents a variety of targets for novel antiviral agents [19][20][21][22]; the T-20 peptide (enfuvirtide, Fuzeon) targeting the gp41 intermediate conformation is the first-in-class HIV-1 fusion inhibitor [23], and the recently approved maraviroc is the first-inclass inhibitor that binds to the CCR5 coreceptor and blocks the gp120 interaction [24]. While each gp120/gp41 hetero-dimeric complex contains all the determinants required for fusion, it is possible that molecular interactions between complexes within the trimer influence Env function. In a previous study we used a quantitative vaccinia expression-based cell fusion assay to demonstrate that individual subunits within the Env trimer can interact cooperatively during fusion [25]. By coexpressing Env proteins with defects in different essential determinants, we found that functional complementation could occur between subunits within a mixed trimer. In the present report, we show that subunit complementation is a general capacity of the HIV-1 Env trimer, though its efficiency and detectability are dependent on the particular defective variants examined and the assay systems employed. 
The results are discussed in terms of potential biological implications for Env function and HIV neutralization. Construction and expression of Env variants For vaccinia virus expression-based cell fusion assays, HIV-1 Envs were transiently expressed from pSC59-based plasmids under control of a strong synthetic vaccinia virus early/late promoter [26]. Previously described plasmids [27,28] were used to express wild-type Envs from the following HIV-1 strains: LAI [29] (LAV isolate, unless indicated otherwise), plasmid pCB-41; SF162, plasmid pCB-32; Ba-L, plasmid pCB-43, and CM235, plasmid pCB-52. In addition, a Kpn I-Xho I fragment encoding wild-type YU-2 Env was substituted into a variant of pCB-41 containing a unique Xho I site at the 3' end of Env (pKS-9) to create the plasmid pKS-10. As a negative control, an uncleaveable (Unc) mutant form of LAI (IIIB isolate) was used (plasmid pCB-16). For MAGI cell HIV infectivity assays, virus was expressed from the pNL4-3 proviral clone [35] encoding wild-type LAI Env (LAV isolate). pNL4-3 containing a frame-shift mutation at the Nhe I site within Env (pNL4-3Δenv) [33] was used as a negative control. Nhe I-Bam HI fragments encoding the LAI-FP26 and LAI-BS mutants [25] were subcloned into pNL4-3 to create pKS-11 (encoding NL4.3-FP26) and pKS-12 (encoding NL4.3-BS), respectively. The phenotype for each construct, both when expressed alone and in complementation experiments, was confirmed using two independent plasmid clones constructed in parallel. Vaccinia virus-based cell fusion assay Env-mediated cell fusion activity was measured using a quantitative vaccinia-based reporter gene assay as described previously [36,37]. Each vaccinia virus was used at a multiplicity of infection of 10. Target cells were prepared by co-infecting NIH 3T3 cells with vaccinia virus recombinant vCB21R-LacZ containing the E. coli lacZ reporter gene linked to the T7 promotor [38], plus vaccinia recombinants encoding the following cDNAs linked to vaccinia early/late promoters: CD4, vCB-3 [39] and the designated coreceptor CCR5, vHC-1 [40] or CXCR4, vCBYF1-fusin [41]. Effector cells were prepared by transfecting HeLa cells with the above-described plasmids containing the Env genes linked to a strong synthetic vaccinia early/late promoter and infecting with vaccinia recombinant vP11T7gene1 encoding bacteriophage T7 RNA polymerase [42]. Transfection was performed with DOTAP (Boehringer Mannheim, Indianapolis, IN); the total amount of DNA was held constant at 5 μg DNA per 25 cm 2 flask, in both single-transfection and cotransfection experiments. Effector and target cells were incubated overnight at 31°C to allow expression of the recombinant proteins. After these cells were washed by centrifugation, they were mixed in equal numbers in duplicate wells of a 96-well plate (2 × 10 5 of each per well) and incubated for 2.5 hr at 37°C. Fusion reactions were terminated by addition of nonidet-P40 (0.5% final) and quantified by spectrophotometric measurement of β-galactosidase activity as described previously [36]. For each data point, error bars indicate the standard errors of the mean of duplicate samples; in cases where error bars appear to be absent, the data points were so close that error bars are not visible. All experiments were repeated at least twice; representative data are shown for each experiment. 
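To make the β-galactosidase readout concrete, the sketch below shows one plausible way to express duplicate absorbance readings as percent of wild-type fusion, subtracting a no-coreceptor (or Unc) background measured on the same plate. This normalization convention and the example numbers are assumptions for illustration, not part of the published protocol.

```python
# Convert duplicate beta-galactosidase readings into percent-of-wild-type fusion.
import numpy as np

def percent_of_wild_type(sample, background, wild_type):
    """Each argument is a list/array of duplicate absorbance readings."""
    signal = np.mean(sample) - np.mean(background)   # background-subtracted signal
    wt = np.mean(wild_type) - np.mean(background)    # wild-type Env on the same plate
    return 100.0 * max(signal, 0.0) / wt

# Hypothetical readings:
# percent_of_wild_type([0.82, 0.86], [0.05, 0.06], [1.95, 2.05])  ->  ~40%
```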
MAGI cell assays for cell fusion and virus entry HIV-1 entry and Env-mediated cell fusion in the context of HIV-1 provirus expression were measured using the HeLa-CD4/LTR-β-gal (MAGI) indicator target cell line [43], which was obtained from the NIH AIDS Research and Reference Reagent Program (originally contributed by M. Emerman). BS-C-1 cells plated at 3 × 10⁵ per well in 6-well plates the previous day were transfected (or cotransfected) with the designated pNL4-3-based proviral construct(s) using FuGENE 6 (Boehringer Mannheim, Indianapolis, IN) according to the manufacturer's protocol. The next day, cells were washed and given fresh media (2 ml per well) containing 10 mM HEPES. Three days post-transfection, the supernatants were removed, filtered through a 0.45 μm filter to remove cellular debris, and stored at 4°C. For cell fusion assays, the cells were trypsinized, washed, mixed 1:10 with MAGI cells, and replated in duplicate at 1 × 10⁵ total cells per well of a 24-well plate. Cells were allowed to fuse overnight at 37°C and were then stained with X-gal. Cell fusion was quantitated by counting the total number of blue multi-nucleated syncytia per well with the aid of a grid. For the virus entry assays, p24 levels in the filtered supernatants were quantitated using the HIV-1 p24 Antigen Assay (Coulter), and supernatant volumes were normalized accordingly. MAGI cells were infected in duplicate with 300 μl of filtered supernatant per well of a 24-well plate and stained with X-gal 48 hrs post-infection. Virus entry was quantitated by counting the total number of blue foci per well. For complementation pairs, supernatants containing up to 2.35 ng of p24 per well were used (equivalent to 628 infectious units for wild-type). This corresponds to approximately 15-20% of the total supernatant from the cells. For each data point, the standard errors of the mean of duplicate samples are shown. Figure 1. Mutations in HIV-1 Env. A schematic representation of HIV-1 Env. Functional and structural domains within the gp120 and gp41 subunits are labeled at the top: V3 loop (3rd variable loop), CD4 BS (CD4 binding site), FP (fusion peptide), TRR (tryptophan-rich region), TM (transmembrane domain). For various inactivating mutants in the LAI Env (designations encircled), the approximate locations of specific point mutants are indicated underneath, and the deletion mutants are indicated on the right. The specific point mutations are: V3 (R320G in the conserved GPGR motif at the tip of the V3 loop); BS (D368R within the CD4 binding site); various positions in the fusion peptide including FP2 (Val→Glu), FP9 (Leu→Arg) and FP26 (Leu→Arg); heptad repeat mutations HR1e (V570R) and HR1a (I573P). HT-1 and HT-2 are chimeric LAI Env/Thy-1.1 glycoproteins that are membrane-associated via a glycosyl-phosphatidylinositol (GPI) anchor. HT-1 contains the gp41 ectodomain minus the tryptophan-rich region (K665 through I682), whereas HT-2 contains the entire gp41 ectodomain; both constructs have 22 intervening amino acid residues from the C-terminus of Thy1.1. The Δ665-682 construct has a selective deletion of the TRR (K665 through I682) and the Δ665-856 construct has an introduced premature stop codon that results in deletion of the C-terminal 192 aa of Env, including the TM and cytoplasmic domains. See Methods for construction and references.
Results To test the ability of fusion-inactive Env subunits to functionally complement one another in the context of mixed Env trimers, we first employed a vaccinia-based quantitative cell fusion assay system wherein fusion between effector cells expressing Env and target cells expressing the necessary receptors leads to reporter gene activation (βgalactosidase production) [36,37]. We examined complementation between variants in gp120 that were inactive due to inability to interact with CD4 (CD4 BS mutation) or coreceptor (mismatched specificity, or mutation in the V3 loop), as well as variants in gp41 with mutations at different points within the FP and HR1 regions, as well as modifications of the membrane proximal tryptophan-rich region (TRR) and the transmembrane (TM) domain ( Fig. 1). Throughout these studies, target cells lacking coreceptor served as negative controls; where indicated, an uncleaveable mutant Env (Unc) containing a mutation in the gp120/gp41 cleavage site provided an additional negative control. Complementation by Env subunits from HIV-1 primary isolates of different genetic subtypes Previously we demonstrated complementation between Env constructs from two HIV-1 strains that were highly laboratory-adapted and both clade B: LAI (X4, i.e. CXCR4specific) and SF162 (R5, i.e. CCR5-specific) [25]. To deter-mine whether complementation potential is a more general property of HIV-1 Envs, we analyzed the relative complementation efficiencies of an LAI Env mutant containing a defective FP (LAI-FP26) with Envs from diverse R5 isolates in a CXCR4-dependent cell fusion assay. When tested alone under these conditions, wild type LAI showed potent activity whereas the LAI-FP26 mutant and all four wild type R5 Envs were non-fusogenic ( Fig. 2A, top section). In coexpression experiments that enabled mixed trimer formation, complementation of LAI-FP26 with the R5 Envs occurred not only with SF162 as shown previously, but also with the laboratory-adapted Ba-L strain (clade B) and the primary YU-2 (clade B) and CM235 (CRF01_AE recombinant) isolates ( Fig. 2A, middle section). The differences in the relative complementation efficiencies of the various Envs correlated roughly with their relative intrinsic fusogenicities in a CCR5-dependent assay (Fig. 2B). Complementation by Env subunits containing a mutationally inactivated V3 loop Our previous results [25] coupled with the data above demonstrate that an Env with a mutational defect in gp41 can complement an Env containing a gp120 subunit incapable of interacting with coreceptor due to mismatched coreceptor specificity. We wished to extend this finding by testing a gp120 subunit rendered inherently defective for coreceptor interaction by site-directed mutation. The V3 loop, though highly variable, contains a conserved β-turn motif at its crown (typically GPGR or GPGQ) that is essential for coreceptor binding activity [12,13]. We analyzed a point mutant (LAI-V3) containing a G in place of the R residue in the GPGR motif, which has been shown previously to abolish fusogenicity [44]. Our results demonstrate that the fusion-defective LAI-V3 was able to complement LAI-FP26 ( Fig. 2A, bottom section). The fusion activity was in the same range observed for the coreceptor-mismatched Envs ( Fig. 2A, middle section), indicating that complementation efficiency was not limited by structural incompatibilities between Envs from these different strains. 
Previously we demonstrated complementation between Envs containing different nonfunctional gp120 subunits within a mixed trimer; functional mixed trimers were formed when LAI-BS (defective for CD4 binding) was coexpressed with wild-type SF162 (incapable of coreceptor interaction in a CXCR4-specific assay) [25]. To extend this finding we tested the ability of LAI-BS to complement LAI-V3, i.e. gp120 subunits incapable of interacting with CD4 and coreceptor, respectively (Fig. 3). The efficiency was similar to that observed for complementation between LAI-BS and SF162, again indicating that there were minimal structural incompatibilities in mixed trimers between these two strains. As reported previously [25], these examples of complementation between Envs with distinct gp120 receptor binding deficiencies were somewhat less active than complementation between Envs containing a defective gp120 and a defective gp41 (LAI-BS + LAI-FP26) (Fig. 3). As expected, no complementation was observed upon coexpression of LAI-V3 with SF162, since the gp120 from neither Env is capable of functioning with the CXCR4 coreceptor. Varying complementation efficiencies of different point mutations within the gp41 FP Our previously described data and the experiments above demonstrated functional complementation of a particular gp41 FP mutation, i.e. substitution of Arg for Leu at the 26th position from the gp41 N-terminus (LAI-FP26). To extend these analyses, we analyzed two additional FP mutations previously shown by others to abolish fusogenic activity without affecting Env processing or CD4 binding [30]: LAI-FP2 substitutes Glu for Val at the 2nd position of the FP, and LAI-FP9 substitutes Arg for Leu at the 9th position (Fig. 1). The LAI-FP2 mutant has been shown to dominantly interfere with cell fusion when coexpressed with wild-type Env, whereas the LAI-FP9 mutant reduced fusion two-fold and the LAI-FP26 mutant had no negative effect when coexpressed with wild-type Env [45]. The results of complementation experiments with these gp41 FP mutations are shown in Fig. 4. Consistent with previous reports, each mutation alone strongly impaired cell fusion activity compared to wild type (top sections in Figs. 4A-C). The same relative order of complementation efficiencies, FP26 > FP9 > FP2, was observed whether the complementation partner was LAI-BS (Fig. 4A), LAI-V3 (Fig. 4B), or SF162 (wt) (Fig. 4C). Complementation with gp41 subunits lacking the normal membrane anchoring and membrane proximal external regions Highly conserved regions close to the membrane are known to be critical for Env function, including the 22 amino acid TM domain that anchors Env to the surface of virions and infected cells, and the membrane-proximal external region, generally defined as the last 24 C-terminal residues of the gp41 ectodomain (L660-K683) [15]. This region contains the TRR (defined here as K665-K683) and contains or overlaps the epitopes for the broadly neutralizing 2F5 and 4E10 monoclonal antibodies. We tested five previously described defective mutants in this gp41 region for their ability to support complementation. HT-1 and HT-2 are chimeric LAI Env/Thy-1.1 glycoproteins that are membrane-associated via a glycosyl-phosphatidylinositol (GPI) anchor.
HT-1 contains the gp41 ectodomain minus the tryptophan-rich region whereas HT-2 contains the entire gp41 ectodomain; both constructs have 22 intervening amino acid residues derived from the C-terminus of Thy1.1 [32]. HG-1 is analogous to HT-1 except that a minimal GPI attachment signal has been used without the intervening Thy-1.1 residues (K. Salzwedel and E. Hunter, unpublished data). LAI-Δ665-856 contains a stop codon in place of the Lys at position 665, resulting in deletion of 192 amino acids from the C-terminus, including the entire tryptophan-rich, TM, and cytoplasmic domains; this protein is secreted into the medium [31]. Finally, LAI-Δ665-682 contains an 18-amino acid deletion of the tryptophan-rich region (K665-I682) [33,34]. The results of complementation experiments with these mutants are shown in Fig. 5A. Each of the fusion-defective GPI-anchored Envs was capable of complementing LAI-BS. Perhaps surprisingly, even the truncated non-anchored LAI-Δ665-856 Env was able to complement the full-length LAI-BS mutant, with comparable or higher efficiency compared to the GPI-anchored constructs. Complementation with gp41 subunits containing heptad repeat mutations Two previously characterized point mutations within the N-terminal heptad repeat region (HR1) of the gp41 ectodomain (Fig. 1) were analyzed for complementation, i.e. substitution of Pro for Ile at residue 573 at the "a" position within the HR1 heptad repeat motif (LAI-HR1a) and Arg for Val at residue 570 at the "e" position within the HR1 heptad repeat (LAI-HR1e). The LAI-HR1a mutation has been shown previously to disrupt self-association of HR1 to form the trimeric coiled-coil pre-hairpin intermediate structure [46] and the LAI-HR1e mutation is suspected to block association of HR2 with the HR1 trimer to form the 6-helix coiled-coil hairpin structure [47]. Interestingly, each of these mutants displayed some complementation activity with LAI-BS (Fig. 5B). However, the efficiency was relatively low, and no significant complementation by these mutants was observed with Envs defective in CXCR4 interaction (LAI-V3 and SF162 wt, data not shown). Complementation requires coexpression of Env mutants within the same cell The observed complementation activities involved coexpression of two distinct nonfusogenic Env variants within the same cell. Although in our previous study [25] we verified the formation of mixed trimers between the two variants, we could not rule out the possibility that the complementation activity was due to cooperative interactions between nonfusogenic homo-trimers of each variant. While this seemed an unlikely explanation, it has been reported that cell fusion can occur when CD4 and coreceptor are expressed on separate target cells [48]. To determine whether this might also be true for Env trimers expressed on separate cells, we expressed LAI-BS and LAI-FP26 in separate effector cells and asked whether mixing the two cell populations with target cells could result in fusion. As shown in Fig. 6, fusion was detected only when the constructs were cotransfected into the same effector cell population, indicating that functional complementation requires both Env variants to be expressed within the same cell.
Complementation activity is dependent on the nature of the functional assay The cell fusion assay used in the above experiments employed vaccinia virus expression technology to pro-duce Env and CD4 and to provide the reporter gene activation system for readout. We wished to assess whether complementation could also be detected in a more biologically relevant situation, i.e. under conditions of HIV-1 proviral expression, using a target cell reporter system more commonly used to quantitate this process. Furthermore, we asked whether complementation can be detected not only by measuring Env-mediated cell fusion, but also virion entry. To address these questions, we expressed Env from molecular variants of an HIV-1 infectious molecular clone and analyzed both cell fusion and virus entry using as targets the well studied HeLa-CD4/ LTR-β-gal (MAGI) indicator cell line [43]. Proviruses encoding the LAI-FP26 and the LAI-BS mutant Envs were cotransfected into BS-C-1 producer cells. This Env variant pair was selected because it consistently yielded the highest levels of fusion complementation in the vacciniabased system. Two alternative assays were then compared. First, the BSC-1 producer cells were used as effectors and mixed with MAGI target cells in a cell fusion assay; second, filtered supernatants from the BSC-1 producer cells containing cell-free HIV-1 virions were used to infect MAGI cells in a parallel virus entry assay. In both cases, complementation was assessed by counting the number of blue foci observed upon in situ staining with X-gal. As shown in Table 1, functional complementation was detected in the cell fusion assay: cells transfected individually with either NL4-3-FP26 or NL4-3-BS infectious molecular clones were fusion-incompetent, whereas cells cotransfected with both gave significant fusion activity. By Complementation with Envs containing alterations in the membrane-spanning domain, the TRR, and HR1 contrast, no complementation was observed in the virus entry assay using the viruses produced from these same cells (Table 1). Discussion The present results extend our earlier findings [25] by demonstrating that the capacity for functional subunit complementation is a general feature of the HIV-1 Env trimer. We interpret our results to reflect cooperative subunit interactions within mixed heterotrimers, consistent with our previous verification that mixed heterotrimers do indeed form upon coexpression of different HIV-1 Env variants, as well as our previous reference to other examples of mixed trimer formation with glycoproteins from different enveloped viruses [25]; however we cannot formally exclude the possibility that the observed complementation activities reflect complex interactions amongst homo-trimers with different defects expressed on the same membrane. In the present work, complementation was observed upon coexpression of Envs from primary as well as laboratory-adapted HIV-1 strains of different genotypes, and with a wide diversity of defects within both gp120 and gp41 (Figs. 2, 3, 4, 5). Thus fusion does not require every gp120 subunit in the trimer to be competent for CD4 or coreceptor binding, nor every gp41 subunit to contain a functional fusion peptide, a normal membrane anchoring region, a native TRR, or a functional HR1 region. We also demonstrate that complementation requires coexpression of the Env variants in the same cell (Fig. 6), and provide further evidence (by virtue of complementation between LAI-BS and LAI-V3, Fig. 
3) against the interpretation that the observed complementation requires reassortment of gp120 and gp41 subunits to form homo-trimers composed of completely functional gp120/ gp41 protomers. We interpret complementation as a reflection of cooperative cross-talk between defective protomers within a mixed trimer, whereby the wild type determinants on one protomer transmit structural changes to activate wild type determinants on an adjacent protomer(s), thereby overcoming defects that are otherwise inactivating in the context of homo-trimers. For example when a CD4 BS mutant is coexpressed with coreceptor inactive variant, we propose that CD4 binding to the subunit(s) with functional BS promotes the conformational changes required for coreceptor binding, which are then transmitted to an adjacent gp120 subunit(s) that can then undergo the essential coreceptor interaction, thereby triggering activation of the wild type gp41 subunits. Particularly striking is the wide range of gp41 mutants capable of complementation when coexpressed with a nonfunctional gp120 variant. Nearly all tested displayed some level of activity, the only exception being LAI-Δ665-682, a normally anchored form containing a deletion of the TRR. The mere absence of the TRR cannot be the simple explanation, since several constructs lacking this region did show complementation activity (HT-1, LAI-Δ665-856). Perhaps misalignment of the LAI-Δ665-682 mutant with wild type gp41 is not tolerated for Env function. Several Env defects have been described in the literature as "dominant negative", based on their potent functional suppressive activities when coexpressed with wild type Env. Mutants in the gp120/gp41 cleavage site, which alone are completely inactive for fusion and infectivity, are reported to have strong dominant negative activities when coexpressed with wild type Env [49,50]. In our previous studies complementation was not detected with mixed trimers in which Unc was one of the defective variants [25]; thus dominant suppression appears to be the major functional activity for uncleaved Env. However we show here that this is not the case for all reported dominant negative mutations. Despite the strong inhibitory activities in fusion and infectivity assays reported for the FP2 mutation when coexpressed with wild type [45], we found that the same mutant is still able to complement fusion activity (albeit at relatively low levels) in mixed trimers with various non-functional Envs (Fig. 4). In fact the relative complementation efficiencies of the FP mutants (FP26 > FP9 > FP2) (Fig. 4) inversely correlated with their previously described inhibitory effects when coexpressed with wild type Env (FP2 > FP9 > FP26) [45]. Another example involves the TM domain, for which it has been reported that substitution of this HIV-1 Env region with its counterpart from the influenza virus hemagglutinin glycoprotein results in potent dominant inhibition [51]. The present studies indicate that this suppressive effect is not due simply to the absence of the native functional HIV-1 TM region, since we observed complementation with fusion-defective constructs in which the normal membrane-spanning domain was replaced by a GPI anchor, or was completely deleted (Fig. 5A). Thus the complementation analyses help distinguish between a fusion-defective Env variant that exerts a strictly dominant suppressive activity, vs. 
others that, though defective, can permit a low level of functionality, as revealed by the ability of their active determinants to complement when coexpressed with a variant defective in another function. Complementation in cell fusion assays was observed not only with the previously described robust vaccinia-based expression and reporter system (Figs. 2, 3, 4, 5, 6), but also with Env expression from an HIV-1 infectious molecular clone using the MAGI reporter cell line as targets (Table 1). However, there were marked variations in complementation efficiencies depending on the assay system employed. Thus, for complementation between LAI-BS and LAI-FP26, the activities in the vaccinia cell fusion system ranged from ~30-50% relative to wild type, consistent with our previous findings [25]; in contrast, the relative activities were much lower with the infectious HIV/MAGI system, i.e., only ~6% in the cell fusion assay and below detection in the virus entry assay (Table 1). We believe these differences mainly reflect variations in the robustness of the functional Env-receptor interactions and the associated reporter gene activation readouts in the different assays, rather than fundamental mechanistic distinctions. Numerous variables can influence the efficiency of Env-receptor interactions leading to fusion/entry, including surface densities of the participating molecules, gp120 affinities for CD4 and coreceptor, varying receptor conformations and molecular associations, the biochemical environments of both effector and target membranes (lipid composition, facilitating or interfering accessory factors), etc. [52,53]. Similarly, for the reporter gene readouts, multiple parameters can influence detectability (signal sensitivity, signal/background ratios, etc.), and different factors might be limiting for the measured readout in alternate assay systems. Thus, while a particular assay might be quantitative in terms of yielding numerically reproducible values, such data are not necessarily proportional to the inherent functional activities of the particular Env-receptor interactions involved. Therefore, some assays might reveal weak activities not detected by others, but might overestimate their relative efficiencies. Further complicating the quantitative interpretation is the fact that, unlike wild type Env, for which all trimers are potentially active, the complementation activities result only from mixed trimers, which presumably represent a subset (theoretically 75%) of the total; moreover, upon cotransfection of two nonfunctional Env variants (A & B), the relative functionalities of the two possible mixed trimers (AAB and ABB) might be very different. Thus, the reported absence of Env complementation in assays of both reporter virus entry and cell fusion [54] could reflect the absence of functional interactions between the particular mutant Env constructs examined (different from those tested in this report), or limited sensitivities in the assays used. Another point worth noting is that our approach to studying cooperativity within the Env trimer involved complementation analyses between Env mutants that were inactive when expressed alone; thus, functional activity was detected despite the presence of a fusion-impairing determinant in every protomer of the mixed trimer. It seems reasonable to propose that the contributions of subunit cooperativity to Env function might be greater with wild type native Env molecules, in which all subunits are fully functional.
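A simple way to see where the "theoretically 75%" figure for mixed trimers comes from is to assume random assembly of trimers from two co-expressed Env variants present in equal amounts. The short Python sketch below computes the resulting trimer composition under that binomial assumption; it is our illustration of the arithmetic, not a calculation taken from the paper, and the equal-expression assumption is ours.

```python
from math import comb

def trimer_fractions(p_a=0.5):
    """Fraction of trimers containing k protomers of variant B (k = 0..3),
    assuming protomers assemble at random with probability p_a of being variant A."""
    return [comb(3, k) * p_a ** (3 - k) * (1 - p_a) ** k for k in range(4)]

f_aaa, f_aab, f_abb, f_bbb = trimer_fractions(0.5)
print(f"AAA: {f_aaa:.3f}, AAB: {f_aab:.3f}, ABB: {f_abb:.3f}, BBB: {f_bbb:.3f}")
# Mixed trimers (AAB + ABB) make up the 'theoretically 75%' subset;
# note that AAB and ABB may differ in functionality, as discussed above.
print(f"mixed (AAB + ABB): {f_aab + f_abb:.2f}")
```

With unequal expression of the two variants, p_a shifts away from 0.5 and the mixed-trimer fraction drops below 75%, which is one more reason complementation efficiencies need not scale simply with expression levels.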
We propose that our inability to detect complementation in the virus entry assay, despite its clear measurement in the parallel cell fusion assay, does not necessarily imply fundamental differences in the corresponding membrane fusion mechanisms. Several inter-related factors presumably contribute to the inability to detect complementation in the virus entry assay. For one, the density of trimeric spikes on HIV-1 virions recently observed by cryo-electron microscopy [55,56] is quite low (<15 trimers per virion, range ~1-3 dozen). A second point concerns uncertainties in the trimer stoichiometry required for Env-mediated virion entry, as indicated by differences in recent efforts to fit experimental data to mathematical models. Thus, from analyses of pseudotype assays with mixed trimers, it has been concluded in one report that HIV-1 virion-cell fusion requires only a single trimer [57]; by contrast, fitting the same experimental data using alternative models with different underlying assumptions led to conclusions of multi-trimer requirements: ~5, with a wide range of uncertainty, in one analysis [58], and ~8, with a range of 2-19, in another [59]. According to these multi-trimer mechanisms, an infectious HIV-1 particle does not display a significant excess of functional fusion units. For the complementation analyses described herein, where each Env protomer contains a functional defect, it is likely that the complementing trimers are less active than the fully wild type counterparts; moreover, as noted above, only a subset of possible trimer forms are likely to be active. Thus, in the virus entry assay with complementing Envs, there may be an insufficient number of functional fusion units on most virions, resulting in a major reduction in the fraction of virions with a functional fusion unit. Thus, the absence of detectable complementing activity in the virus entry assay need not imply that virus-cell fusion proceeds by a different mechanism than cell-cell fusion. However, we acknowledge that the potential for mechanistic differences is not formally excluded, as emphasized by a recent report arguing that HIV virion entry proceeds by endocytosis and dynamin-dependent fusion out of the endosomes, with direct plasma membrane fusion failing to promote content delivery [60]. Given the experimental complexities, assessing the biological significance of subunit cooperativity for HIV entry is a challenging problem. It is well known that there are functional constraints on subunits within the trimer compared to their monomeric counterparts. A particularly striking example is the interaction of soluble CD4 (sCD4) with gp120; the comparably high affinity of sCD4 for soluble monomeric gp120 from primary and T cell line-adapted HIV-1 isolates stands in marked contrast to the relatively weak binding and neutralization activities of sCD4 for native trimeric Env on the former compared to the latter [61]. Cooperative subunit interactions, whereby binding of gp120 to CD4 on one protomer in the trimer initiates fusion-related conformational changes in the other protomers, might thus enhance Env fusogenic activity, particularly toward target cells containing low densities of CD4 and coreceptor. Another consideration involves the extensively studied phenomenon of epitope masking within the HIV-1 Env trimer [62].
For example, some highly conserved epitopes are freely accessible on monomeric gp120 but are masked in the trimer prior to CD4 binding; cooperative subunit interactions may facilitate exposure of such epitopes on subunits within the trimer that have not yet engaged CD4. With questions such as these in mind, the combination of detailed functional and structural studies will potentially delineate the molecular basis for subunit cooperativity within the native HIV-1 Env trimer, and help define its biological significance.

Conclusion
The data presented herein demonstrate that every subunit within the Env trimer need not be competent for all critical activities. Cooperative cross-talk occurs between subunits, thereby enabling adjacent protomers to complement different functional defects. The diversity of defects that can be complemented illustrates the general nature of cooperative subunit interactions within the HIV-1 trimer. Cooperativity may have important implications for Env function and sensitivity to neutralization.
v3-fos-license
2017-03-14T07:35:54.109Z
2017-02-01T00:00:00.000
6861362
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://jneuroinflammation.biomedcentral.com/track/pdf/10.1186/s12974-017-0789-6", "pdf_hash": "9a6c08694e3a0071dfe461b40e6da425604d115a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:220", "s2fieldsofstudy": [ "Medicine" ], "sha1": "3621a53f581fc248599de3670f1f0736cf5b9271", "year": 2017 }
pes2o/s2orc
Intrathecal Th17- and B cell-associated cytokine and chemokine responses in relation to clinical outcome in Lyme neuroborreliosis: a large retrospective study Background B cell immunity, including the chemokine CXCL13, has an established role in Lyme neuroborreliosis, and also, T helper (Th) 17 immunity, including IL-17A, has recently been implicated. Methods We analysed a set of cytokines and chemokines associated with B cell and Th17 immunity in cerebrospinal fluid and serum from clinically well-characterized patients with definite Lyme neuroborreliosis (group 1, n = 49), defined by both cerebrospinal fluid pleocytosis and Borrelia-specific antibodies in cerebrospinal fluid and from two groups with possible Lyme neuroborreliosis, showing either pleocytosis (group 2, n = 14) or Borrelia-specific antibodies in cerebrospinal fluid (group 3, n = 14). A non-Lyme neuroborreliosis reference group consisted of 88 patients lacking pleocytosis and Borrelia-specific antibodies in serum and cerebrospinal fluid. Results Cerebrospinal fluid levels of B cell-associated markers (CXCL13, APRIL and BAFF) were significantly elevated in groups 1, 2 and 3 compared with the reference group, except for BAFF, which was not elevated in group 3. Regarding Th17-associated markers (IL-17A, CXCL1 and CCL20), CCL20 in cerebrospinal fluid was significantly elevated in groups 1, 2 and 3 compared with the reference group, while IL-17A and CXCL1 were elevated in group 1. Patients with time of recovery <3 months had lower cerebrospinal fluid levels of IL-17A, APRIL and BAFF compared to patients with recovery >3 months. Conclusions By using a set of markers in addition to CXCL13 and IL-17A, we confirm that B cell- and Th17-associated immune responses are involved in Lyme neuroborreliosis pathogenesis with different patterns in subgroups. Furthermore, IL-17A, APRIL and BAFF may be associated with time to recovery after treatment. Background Lyme neuroborreliosis (LNB) is the dominating disseminated form of Lyme borreliosis in Sweden [1] as well as in Europe [2]. The pathogenesis of LNB involves a complex immune response with an initial innate response elicited by Borrelia burgdorferi (B.b.) interacting with recognition receptors such as Toll-like receptor 2, subsequently resulting in activation and recruitment of B and T cells to the central nervous system (CNS). The chemokine C-X-C motif ligand (CXCL)13 is a key molecule in B cell recruitment to the CNS [3], and several studies have shown high concentrations of CXCL13 in the cerebrospinal fluid (CSF) in both children and adults with LNB [4][5][6]. CXCL13 is postulated to be a diagnostic marker in acute LNB since it may be elevated in CSF before intrathecally produced B.b.-specific antibodies can be detected; however, different cutoff levels have been discussed, e.g. 142 and 250 pg/mL, respectively [4][5][6][7][8]. The cytokines a proliferation-inducing ligand (APRIL) and B cell activating factor (BAFF) are important in B cell development and survival [9], and raised CSF levels have been detected in other neuroinflammatory conditions [10,11]. Although increased BAFF levels in CSF have been reported in LNB [12], the relative contribution of B cell-associated factors such as CXCL13, APRIL and BAFF in LNB inflammation and clinical outcome is mainly unknown. Recent studies have indicated involvement of T helper (Th)17 cells in the intrathecal immune response in patients with LNB [13][14][15]. 
IL-17A, a cytokine produced by Th17 cells, is a potent activator of neutrophils in defeating extracellular microbes, but its wider role in the pathogenesis and clinical outcome of LNB is unclear [13,16,17]. CXCL1 (previously known as growth regulated oncogene-α, GRO-α), a neutrophil-recruiting chemokine, and CCL20 (macrophage inflammatory protein-3α, MIP-3α), a Th17-recruiting chemokine, are both induced by Th17 cells [17]. While elevated IL-17A levels in CSF have been reported in LNB [13][14][15], information on its potential association with clinical outcome is still lacking, and it is not known if chemokines downstream of Th17 are increased in LNB. A basic understanding of the molecules involved in the pathogenesis is a prerequisite for the identification of prognostic biomarkers and, in the long run, for finding potential therapeutic targets. The aims of this study were to evaluate the putative involvement of Th17- and B cell-associated immune responses and to assess associations with disease course in LNB by analysing IL-17A and its downstream chemokines CCL20 and CXCL1, as well as the B cell-associated factors APRIL, BAFF and CXCL13.

Patients
We retrospectively included 165 patients in Jönköping County, Sweden, who had been investigated by lumbar puncture (LP) and blood sampling during 2007-2009 to verify or exclude suspected LNB. Medical records were scrutinized, and the patients were divided into four groups based on the CSF findings (see Tables 1, 2 and 3 for demographic and clinical characteristics) and in accordance with the European Federation of Neurological Societies (EFNS) guidelines [18]. Patients in group 1 (definite LNB, n = 49) had both CSF pleocytosis and Borrelia-specific antibodies in CSF. Group 2 (possible LNB pleocytosis, n = 14) had symptoms strongly suggestive of LNB, short duration of symptoms and CSF pleocytosis, but not (yet) Borrelia-specific antibodies in CSF. Group 3 (possible LNB Ab+, n = 14) had Borrelia-specific antibodies in CSF but no pleocytosis, and symptoms were less suggestive of LNB. As a non-LNB reference group, we selected 88 gender- and age-matched patients from the same cohort investigated for suspected LNB 2007-2009, in whom LNB was excluded based on no Borrelia-specific serum or CSF antibodies, no CSF pleocytosis and normal CSF-albumin. The reference group consisted of patients in whom the LP was part of a neurological investigation and no neurological diagnosis was verified (n = 56), or who later received other neurological diagnoses such as Bell's palsy (n = 18) or Alzheimer's disease, Parkinson's disease and stroke (n = 14).

Serum and CSF
Serum and CSF samples were drawn prior to antibiotic treatment and stored at −20°C. All tests were performed at the clinical laboratory of microbiology in Jönköping. Borrelia-specific antibodies in serum and CSF were analysed using the Lyme Borreliosis ELISA kit 2nd generation (Dako Cytomation A/S, Glostrup, Denmark) between 2007 and 2008. The intrathecal antibody index (AI) was calculated using total IgG as a reference molecule [19] according to the formula [20]: AI = (Borrelia-specific IgG in CSF (OD) / Borrelia-specific IgG in serum (OD)) / (total IgG in CSF (mg/L) / total IgG in serum (g/L)). A Borrelia-specific AI >2 was indicative of intrathecal anti-Borrelia antibody production.
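To make the antibody index calculation above concrete, the following minimal Python sketch implements the stated formula; the numerical values are hypothetical illustrations only, not patient data, and the variable names are ours.

```python
def antibody_index(borrelia_igg_csf_od, borrelia_igg_serum_od,
                   total_igg_csf_mg_per_l, total_igg_serum_g_per_l):
    """Borrelia-specific antibody index (AI) using total IgG as reference molecule.

    AI = (specific IgG CSF / specific IgG serum) / (total IgG CSF / total IgG serum),
    with the units as given in the formula above.
    """
    specific_ratio = borrelia_igg_csf_od / borrelia_igg_serum_od
    total_ratio = total_igg_csf_mg_per_l / total_igg_serum_g_per_l
    return specific_ratio / total_ratio

# Hypothetical example values chosen so that AI exceeds the cutoff of 2:
ai = antibody_index(borrelia_igg_csf_od=1.5, borrelia_igg_serum_od=0.2,
                    total_igg_csf_mg_per_l=36.0, total_igg_serum_g_per_l=12.0)
intrathecal_production = ai > 2  # cutoff used in this study
print(f"AI = {ai:.2f}, intrathecal anti-Borrelia antibody production: {intrathecal_production}")
```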
From 2009, the laboratory used the IDEIA Lyme Neuroborreliosis kit (Dako Cytomation). Both antibody assays use purified, native B. afzelii strain DK1 flagellum as test antigen, and results were interpreted according to the manufacturer's instructions.

Cytokine and chemokine analyses
APRIL, BAFF and CXCL13 were analysed by ELISA (Invitrogen Immunoassay Kit KHC3051, Life Technologies, USA, and Quantikine DBLYSOB and DCX130, R&D Systems, Inc., USA, respectively). IL-17A, CXCL1 and CCL20 were analysed by Luminex multiple bead technology (Milliplex Human Cytokine/Chemokine Kit, Millipore Corporation, Germany). All analyses were conducted according to the manufacturers' instructions. The lowest detection limits were as follows: APRIL: 0.02 pg/mL; BAFF: 0.05 pg/mL in serum, 0.04 pg/mL in CSF; CXCL13: 0.04 pg/mL in serum, 0.03 pg/mL in CSF; IL-17A: 0.38 pg/mL in serum, 0.06 pg/mL in CSF; CXCL1: 3.2 pg/mL in serum, 0.14 pg/mL in CSF; CCL20: 0.29 pg/mL in serum, 0.84 pg/mL in CSF. Values under the detection limit were given half the value of the lowest point of the standard curve.

Data handling and statistical analyses
For statistical analyses, SPSS version 20 was used. Intergroup comparisons were performed using the non-parametric Kruskal-Wallis test and, when p < 0.05, followed by the Mann-Whitney U test as a post hoc test. For children (age < 15 years), a covariance analysis was performed with pleocytosis as a covariate. Data are given as medians and interquartile (i.q.) ranges. For categorical variables, the chi-square test was used. Correlations were determined by Spearman's rank order correlation. p values below 0.05 were considered significant.

Table 1 presents the characteristics of the different study groups. There were significant differences in age, with older individuals in group 3 compared to group 2, which consisted mainly of children. Median duration of symptoms before LP was similar in groups 1 and 3 but several days shorter in group 2. A majority of patients in groups

Tables 2 and 3 present symptoms and symptom duration before LP for patients in groups 2 and 3, respectively. All patients in group 2 had symptoms highly suggestive of LNB, such as head or neck pain (or both), radiculitis or cranial nerve palsy. In group 3, no patients had cranial nerve palsy, only one had radiculitis, and six patients had symptoms not typical for LNB, such as vision loss and dysarthria; their duration of symptoms before LP ranged from less than a week to several years.

B cell-associated cytokines and chemokines
CSF levels of APRIL and CXCL13 (Table 4 and Fig. 1) were significantly elevated in all LNB groups compared to the non-LNB group (group 4), while there were no differences in serum. A majority of patients in groups 1 and 2, but not in group 3, had CSF levels of CXCL13 above the suggested cutoff values (Table 5). No correlations were seen between cytokine and chemokine levels in serum and CSF.

Th17-associated cytokines and chemokines
CSF levels of IL-17A, CXCL1 and CCL20 (Table 4 and Fig. 1) were all significantly elevated in group 1 compared to group 4. CCL20 was also significantly higher in groups 2 and 3 compared to group 4. In Table 4, serum levels of CXCL1 were significantly lower in groups 1 and 3 compared to group 4, whereas levels of CCL20 were significantly higher in groups 1 and 2 compared to group 4. CSF levels of IL-17A correlated with CXCL1 (rho = 0.72, p < 0.0001) in groups 1, 2 and 3. No correlations were seen between cytokine and chemokine levels in serum and CSF.

Associations with demographic and clinical parameters
There were no significant differences in cytokine/chemokine levels in serum or in CSF between men and women.
Regarding differences in relation to age (data not shown), we found that children <15 years of age in groups 1, 2 and 3 (n = 34) had significantly higher levels of BAFF (median 108 pg/mL, i.q. range 60-165, p < 0.001) in serum and CXCL13 in serum and CSF (77 pg/mL, 47-109, p = 0.001 and 920 pg/mL, 398-1706, p = 0.03, respectively) compared to adults. BAFF in serum also showed a strong negative correlation with age in groups 1 and 2 (rho = −0.57, p < 0.01). APRIL, BAFF and CXCL13 in CSF were all positively correlated with pleocytosis (rho = 0.51, rho = 0.51 and rho = 0.55, respectively, all p < 0.001). When a covariate analysis was performed with pleocytosis as a covariate, BAFF in serum was still significantly higher in children (p < 0.0001), but CXCL13 in serum and CSF was not. Children in groups 1, 2 and 3 had significantly higher levels of CCL20 in serum and CSF (8 pg/mL, 4-12 and median 3 pg/mL, 2-3, respectively, both p = 0.03). CCL20 was however not significantly higher when performing a covariate analysis with pleocytosis as a covariate. IL-17A levels in CSF correlated with pleocytosis (rho = 0.51, p < 0.0001). Symptom duration before LP did not correlate with levels of cytokines/chemokines in serum or CSF in groups 1, 2 and 3 together. However, within group 2, duration of symptoms before LP correlated negatively with BAFF and CXCL13 in serum (rho = −0.57 and −0.58, respectively, both p < 0.01). When stratifying patient in groups 1, 2 and 3 according to duration of symptoms before LP, those with a shorter duration (<2 weeks, n = 49) had higher levels of BAFF in serum (median 764 ng/mL, 537-890, p = 0.002) compared to patients with longer symptom duration (n = 28) (556 ng/mL, 438-668). Regarding relation to disease course, patients in groups 1 and 2 were stratified according to time to recovery after treatment. Patients with shorter duration, group A (<3 months, n = 54) had lower levels of APRIL (p = 0.003), BAFF (p = 0.04) and IL-17A (p = 0.02) in CSF compared to patients with longer time of recovery, group B (>3 months, n = 6), (Fig. 2). Discussion In this study, we showed that levels of several cytokines and chemokines related to Th17 and B cell immunity are raised in CSF from patients with LNB, strengthening the involvement of both Th17 and B cell immunity in LNB. Furthermore, we noted several relations to demographic and clinical parameters. The lack of correlations between cytokine/chemokine levels in serum versus CSF indicates an intrathecal source of the cytokines and chemokines present in CSF, thus reflecting the pathological process in the CNS. B cell-related cytokines and chemokines CXCL13 was significantly elevated in CSF of all LNB groups, in particular the pleocytosis groups 1 and 2, as compared to the non-LNB group. CXCL13 has been suggested as a diagnostic marker for acute LNB, and a majority of patients in groups 1 and 2, those with most probable acute LNB, showed raised CSF levels above 142 and 250 pg/mL, respectively, while no patients in group 3 had levels over 142 pg/mL, supporting CXCL13 as a diagnostic tool and corroborating several studies [4,[6][7][8]21]. Patients in groups 1 and 2 with CSF CXCL13 levels below the cutoff values did not, however, differ in symptoms, duration of symptoms before LP or time to recovery after treatment compared to patients with higher levels of CXCL13. Other diagnoses than LNB cannot be completely ruled out in patients with CSF-CXCL13 levels below 142 pg/mL, especially in group 2, since this group only displayed CSF pleocytosis. 
We suggest that the LNB diagnosis in group 3 is questionable, since these patients displayed no CSF pleocytosis and had CSF-CXCL13 levels below the suggested cutoff of 142 pg/mL. Most of these patients reported symptoms less typical for LNB. Thus, the elevated AI could more likely reflect a previous infection, and other causes of their present symptoms are plausible. However, interestingly, slightly higher APRIL levels were nevertheless seen in this group. Our observation of increased levels of CXCL13 in children, in both serum and CSF, is also in line with a previous report [6]. The involvement of B cell-related cytokines and chemokines in the pathogenesis of LNB is also supported by the raised CSF levels of both APRIL and BAFF in groups 1 and 2, the groups with most probable LNB. This is, to our knowledge, the first time elevated levels of APRIL have been reported in LNB patients, while BAFF has been studied previously [12]. The raised levels of APRIL and BAFF support the critical role of B cell activation and proliferation in LNB. However, moderately increased CSF levels of these cytokines were associated with shorter time to recovery (defined as <3 months), while higher levels were found in patients with longer time to recovery. Speculatively, moderate levels reflect an appropriate B cell response, while higher levels may reflect an over-shooting response mirroring or even contributing to more extensive CNS pathology. Increased levels of APRIL and BAFF have also been found in patients with multiple sclerosis [22] and systemic lupus erythematosus [11,23], linked to antibody-mediated pathology and neuropsychiatric symptoms, respectively. Clearly, the role of APRIL and BAFF in LNB needs to be further elucidated, preferably in a prospective manner.

Th17-related cytokines and chemokines
We found elevated CSF levels of IL-17A in LNB patients, which corroborates previous studies [13][14][15]. We here extended the concept of Th17 immunity by showing elevated levels of CXCL1 and CCL20, both induced by Th17 cells and involved in recruitment of neutrophils and Th17 cells, respectively. CXCL1 in CSF was significantly raised in group 1, while CCL20 in CSF was significantly raised in all definite and possible LNB groups (groups 1, 2 and 3). This is, to our knowledge, the first study to show these Th17-related markers in patients with LNB. IL-17A, CXCL1 and CCL20 have, however, been reported in other Lyme borreliosis manifestations, such as Lyme arthritis (IL-17), erythema migrans and acrodermatitis chronica atrophicans (CXCL1 and CCL20) [24,25]. In experimental studies, CXCL1 was shown to be produced by human astrocytes and brain microvascular endothelial cells in response to B. burgdorferi [26]. Interestingly, patients who recovered within 3 months after treatment had lower levels of IL-17A in CSF. Thus, high levels of IL-17A in CSF may be a prognostic marker and, speculatively, a Th17 response could be involved in the pathogenesis of a delayed therapeutic response. In line with this notion, patients with prolonged symptoms after treatment of neurosyphilis had higher levels of IL-17A in CSF [27]. Furthermore, Th17 immunity has been linked to many autoimmune conditions, such as rheumatoid arthritis [28] and psoriasis [29]. In the CNS, Th17-related immune responses play a role in experimental autoimmune encephalomyelitis (EAE), an animal model of multiple sclerosis [30]. CCL20 can bind to the choroid plexus and lead Th17-related cells into the CNS [30].
Our findings add further aspects of the Th17related immune response in the pathogenesis of LNB and suggest that it may affect clinical course, although this needs to be confirmed. There are some limitations of the current study. The retrospective design hampers clinical assessments. Another potential limitation is the lack of truly healthy controls, although the chosen group represents a clinically relevant reference group. Group 2, with mostly children, had CSF pleocytosis and characteristic symptoms of LNB, but other, foremost viral, infections cannot be completely ruled out, especially in cases with low CSF levels of CXCL13, since presence of neurotropic viruses was mostly not investigated. Regarding the EFNS guidelines, we note some limitations in the classification of possible LNB cases. According to the guidelines, patients corresponding to our groups 2 and 3 are classified as possible LNB, while we find important differences between the two groups in terms of clinical presentation and CSF findings, including cytokine and chemokine levels. Conclusions We here demonstrate additional support for Th17 involvement in the intrathecal immune response in LNB as well as indications that high levels of IL-17A in CSF in the acute phase of the disease may be associated with slower recovery, hence proposing that IL-17A should be further evaluated as a possible biomarker for prognosis. Besides CXCL13, the B cell-related cytokines APRIL and BAFF are elevated in CSF from patients with LNB, and the levels could be associated with time to recovery after treatment.
v3-fos-license
2020-05-10T13:04:36.505Z
2020-05-01T00:00:00.000
218562337
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1424-8220/20/9/2656/pdf", "pdf_hash": "1910c5cc4ab8d739496b163908ee9c2fb574668a", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:222", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "fdef81e329e8d5fa826800d6b20759679a88a3af", "year": 2020 }
pes2o/s2orc
Multi-Scale Global Contrast CNN for Salient Object Detection Salient object detection (SOD) is a fundamental task in computer vision, which attempts to mimic human visual systems that rapidly respond to visual stimuli and locate visually salient objects in various scenes. Perceptual studies have revealed that visual contrast is the most important factor in bottom-up visual attention process. Many of the proposed models predict saliency maps based on the computation of visual contrast between salient regions and backgrounds. In this paper, we design an end-to-end multi-scale global contrast convolutional neural network (CNN) that explicitly learns hierarchical contrast information among global and local features of an image to infer its salient object regions. In contrast to many previous CNN based saliency methods that apply super-pixel segmentation to obtain homogeneous regions and then extract their CNN features before producing saliency maps region-wise, our network is pre-processing free without any additional stages, yet it predicts accurate pixel-wise saliency maps. Extensive experiments demonstrate that the proposed network generates high quality saliency maps that are comparable or even superior to those of state-of-the-art salient object detection architectures. Introduction Salient object detection (SOD) is a fundamental task in computer vision that attempts to mimic human visual systems that rapidly respond to visual stimuli and locate visually salient objects in a scene. Estimating salient regions from an image could facilitate a lot of vision tasks, ranging from low-level ones such as segmentation [1] and image resizing [2] to high-level ones such as image captioning [3]; thus, it has been receiving increasing interest in the computer vision community and has been extended to other relevant topics, such as video SOD [4,5] and RGB-D SOD [6,7]. Numerous methods have been developed in the past decades. Most of them focus on two topics; the first one works on predicting eye fixations, and the other one aims at detecting salient object/regions from an image. In this work, we mainly focus on the latter one, i.e., detecting salient objects from clutter scenes. Since the pioneer work of Itti's computational saliency model [8], extensive efforts have been devoted to develop saliency methods identifying objects or locating regions that attract the attention of a human observer at the first sight of an image. Most of these methods draw inspiration from bottom-up human visual attention mechanisms, e.g., Feature Integration Theory (FIT) [9], and dedicate to measure uniqueness, distinctness and rarity of scenes to infer saliency maps, where the basic Related Work A great number of salient object detection methods have been proposed in the past decades; a comprehensive survey can be found from [19]. In this section we give a brief review of saliency computation models closely related to our method. Contrast Based Models Recent studies [20] have suggested that visual contrast is at the central of saliency attention. Most existing visual saliency computation models are designed based on either local or global contrast cues. Local contrast based methods investigate the rarity of image regions with respect to their local neighborhoods [12]. The pioneer work on these models is Itti's model [8], in which saliency maps are generated by measuring center-surround difference of color, orientation, and intensity features. Later, Harel et al. 
[21] estimates center-surround saliency maps in a graph computation manner and achieves superior performance to that of Itti's model. Similarly, Klein et al. [22] encodes local center-surround divergence in multi-feature channels and computes them in an efficient scale-space to deal with scale variations. Liu et al. [23] incorporates multi-scale contrast features with center-surround histogram and color spatial distribution by Markov random fields to detect salient objects. Without knowing the size of the salient object, contrast is usually computed at multiple scales. Jiang et al. [24] integrates regional contrast, regional property and regional background descriptors to form saliency maps. One major drawback of local contrast based methods is that they tend to highlight strong edges of salient objects thus producing salient regions with holes. Global contrast based methods compute saliency of a small region by measuring its contrast with respect to all other parts of the image. Achanta et al. [11] proposes a simple frequency-tuned salient region detection method, in which saliency value of a pixel is defined as difference between its color and mean color of the image. Cheng et al. [12] introduces a global contrast based salient object detection algorithm, in which saliency of a region is assigned by the histogram difference between the target region and all other regions. Later, they propose a soft image abstraction method to capture large scale perceptually homogeneous elements, which enables more effective estimation of global saliency cues [25]. Differently, in [10], contrast and saliency estimation is formulated in a unified way using high-dimensional Gaussian filter. Cnn Based Models Representing pixels or regions efficiently and compactly is critical for saliency models. The aforementioned methods only employ low-level features such as color and texture. Recently, inspired by the great success of CNNs in many computer vision tasks, researchers in the community are encouraged to leverage power of CNNs to capture high level information from the images. Vig et al. [26] is probably the first attempt at modeling saliency computation using deep neural networks. This work focuses on predicting eye fixation by assembling different layers using a linear SVM. Zhao et al. [16] and Li et al. [27] extract a global feature of an image and a local feature of a small region in it using different CNNs, and then, the saliency of this region is formulated as a classification problem. Wang et al. [28] proposes a saliency detection model composed of two CNNs; one learns features capturing local cues such as local contrast, textures and shape information, and the other one learns the complex dependencies among global cues. He et al. [29] learns the hierarchical contrast features using multi-stream CNNs. To obtain accurate salient boundaries, images are first segmented into super-pixels in multi-scales. Two sequences, color uniqueness and color distribution, are extracted from each super-pixel and fed into CNNs to obtain features. Saliency maps are generated by fusing saliency results inferred from each scale. Li et al. [17] adopts a two-stream deep contrast CNN architecture. One stream accepts original images as input, infers semantic properties of salient objects and captures visual contrast among multi-scale feature maps to output their coarse saliency maps. The other stream extracts segment wise features and models visual contrast between regions and saliency discontinuities along region boundaries. 
Reference [30] puts forward a multi-scale encoder-decoder network (MSED) by fusing multi-scale features at the image level. Li et al. [31] presents a multi-scale cascade network (MSC-Net) for saliency detection in a coarse-to-fine manner, which encodes abundant contextual information whilst progressively incorporating saliency prior knowledge to improve the detection accuracy. Li et al. [32] discloses the importance of the inference module in saliency detection and presents a deep yet lightweight architecture which extracts multi-scale features by leveraging a multi-dilated depth-wise convolution operation. Different from them, in this paper, we design an end-to-end multi-scale global contrast network that explicitly learns hierarchical contrast information among global and local features of an image to infer its salient object regions. Compared with the aforementioned multi-scale CNN-based models, our proposed model is lightweight and does not require any pre-processing operations.

Multi-Scale Global Contrast CNN
In this section, we will give details of our multi-scale global contrast CNN (denoted as MGCC) architecture.

Formulation
Salient object detection can be considered as a binary classification problem. Given an image I, the saliency value of a pixel i (i could also be a super-pixel) in it can be represented as

S_i = P(y_i = 1 | f_i, f_I; W),    (1)

where S^I is the saliency map of the image I (for notational simplicity, we drop the superscript I in the remainder of this paper), S_i is the saliency value of pixel i, and f_i and f_I are features of the pixel i and the image I, respectively. y_i = 1 indicates that the pixel i is salient, while y_i = 0 indicates background, and W is the collection of parameters. In global contrast based methods, S_i can be estimated by measuring the distance between the two features,

S_i = C[d(f_i, f_I)],    (2)

where C[·] is a function estimating saliency maps from d(·), and d(·) is a metric function measuring the distance between f_i and f_I, which could be a simple Euclidean distance or another pre-defined distance metric. For example, in [12], features are represented using color histograms, and the saliency of a super-pixel is defined as its color contrast to all other regions in the image, which can be inferred from the weighted summation of color distances between the current region and all other ones. Since S_i is a probability value ranging from 0 to 1, C[·] often adopts the following form,

S_i = σ(d(f_i, f_I)),    (3)

where σ(·) is a nonlinear function, e.g., a sigmoid function, mapping d(·) to [0, 1]. If we represent f_i and f_I using deep features and define d(·) as a metric learned from training data, then Equation (3) can be solved using a convolutional neural network. In the following section, we will give details of the proposed network architecture to achieve this.

Global Contrast Learning
The essence of obtaining contrast information between two regions is quantifying a "distance" between their features, thus inducing a measure of similarity. As discussed above, the function d(·) can be viewed as a metric function that captures the distance between f_i and f_I, in which a larger distance indicates higher contrast and thus a higher probability of being salient. There are multiple ways to calculate d(·). For instance, it can be formulated with pre-defined metrics, such as the L_1 or L_2 norm. However, this requires the two features to have the same dimension, which is hard to achieve in CNNs. Suppose f_i^l is a feature of pixel i extracted from the l-th convolutional layer of a CNN (e.g., VGG-16 [33]).
Although we can apply global pooling on this layer to obtain f_I, thus making the two features have the same dimension, i.e., the number of channels of the feature maps in this layer, a lot of information will be lost during the pooling process, especially when l is a low layer. Furthermore, low level features lack semantic information, which is very important in detecting salient objects [34]. An alternative solution is adding an additional layer to project both of them into an embedding space, making them have equal size, and then calculating a distance matrix. However, it is hard to achieve satisfactory results by inferring salient objects directly from distance matrices; this is mainly because important semantic information about the salient objects is missing when computing distances. In addition to pre-defined metrics, another solution is defining the metric with knowledge of the data, that is, learning the metric function from the training samples. Powerful as they are, CNNs have been proved to be very effective in approximating very complex functions and in learning visual similarities. To this end, we attempt to design a CNN architecture that learns the distance function between f_i and f_I. One important point is that the semantic information of the object should be preserved, because we intend to recover accurate object boundaries. To achieve this, we design a very simple architecture that captures the global contrast between f_i and f_I. Firstly, VGG-16 [33] is employed to extract features from input images. VGG-16 consists of 13 convolutional layers, 5 pooling layers and 3 fully connected layers. We modify it by removing the last 3 fully connected layers and using a 256 × 256 input instead of the original 224 × 224. The last pooling layer of the modified VGG-16 (with a size of 8 × 8 × 512) is used to represent the global feature. To emphasize contrast information and reduce distractions from semantic information, we apply an additional 1 × 1 convolutional layer to obtain a compact 8 × 8 × 256 representation of the global feature. Then, we concatenate it with previous layers in a recurrent manner and introduce more convolutional layers to learn visual contrast information, as shown in Figure 2. At the end of the network, the output is up-sampled to match the size of the ground truth maps. Although simple, this repeating concatenation strategy can successfully characterize the contrast information of the image while preserving the semantic information of salient objects.
Figure 2. The symbols denote the width, height and channels of the feature maps at the previous layers, and C(g) denotes the channels of the global feature map; the global feature map is resized to the same size as the feature maps at previous layers and concatenated with them in a channel-wise manner.
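The following PyTorch-style sketch illustrates one way the global-contrast concatenation just described could be implemented. The class name, the number of refinement convolutions and their channel widths are our assumptions for illustration; only the 8 × 8 × 512 global feature, the 1 × 1 compression to 256 channels, the resizing and the channel-wise concatenation are taken from the text, so this should not be read as the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalContrastBlock(nn.Module):
    """Concatenate a compact global feature with a local feature map and
    refine the result with a few convolutions (illustrative sketch only)."""

    def __init__(self, local_channels, global_channels=256, mid_channels=128):
        super().__init__()
        # 1x1 conv compresses the 8x8x512 global feature to 8x8x256, as stated in the text.
        self.compress = nn.Conv2d(512, global_channels, kernel_size=1)
        # Refinement convolutions after channel-wise concatenation (count/width assumed).
        self.refine = nn.Sequential(
            nn.Conv2d(local_channels + global_channels, mid_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, 1, kernel_size=1),  # per-pixel saliency logits
        )

    def forward(self, local_feat, global_feat):
        g = self.compress(global_feat)                       # B x 256 x 8 x 8
        g = F.interpolate(g, size=local_feat.shape[2:],      # resize to local spatial size
                          mode='bilinear', align_corners=False)
        x = torch.cat([local_feat, g], dim=1)                # channel-wise concatenation
        return self.refine(x)                                # B x 1 x H x W saliency logits

# Example with the dimensions mentioned in the text: a 16x16x256 local map and
# the 8x8x512 global feature.
block = GlobalContrastBlock(local_channels=256)
local = torch.randn(1, 256, 16, 16)
global_feat = torch.randn(1, 512, 8, 8)
print(block(local, global_feat).shape)  # torch.Size([1, 1, 16, 16])
```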
Multi-Scale Global Contrast Network
Layers in a CNN from low to high levels capture different levels of abstraction. Neurons in early layers have small receptive fields that only respond to local regions of an input, thus producing low level features representing texture, edges, etc., while neurons in late layers have large receptive fields that may cover most of or even the entire image, thus capturing semantic information of the image or the objects in it. It is very important to employ low level features when generating output with accurate and clear boundaries [15]. Inspired by HED [35], we design multi-scale outputs to capture features in different layers and integrate them together to produce finer results. Specifically, we propose a Multi-scale Global Contrast CNN, abbreviated as MGCC, which adopts a truncated VGG-16 as the backbone. There are five convolutional segments, each of which contains two or three convolution layers, followed by one pooling layer to down-sample the feature maps. Our proposed model takes the final output feature map, i.e., that of the fifth convolution segment, as the global feature. Then, we concatenate it with previous layers in a recurrent channel-concatenation manner by first resizing the global feature map to the same size as the corresponding feature maps at previous layers (the global contrast module, which corresponds to the left part of Figure 2). This process is somewhat similar to the feature pyramid network (FPN) [36], but differs in that we respectively take the outputs of the previous four segments to concatenate with the fifth convolution segment, i.e., the global feature. For example, the output feature map of the fourth segment has a size of 16 × 16 × 256; thus, we resize the global feature, whose size is 8 × 8 × 512, by upsampling it two times to the size of 16 × 16 × 512. Then, we concatenate them in a channel-wise manner. To learn more visual contrast information, we introduce several more convolutional layers (the right part of Figure 2). Consequently, the proposed MGCC generates four scale outputs, each of which can produce accurate saliency maps. We resize all the saliency maps of the four scale outputs to the same size as the original image and then fuse them by element-wise summation to obtain the final, finer saliency map. Figure 1 shows several examples. The architecture of the proposed MGCC is shown in Figure 3. The detailed parameters are given in Table 1.
Table 1. Detailed architecture of the proposed network. (m, n)/(k × k) means that there are m channels in the previous layer and n channels in the current layer, and the filters connecting them have size k × k. The scale-4 architecture is slightly different from the other three in that it has one additional convolutional layer.
As discussed above, the salient object detection task can be formulated as a binary prediction problem; thus, we use binary cross entropy as the loss function to train our network. Given a set of training samples {(X_n, Y_n)}_{n=1}^N, where N is the number of samples, X_n is an image, and Y_n is the corresponding ground truth, the loss function L_m for the m-th scale output is defined as

L_m = - Σ_j [ Y_j log Ŷ_j^m + (1 − Y_j) log(1 − Ŷ_j^m) ],    (4)

where Ŷ_j^m is the predicted saliency value for pixel j. The fused loss L_fused takes a similar form to Equation (4), and the fusion weights w are also learned from training samples. Finally, the loss function for training is given by

L(W) = α L_fused + Σ_{m=1}^{4} β_m L_m,    (5)

where W = {W_vgg, W_1, . . . , W_4, w} is the collection of the parameters in the proposed network, and w denotes the trainable parameters in the additional convolution layer for scale-4 described in Table 1. α and the β_m are weights balancing the different loss terms and are all set to 1 in our experiments.
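As a concrete illustration of this training objective, the sketch below implements a multi-scale binary cross-entropy loss in PyTorch. The function and variable names are ours, and details such as averaging over pixels (rather than summing) are assumptions, so treat it as a minimal sketch of Equations (4)-(5) rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def multiscale_saliency_loss(scale_logits, fused_logits, target,
                             alpha=1.0, betas=(1.0, 1.0, 1.0, 1.0)):
    """Binary cross-entropy over the four scale outputs plus the fused output.

    scale_logits: list of 4 tensors, each B x 1 x H x W (resized to the target size)
    fused_logits: B x 1 x H x W fused prediction
    target:       B x 1 x H x W ground-truth saliency map in {0, 1}
    """
    loss = alpha * F.binary_cross_entropy_with_logits(fused_logits, target)
    for beta, logits in zip(betas, scale_logits):
        loss = loss + beta * F.binary_cross_entropy_with_logits(logits, target)
    return loss

# Toy usage with random tensors:
target = (torch.rand(2, 1, 256, 256) > 0.5).float()
scales = [torch.randn(2, 1, 256, 256) for _ in range(4)]
fused = torch.randn(2, 1, 256, 256)
print(multiscale_saliency_loss(scales, fused, target).item())
```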
The following four widely used benchmark datasets are employed for evaluation:
• ECSSD [37] is a challenging dataset which contains 1000 images with semantically meaningful but structurally complex natural contents.
• HKU-IS [27] is composed of 4447 complex images, each of which contains many disconnected objects with diverse spatial distributions. Furthermore, it is very challenging due to the similar foreground/background appearance.
• PASCAL-S [38] contains a total of 850 images, with eye-fixation records and roughly pixel-wise, non-binary salient object annotations included.
• DUT-OMRON [39] consists of 5168 images with diverse variations and complex backgrounds, each of which has pixel-level ground truth annotations.

Evaluation Metrics
Three metrics, including precision-recall (P-R) curves, F-measure and Mean Absolute Error (MAE), are used to evaluate the performance of the proposed and other methods. For an estimated saliency map with values ranging from 0 to 1, its precision and recall can be obtained by comparing the thresholded binary mask with the ground truth. Making these comparisons at each threshold and averaging them over all images generates the P-R curves of a dataset. The F-measure is a harmonic mean of average precision and recall, defined as

F_β = (1 + β²) · Precision · Recall / (β² · Precision + Recall).

As suggested by many existing works [16,40], β² is set to 0.3. MAE reflects the average absolute difference between the estimated saliency map S and the ground truth saliency map G,

MAE = (1 / (W × H)) Σ_{x=1}^{W} Σ_{y=1}^{H} |S(x, y) − G(x, y)|,

where W and H are the width and height of the maps. Both MAE and the F-measure are based on pixel-wise errors and often ignore structural similarities, as demonstrated in [41,42]. In many applications, it is desired that the results of the salient object detection model retain the structure of objects. Therefore, three more metrics, i.e., the weighted F-measure F_β^w [41], S-measure (S_α) [42] and E-measure (E_m) [43], are also introduced to further evaluate our proposed method. Specifically, F_β^w [41] is computed as follows:

F_β^w = (1 + β²) · Precision^w · Recall^w / (β² · Precision^w + Recall^w),

where Precision^w and Recall^w are the weighted precision and recall. Note that the difference between F_β^w and F_β is that it can compare a non-binary map against the ground truth without a thresholding operation, avoiding the interpolation flaw. As suggested in [41,[44][45][46], we empirically set β² = 0.3. S_α [42] is proposed to measure the spatial structure similarities between saliency maps,

S_α = α · S_o + (1 − α) · S_r,

where α is a balance parameter between the object-aware structural similarity S_o and the region-aware structural similarity S_r, as suggested in [42,47,48]. The E-measure (E_m) [43,44,49,50] is used to evaluate the foreground map (FM) and noise, and can correctly rank maps consistently with the application ranking,

E_m = (1 / (W × H)) Σ_{x=1}^{W} Σ_{y=1}^{H} φ(x, y),

where φ denotes the enhanced alignment matrix, which captures pixel-level matching and image-level statistics of a binary map.
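To make the simpler metrics concrete, the NumPy sketch below computes the F-measure, MAE and one point of a P-R curve; the helper names are ours, the example inputs are random toy data, and the structure-aware metrics (F_β^w, S_α, E_m) are not reimplemented here.

```python
import numpy as np

def f_measure(precision, recall, beta_sq=0.3):
    """F-measure with beta^2 = 0.3, as used above."""
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall + 1e-8)

def mae(saliency, gt):
    """Mean absolute error between a predicted map and the ground truth, both in [0, 1]."""
    return np.abs(saliency.astype(np.float64) - gt.astype(np.float64)).mean()

def pr_at_threshold(saliency, gt, threshold):
    """Precision/recall of the binarized map at one threshold (one point of a P-R curve)."""
    pred = saliency >= threshold
    tp = np.logical_and(pred, gt > 0.5).sum()
    precision = tp / (pred.sum() + 1e-8)
    recall = tp / ((gt > 0.5).sum() + 1e-8)
    return precision, recall

# Toy example on a random map and mask:
rng = np.random.default_rng(0)
sal, mask = rng.random((64, 64)), rng.random((64, 64)) > 0.5
p, r = pr_at_threshold(sal, mask, 0.5)
print(f_measure(p, r), mae(sal, mask))
```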
These methods are chosen because the first 5 are also CNN and contrast based methods, and the last five traditional methods are either reported as benchmarking methods in [19] or developed recently. For fair comparison, we employ either implementation or saliency maps provided by the authors. We report P-R curves in Figure 4 and list Max F-measure (MaxF β ), MAE, F w β , S α and E m in Table 2. From Figure 4 we can see that our method achieves better P-R curves on the four datasets; especially on ECSSD and HKU-IS datasets, it obtains the best results, showing that MGCC can achieve the highest precision and the highest recall comparing with other methods. On PASCAL-S and DUT-OMRON datasets, although MGCC drops faster than DCL [17] and ELD [53] on the right side of the P-R curves, we can observe that the MGCC obtains better or at least comparable break-even points (i.e., the points on the curves where precision equals recall), which indicates that our method can keep a good balance between precision and recall. From Table 2, we can see that deep learning based approaches significantly outperform traditional saliency models, which clearly demonstrate the superiority of deep learning techniques. Among all the methods, the proposed MGCC achieves almost the best results over all the four datasets, except for the HKU-IS dataset, on which, DCL, a leading contrast based saliency model, performs slightly better than ours in terms of MaxF β and F w β ; however, it underperforms ours in terms of MAE, S α , and E m . The proposed MGCC and DCL [17] obtain identical MaxF β on the PASCAL-S dataset, yet lower MAE is achieved by our MGCC. It can be seen that MGCC improves MAE with a considerable margin on all four datasets. This demonstrates that our method can produce more accurate salient regions than other methods. Table 2. Performance of the proposed MGCC and other 10 state-of-the-art methods on 4 popular datasets. Red, green and blue indicate the best, the second best and the third best performances. "-" represents no reported.`````````M ethods Datasets ECSSD [37] HKU-IS [27] PASCAL-S [38] DUT-OMRON [39] Some example results of our and other methods are shown in Figure 5 for visual comparison, from which we can see our method performs well even under complex scenes. It is worth mentioning that to achieve better performance and obtain accurate salient regions, many CNN based models adopt two-or multi-stream architectures to incorporate both pixel-level and segment-level saliency information [16,17,27,28,53]. For instance, DCL consists of two complementary components; one stream generates low resolution pixel-level saliency maps, and one stream generates full resolution segment-level saliency maps. They combine the two saliency maps to obtain better results. While our network only has one stream and predicts saliency maps in pixel wise, with simpler architecture and without additional processing (e.g., super-pixel segmentation or CRF), our method achieves comparable or even better results than other deep saliency models. Another thing that should be mentioned is that, with simple architecture and completely end-to-end feed-forward inference, our network produces saliency maps at a near real time speed of 19 fps on a Titan X GPU. Ablation Study To further demonstrate the effectiveness of the multi-scale fusion strategy, we compare our proposed model with the results output from scale-1, scale-2, scale-3, and scale-4, as illustrated in Table 2. 
From Table 2, we can observe that when merging global feature with the features of previous layers, the performance gradually increases from scale-1 to scale-4, which verifies that merging higher-level semantic features can further boost the performance. Additionally, from the metrics, we can see that fusing multi-scale information (i.e., our proposed MGCC model), the performance has significantly improved, which indeed demonstrates the effectiveness and superiority of our proposed multi-scale fusion strategy. Conclusions and Future Work In this paper, we have proposed an end-to-end multi-scale global contrast CNN for salient object detection. In contrast to previous CNN based methods, designing complex two-or multi-stream architectures to capture visual contrast information or directly mapping images to their saliency maps and learning internal contrast information in an implicit way, our network is simple yet good at capturing global visual contrast, thus achieving superior performance both at detecting salient regions and processing speed. As demonstrated in existing literature [57], the SOC dataset [58] is the most challenging dataset. Some attempts have been made on this dataset in Deepside [44] and SCRNet [59]. We look forward to conducting some experiments on this dataset in our future work to further demonstrate the effectiveness and superiority of our proposed approach.
Lightweight Plastic Materials

Lightweight constructions are increasingly used in the automotive, aerospace and construction sectors, because low-density materials allow the structural weight of products to be reduced. This may result in substantial fuel savings and a lower carbon footprint in transportation, and it facilitates the handling of components in house construction. Moreover, a low material density helps conserve natural resources, since less material is required for manufacturing consumer goods.

Introduction
Lightweight constructions are increasingly used in the automotive, aerospace and construction sectors, because low-density materials allow the structural weight of products to be reduced. This may result in substantial fuel savings and a lower carbon footprint in transportation, and it facilitates the handling of components in house construction. Moreover, a low material density helps conserve natural resources, since less material is required for manufacturing consumer goods. In polymer engineering the lightweight solutions include:
- selection of polymers with a density lower than that of counterparts with comparable properties,
- using composites filled with natural fibers instead of glass fibers,
- composite sandwich panels with a cellular/honeycomb structure,
- hollow components manufactured by gas- or water-assisted injection molding,
- polymer foaming.
Cellular and hollow-structure polymeric materials offer additional advantages resulting from their thermal insulating properties, thus allowing further energy savings. The preference for low-density materials is among the factors behind the success of polypropylene (PP) in the automotive sector. Being 15-20% lighter than other plastics, PP allows substantial fuel savings: it is assumed that a weight reduction of 100 kg in a car body brings about fuel savings of 0.3-0.5 litres per 100 km.

Biocomposites
Polymer composites constitute a broad group of materials, composed of a macromolecular matrix and various fillers. Currently the filler market for plastic composites is dominated by calcium carbonate (40%) and glass fiber (31%), together with some other inorganic fillers such as talc, mica and clay. Although conventional fillers change the properties of the composites, their high density is not beneficial to fuel savings in automotive applications. Polymer composites with cellulose fillers are growing rapidly, mainly in the construction and automotive industry. The main advantage of such composites is their lower density in comparison to glass fiber reinforced plastics. In Fig. 1 the density of polypropylene and of PP filled with wood flour (WF), with or without a compatibilizer (PP/WF/comp), is compared to that of PP composites with glass fibers (GF). One can observe an increase in density with filler content for all composites; however, the density of the PP/GF materials is markedly higher. The low density of polymer composites filled with natural fibers results from the specific hollow structure of the fibers (Fig. 2), which is totally different from the bulk structure of glass fibers. Cellulose fibers do not exist in nature as separate items; they form bundles. Each bundle contains 10-60 fibers with a diameter of 10-17 microns, linked together with pectins.
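To make the density comparison of Fig. 1 more tangible, the short sketch below estimates composite density with the standard inverse rule of mixtures for weight fractions. Both the formula and the assumed densities (PP ≈ 0.91 g/cm³, glass fiber ≈ 2.55 g/cm³, wood flour treated as an effective 1.3 g/cm³) are illustrative textbook values, not data taken from this chapter, and they ignore porosity and the hollow fiber structure discussed above.

```python
def composite_density(w_filler, rho_filler, rho_matrix=0.91):
    """Inverse rule of mixtures for a weight fraction w_filler (0..1); densities in g/cm^3."""
    return 1.0 / (w_filler / rho_filler + (1.0 - w_filler) / rho_matrix)

for w in (0.1, 0.3, 0.5):
    print(f"{w:.0%} wood flour: {composite_density(w, 1.30):.2f} g/cm^3, "
          f"{w:.0%} glass fiber: {composite_density(w, 2.55):.2f} g/cm^3")
```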
Natural fillers are biodegradable and derive from renewable resources, which is highly advantageous for sustainable development (Ellison et al., 2004; Bledzki et al., 2008; Oksman Niska & Sain, 2008; Klesov, 2007; Kozlowski & Kozlowska, 2005). Apart from the ecological reasons, natural fillers offer the possibility of reinforcing thermoplastic matrices (Figs. 3 and 4) and provide weight savings in comparison to glass fibers (Table 1).

The increase in the stiffness of thermoplastic composites filled with cellulose fibers depends on the nature of the matrix polymer: it is particularly significant for composites based on low density polyethylene (LDPE). In the case of LDPE filled with 30 wt.% of hemp fibers the Young modulus increased by 300%, whereas at 50% loading it is 5 times higher (Fig. 3). The tensile strength of LDPE composites filled with natural fibers also increases with the filler content: at 30 wt.% an increase of 60% was observed in comparison to neat LDPE, while at 50 wt.% of hemp fibers the improvement reached 100% (Fig. 4).

Looking at the data presented in Table 1 and considering that glass-fibre reinforced polypropylene composites are widely used in automotive components (front/end carriers, door panel supports, dashboards, consoles, seat backs, headliners, package trays, underbody shields etc.), one should expect significant environmental profits from replacing glass fibers with natural fibers (NF). The fibers used most frequently for reinforcement of polymer composites are the bast fibers grown in the climate of Europe (flax and hemp) and sub-tropical fibers (kenaf, jute and sisal) imported mainly from Asia. Composites filled with wood/natural fibers are called biocomposites or wood polymer composites (WPC); sometimes the term "artificial wood" is also applied. Their properties combine higher stiffness, hardness and better dimensional stability in comparison to plastics, and lower density in comparison to mineral fillers. Currently biocomposites are being used by Fiat, Ford, Opel, Daimler Chrysler, Saturn, BMW, Audi, Peugeot, Renault, Mercedes Benz and Volvo.

Different polymers can be used as a matrix for biocomposites, and the loading of cellulose fillers usually varies within a range of 10 to 70%. Although thermosets were first used as a matrix for wood composites, currently thermoplastics are applied for manufacturing biocomposites. The most frequently used matrices are polypropylene, polyethylene, polyvinylchloride and polystyrene. Depending on the chemical structure of the matrix, the interaction at the polymer-filler interface is of different strength. This is crucial for the mechanical properties and melt viscosity of composites. High adhesion is expected if both components have polar groups. Unfortunately, this is not the case for the most popular biocomposites, which are composed of hydrophobic polyolefins and hydrophilic cellulose fibers (-OH groups). Therefore, extensive research has been performed to enhance interfacial adhesion and improve the dispersion of cellulosic fibers in polyolefins. A frequently used solution is the addition of compatibilizers composed of blocks interacting physically or chemically with each component of the composite. Good results are reported on using maleated polypropylene (PP-g-MAH) for compatibilization of PP-based composites with natural fillers (Fig. 1). Another approach might be hydrophobisation of cellulose fibers (esterification).
Although polymer composites with natural fillers have been commercialized, their industrial applications are limited in some sectors because of their low impact strength and their high density compared to natural wood and polyolefins. The original processing technology of biocomposites was nonwoven technology, which is a normal production precursor to compression moulding. Further developments extended the processing technology to extrusion or injection molding of composites reinforced with short cellulose fibers. What should be considered in the processing of thermoplastic composites is the melt viscosity, which increases after the addition of fillers. The melt flow rate (MFR) of LDPE and of composites with hemp fibers is presented in Fig. 5. The flowability of the composite filled with 30 wt.% of fibers is 2.5 times lower than that of neat LDPE, dropping at 50 wt.% of hemp to 6 times below the MFR of polyethylene. For that reason either a higher pressure or a higher processing temperature has to be used in order to shape high quality products.

A particular type of biocomposite is the particle board, which is a composite material made from small pieces of wood or other lignocellulosic material (branches, stems, sawdust, straw or bagasse) that are mechanically pressed into sheets bonded with a resin. A similar sandwich structure is found in composites whose external skin layers are made of a bulk plastic, whereas the core is lightweight (density of 40-70 kg/m³), made either as a foam or as a honeycomb structure. Such sandwich materials (Panelplus panels), when used in the manufacture of truck bodies, weigh 60% less than the equivalent plywood panels (Institute of Materials, Minerals and Mining, 2004). Hollow, lightweight plastic components may also be manufactured by gas- or water-assisted injection molding. In this technology the mould is only partially filled with a polymer melt, after which a gas or water is injected to pack the mould completely. After mold cooling and solidification of the polymer, the gas or water is evacuated from the cavity and the hollow part is ejected (Fig. 6).

Polymer foaming
In parallel with other technologies, a fast development in the foaming technology of plastic parts is observed, driven primarily by the demands of the transportation sector. The main reason is that porous components allow the amount of raw materials and the fuel consumption to be reduced at the same time. Another characteristic of foams is thermal insulation, therefore an important application field is the insulation of buildings and industrial constructions. A typical example is polystyrene foam, which has been used in the construction industry for over 50 years. Polyurethane and polyethylene foams have been used for the insulation of pipelines, air ducts, containers, solar collectors etc. (Fig. 7). Plastic foams are used extensively for the thermal insulation of refrigerators and freezers. The cellular structure of foams also allows sound and vibration damping, which has been exploited in sound insulating panels, upholstery in furniture, car seats, protective pads etc. Cellular plastics may be manufactured either by periodic or continuous technology, using chemical or physical foaming agents. Conventional foams have cells of large size (0.1-1 mm) and broad size distribution (Fig. 8), therefore their mechanical properties are inferior to those of bulk polymers. The cell density of conventional foams is in the range of 10⁴-10⁶ cells/cm³.
Microfoams contain a much higher number of cells (>10⁹/cm³), whose size is markedly smaller (ca. 10 µm). Such materials exhibit better performance than conventional foams, with higher mechanical properties and better thermal insulating characteristics.

Methods of foaming
Foaming techniques can be divided into several groups. Similarly to plastics processing technology, they can be divided into continuous (extrusion foaming) and periodic (injection molding or press foaming) processes. Periodic technologies require a long time and, excluding the manufacturing of expanded polystyrene (EPS), they are rarely applied. Another rather seldom used method is the manufacturing of polymer composites filled with easily soluble compounds, like salt or sugar. After the filler is eluted with an appropriate solvent, the empty holes form the cells of the resulting foam. A cellular structure can also be formed by sintering polymer powders at high temperature: the soft surfaces of neighboring spheres stick to each other, whereas the free volumes between them create the foam cells. Akzo Nobel offers a foaming method based on mixing a matrix polymer with thermoplastic spheres filled with volatile hydrocarbons (Expancel). Upon heating the polymer becomes soft, while the hydrocarbon evaporates, expanding the material. The initial sphere diameter is 12 µm, which after expansion increases to 40 µm. The sphere material should be compatible with the matrix polymer, whereas the hydrocarbon is selected depending on the required decomposition temperature. The Expancel spheres are added to a polymer in an amount of 2-8% and such a mixture is processed by extrusion or injection molding. A density decrease of 30% was reported after the addition of 3% microspheres; however, the cells were of diverse size (Fig. 9). Undoubtedly the principal polymer foaming technology is the one involving a gas delivered to a polymer by means of a chemical (CFA) or physical foaming agent (PFA). Low density foams (2-500 kg/m³) are manufactured with physical blowing agents, whereas chemical blowing agents produce foams with a density of 500-750 kg/m³.

Foaming agents
Physical blowing agents comprise gases and low-boiling hydrocarbons or their halogenated derivatives. The initially used blowing agents (pentane, butane, chlorofluorohydrocarbons) are being withdrawn because of ecological reasons (the Montreal Protocol Agreement) and fire hazards, and replaced by inert gases (argon, nitrogen, carbon dioxide). Hydrofluoroolefins also have interesting properties and have been used for manufacturing polyurethane and EPS foams (Rosato, 2010). Unfortunately, several safe blowing agents exhibit either too low a solubility in polymers or too high a heat transfer coefficient, which deteriorates the thermo-insulating properties of foams. Thermal insulation, expressed with the heat transfer coefficient, depends on the cell size and density (Schellenberg & Wallis, 2010), but also on the nature of the gas in the cells. Nitrogen and oxygen have comparable values (at 0 °C, 22.7 and 23.2 mW/(m·K) respectively), whereas that of carbon dioxide equals 13.7 mW/(m·K). Thus, foams filled with CO2 exhibit much better thermo-insulating properties than others. Foaming with gases results mostly in foams of large cell size; however, using supercritical fluids allows the manufacturing of microfoams (Cooper, 2000). At critical conditions (temperature and pressure) the densities of the liquid and the gas become equal.
Above the critical temperature, condensation of a gas is impossible, regardless of the applied pressure. For that reason carbon dioxide is most appropriate with respect to transportation, storage and dosing conditions, since its critical temperature is +31.1 °C, whereas that of nitrogen is -146.9 °C and that of argon -122.3 °C. Chemical blowing agents decompose within a specific temperature range, emitting a stoichiometric amount of gases (usually nitrogen or carbon dioxide). Chemical blowing agents are classified as exo- or endothermic, depending on the thermal effect of the decomposition process. Due to the vigorous character of the decomposition reaction, exothermic CBAs produce large cells (>100 µm) with a non-uniform size distribution and cause a high overall expansion of the material (Fig. 10). Endothermic chemical blowing agents need heat to continue decomposing, therefore it is easier to control the process simply by changing its temperature. For that reason foams of smaller cell size can be produced with endothermic CBAs. The most popular endothermic blowing agent is a mixture of sodium hydrogen carbonate and citric acid. It decomposes to carbon dioxide and water in a two-stage reaction: the first stage at 130-140 °C, the second at 180-200 °C (Fig. 11). The total amount of gases emitted in the above reaction equals 120 cm³/g.

The solubility of gases in polymers is a crucial factor for foaming. If it is high, the saturation time of the polymer with the gas is shorter and a lower pressure level is required to keep the gas in the melt. The solubility of carbon dioxide in several polymers is higher than that of nitrogen, therefore it generates more cell nuclei, which is essential for the foam structure. The size of the foaming gas molecule is also important for the foaming technology. Since the CO2 molecule is small, it diffuses quickly through a polymer, which means that the cell growth rate should be high. On the other hand, however, carbon dioxide may escape more easily to the atmosphere, thus causing a collapse of the foamed material.

Technology of polymer foaming
The foaming process consists of four stages:
1. dissolving of a gas in a polymer under high pressure (if the polymer is in a solid state) or at high temperature and high pressure (e.g. in a molten polymer);
2. cell nucleation due to a sudden change in the thermodynamic state of the material resulting from its decompression or a temperature change;
3. cell growth: the cell size and density depend on the blowing agent content, the process parameters and the properties of the polymer;
4. morphology fixation by polymer solidification, e.g. cooling below the glass temperature or crosslinking.
Cellular polymers may be manufactured by saturation of a solid polymer with gas in a high pressure vessel at elevated temperature. The first trials concerned foaming of polystyrene with carbon dioxide. It was evidenced that the equilibrium amount of CO2 adsorbed by PS at 80 °C under 240 bar pressure equals 11.8%. The foam density varied within 0.05-0.85 g/cm³ depending on the applied temperature, pressure and pressure drop rate. The cell size amounted to 1-70 µm (Arora et al., 1998). The foam morphology is related to the structure of polymers: the most important factors are branching in the polymer chain and crystallinity (Huang et al., 2008; Rachtanpun et al., 2004; Su & Huang, 2010; Li et al., 2007). It is well established that foaming is easier with amorphous polymers like PS than with crystalline ones like polyolefins.
Amorphous polymers usually have higher melt strength and are more viscous, therefore cell growth is more difficult, but they hold gas pores better. Crystalline resins are less viscous but difficult to foam due to chain entanglement and crystal formation on cooling, which disturbs the cell growth process. Foaming of semicrystalline polymers is more complicated than foaming of amorphous polymers, because the gas dissolves exclusively in the amorphous regions. That causes non-uniform cell nucleation and an irregular foam morphology. The temperature range of efficient foaming is limited from above by the polymer degradation temperature and from below by the polymer melt viscosity, which must still allow for cell growth. Because foaming takes place only in the amorphous regions, the fast increase in viscosity of semi-crystalline polymers upon cooling makes the available temperature range small in comparison to that of amorphous polymers.

The technology of direct polymer saturation with a gas is useful rather for niche products due to the long time required for saturation, because the diffusivity of a gas in a solid polymer is low. Nevertheless, it may be used, among others, for scaffold manufacturing. Mooney et al. (1996) have shown that after treating a copolymer of D,L-lactic acid and glycolic acid with carbon dioxide under a pressure of 5.5 MPa for 72 hours, followed by fast decompression, cells are observed in the polymeric material. Their size equals ca. 100 µm, while the cell density depends on the process parameters and the crystallinity of the polymer, reaching 93%. Cell nucleation in a polymer starts spontaneously after a sudden change of the thermodynamic state of the system (homogeneous nucleation) or may be induced by the addition of a small amount of a filler (heterogeneous nucleation) (Lee, 2000).

The technology of foaming by means of extrusion or injection molding is more widely used, because gas diffusion in molten polymers is faster and is facilitated by mixing. These technologies were applied in the late nineties at the Massachusetts Institute of Technology (MIT) after successful research on manufacturing PS microfoams by solid state saturation with supercritical CO2. Since then foaming injection molding has reached industrial maturity, providing remarkable material savings. Chen et al. (2006) presented an example of polypropylene foaming, showing that the material savings for thin wall (0.5 mm) items equal 4-9% and for thick wall (15 mm) samples reach 50%. Unfortunately, in parallel a deterioration of mechanical properties was reported. Similar findings were presented by Bielinski (2004), who tested foaming of polypropylene and polyvinyl chloride using chemical blowing agents. He found that, depending on the CBA content (0.5-2 wt.%) and the injection molding parameters, the cell size varied in a range of 10-350 µm.

The material savings and the lower amount of waste are the reasons why chemical blowing agents are widely used. In Fig. 12 a yogurt cup made of PS foamed with a CBA is presented. The mass of the cup was decreased by 15-20% in comparison to the non-foamed item. Even if the foam structure is not uniform, the economic and ecological advantages are obvious. The technology of microcellular foam extrusion has been extensively studied, among others, by Park and co-workers (Park, 2000). After injection of a gas into the polymer melt, its diffusion is intensified by mixing, which results in complete gas dissolution. The equilibrium reached in the extruder is lost after the melt exits the die (Fig. 13).
The sudden pressure decrease also causes a decrease in gas solubility, and the gas has to evolve from the polymer in the form of microcells. These sites form nuclei, from which larger cells grow as more gas appears in the system while the decompression of the polymer melt-gas solution proceeds. That process continues until a new equilibrium state is reached or the polymer solidifies. The dominating stresses are related to shearing and elongational forces, therefore knowledge of the dependence of the polymer melt viscosity on temperature and on the shear rate is essential. In Figs. 14 and 15 the basic characteristics of three different LDPE grades are presented. The examples show that different grades of the same polymer differ in pseudoplasticity, therefore they should exhibit a different melt response at low and high deformation rates. That means a different mixing efficiency of the melt with the gas (torque level and gas diffusion rate), easier or more difficult cell growth (melt viscosity) and a different resistance to rupture (melt strength). One can expect that significant differences in viscoelastic properties should have an impact on the morphology of cellular plastics. The most important stages of the foaming process (e.g. cell nucleation and growth) are presented schematically in Fig. 16.

A critical factor for foaming is the cell nucleation rate, which is related to the change in the thermodynamic state of the system and phase separation. The gas formerly dissolved in the molten polymer evolves simultaneously at several sites of the material (a). Since the nucleation rate is much higher than the diffusion rate, the cell nuclei arise first; only after some time do they start growing, due to the diffusion of further gas molecules, which appear as the gas solubility in the polymer melt falls because of the pressure and temperature decrease in the material after it exits the extrusion die. The number of cells nucleated in the polymer depends on the difference between the pressure in the melt and the atmospheric pressure. A high difference, developed by a change in the processing parameters or the equipment configuration, facilitates the generation of a higher cell density and a smaller cell size (Figs. 17 and 18). The cell growth process, (b) and (c), depends on several parameters, among others on the melt viscosity, the melt strength and the dynamics of cooling. For less viscous melts the cells grow faster and are larger than those in a more viscous system (Fig. 19).

Fig. 19. Cell size and population for different LDPE grades (130 °C)

One should consider that the gas dissolved in a polymer causes its plasticization, therefore the melt viscosity decreases markedly, thus modifying the foaming progress (Fig. 20). The total number of cells and their size are related to the polymer melt strength. If it is high, the neighbouring cells may grow individually; however, if it is low, the cell walls may rupture due to the internal gas pressure and coalescence of cells occurs (Fig. 21). Another issue is gas escape from the polymer melt to the atmosphere and the resulting surface warpage. The process of cell growth outlined in Fig. 16 stops as soon as the polymer glass temperature reaches the lower foaming limit. This may proceed quickly, because Tg increases with decreasing concentration of the gas in the polymer. At that stage the cooling dynamics is essential for the foam morphology (Fig. 22). Provided the cooling in the cooled calibrator is fast, the cells remain at the size generated shortly after exiting the die.
However, if cooling proceeds slowly, the cells continue growing until the material solidifies by heat exchange with the ambient atmosphere. Research on extrusion microfoaming of polystyrene with supercritical carbon dioxide has shown a strong influence of the extrusion die temperature and the dynamics of cooling on the resulting foam density and structure; the foaming level amounted to 15-25% (Sauceau et al., 2007). Extruded microfoams have a small cell size and exhibit a narrow cell size distribution (Fig. 23). The cell size depends on the pressure within the cells, the melt strength and the interfacial tension: the higher the pressure and the lower the melt strength and interfacial tension, the larger the cells generated. As the cells grow and their wall thickness decreases, the probability of coalescence of neighbouring cells increases. Behravesh et al. (1998) found that coalescence is facilitated by a high shear stress during processing, however its probability decreases with a lower melt temperature. Therefore, cooling of the gas-polymer solution in a heat exchanger or within the die is advantageous.

Polymer foaming may be performed with single screw extruders of high L/D ratio equipped with mixing elements in the last section of the screw, or with twin screw or tandem extruders. In any case, precise dosing of the gas is very important, since a surplus causes the formation of large cells. In a tandem system (Fig. 24) the first extruder serves for melting the polymer and mixing it with the injected gas. In the second extruder further homogenisation of the temperature and of the gas distribution within the polymer melt is performed. Proper design of every detail of the extrusion foaming line, the processing parameters and the composition of the foamed material are crucial factors for the final morphology of the foam. The die geometry is of high importance, because cell nucleation takes place there. An extensive discussion of the role of the die in generating a high cell density has been presented by Xu et al. (2003) using the example of PS foaming with carbon dioxide. The pressure in the die depends on the temperature and the shear forces in the molten polymer. If the die temperature is high, the melt viscosity is low, which causes a low pressure drop in the melt after it exits the die and a low number of nuclei. However, at a low die temperature the melt pressure becomes high and the cell density is also high. The shear stress level in the melt also influences the pressure in the die. A high pressure is beneficial to the cell density; however, too high a level may cause melt instabilities and cell rupture. The number of nuclei in a foamed polymer melt may be increased by means of filler addition (Antunes et al., 2009; Khorasani et al., 2010). Foaming of polypropylene with carbon dioxide and the addition of talc has shown that the cell density depends on both the filler and the foaming agent content; however, the nucleating effect of talc was observed only at low talc loading. Similarly, a positive effect of talc on the cell nucleation process and the total cell density has been noted in foaming of PP with isopentane. It has been concluded that the nucleation mechanism depends on the size of the gas molecules used for foaming. Recently also nanofillers have been reported as efficient cell nucleants for PP and HDPE foams (Khorasani et al., 2010). Cellular plastics manufactured from polymer blends exhibit interesting properties.
Based on knowledge of polymer melt rheology and blend morphology, one can generate a bi-modal distribution of cells located in different polymer domains. That idea has been presented for PPE/SAN blend foaming with carbon dioxide (Ruckdaeschel et al., 2007). Another modification of the foaming technology involves mixing the polymer with a CBA, partial crosslinking of the polymer and then decomposition of the blowing agent with evolution of a gas. The cell structure can be limited by means of the crosslinking level, thus microfoams can be manufactured with such a technology (Rodriguez-Perez et al., 2008). Microcellular polyethylene is used, among others, for pipe insulation, gaskets and in healthcare as wound dressings or sensitive skin protection.

Foaming devices
The extensive research on polymer foaming performed at MIT, Massachusetts (USA) led to the commercialisation of the obtained results. For that purpose the company Trexel Inc. was established, which has developed the MuCell® technology of microfoam manufacturing by means of a supercritical gas [9]. Its main advantages are low consumption of polymer, high mechanical properties (including the impact strength), good thermo-insulating properties, as well as high surface quality of the manufactured details. Such characteristics have been obtained due to the low cell size (5-50 µm), their regular shape and the high cell density (10⁸ cells/cm³). Applying the MuCell® technology to injection molding requires mounting the gas injector in a proper place on the cylinder (Fig. 25), selecting a screw geometry enabling thorough mixing of the gas with the molten polymer and synchronizing the gas injection with the overall injection cycle time. Decompression of the gas-melt solution should take place only in the mould. MuCell® allows for a shorter cycle time (15-35%), lower density of details (6-12%), reduction of internal stresses and high dimensional stability. When applying the MuCell® technology to extrusion foaming, one should also consider the gas injector, proper screw design and a well selected die geometry. Proper matching of all parameters should result in a high pressure in the melt, which is crucial for efficient foaming. Foaming extrusion by MuCell® allows, among others, the manufacturing of HDPE pipes with a cell size of 15 µm and a material density of 0.54 g/cm³, EPDM gaskets for automotive applications etc.

Another technology of cellular profile extrusion is offered by Sulzer Chemtech (Switzerland). The Optifoam™ technology places a gas injector between the screw and the extruder head (www.sulzerchemtech.com). The gas (CO2 or N2) is injected into the polymer melt through a fluid injection nozzle made of sintered metal. The mixture of gas and polymer is next thoroughly homogenized in the static mixer section (Fig. 26). The ErgoCell® foaming technology developed by Demag Ergotech GmbH (Mapleston, 2002) assumes a two-stage process, with injection of the gas into a mixing device located between the stationary screw and a melt accumulator equipped with an injection plunger (Fig. 27). ErgoCell® allows for a shorter cycle time, lower weight of details (6-25%), reduction of internal stresses and fewer surface defects. The car manufacturer Mazda has also applied a solution based on a supercritical gas (CO2 or N2) in the polymer melt. According to the Core Back Expansion Molding technology, the material is injected into a mold and, once the foamed polymer has filled up the mould, its volume is increased by moving the back of the mould (Fig. 28). The weight of the door panel for the Mazda2 (Fig. 29) made with such technology is 20% lower, while its stiffness is 16% higher.
Foaming of biocomposites
Biocomposites used in the construction and automotive sectors are frequently called "artificial wood" because many of their properties and their appearance are wood-like (Matuana et al., 1998; Migneault et al., 2008; Bledzki et al., 2008). Unfortunately, the density of biocomposites, even if markedly lower than that of glass fiber reinforced composites, is still twice as high as the density of natural wood. That drawback can be reduced by foaming of biocomposites, which are then lighter and feel more like real wood (Rodrigue et al., 2006; Guo et al., 2004; Bledzki & Faruk, 2006; Kozlowski et al., 2010). The earliest known foamed and wood-filled thermoplastics were based on polystyrene (PS): this amorphous polymer is a perfect bubble catcher. Wood flour itself has been proved to be an efficient nucleating filler in polyethylene foamed with azodicarbonamide (Rodrigue et al., 2006). As far as the length of natural fibers is concerned, short fibers (75-125 µm) are favorable for foaming, since they do not disturb the cell growth process, as long fibers (4-25 mm) do. The selection of the polymer matrix is very important for the properties of biocomposites. Because cellulose fibers are polar, hydrophobic matrices (like polyolefins) need the addition of adhesion promoters in order to facilitate regular fiber distribution and efficient stress transfer across the composite during deformation in the molten state and during/after solidification.

Cellular biocomposites can be manufactured both by extrusion and by injection molding technology; however, extrusion foaming provides better results (Fig. 30), as it allows for a more precise process control. The study on foaming of biocomposites confirmed the findings formulated for unfilled polymers. A high drop between the melt pressure in the die and the ambient pressure is favourable for manufacturing foams with fine, regular cells (Fig. 31); however, if it is too high, it causes cell damage and foam collapse (Fig. 32). In general, foaming is more difficult due to the high melt viscosity of biocomposites and their low melt strength; however, the results of research reported in recent years on foaming of wood composites with chemical or physical blowing agents are promising. The technology of extrusion foaming seems to be fully controlled, and the manufactured profiles look like wood on the outside (Fig. 33) and in cross-section (Fig. 34). The possibility of foaming composites filled with cellulose fibers makes them ideal candidates for low weight and thermally insulating engineering materials in all transport modes (Fig. 35).

Acknowledgements
This work has been sponsored by the Polish structural grant POIG.01.03.01-00-018/08-00 under the Innovative Economy scheme. The Author wishes to thank Dr. A. Kozlowska and Dr. S. Frackowiak for their valuable help in selected measurements.
Battery Sizing for Mild P2 HEVs Considering the Battery Pack Thermal Limitations

Small capacity and passively cooled battery packs are widely used in mild hybrid electric vehicles (MHEV). In this regard, continuous usage of electric traction could cause thermal runaway of the battery, reducing its life and increasing the risk of fire incidence. Hence, thermal limitations on the battery could be implemented in a supervisory controller to avoid such risks. A vast literature on the topic shows that the problem of battery thermal runaway is solved by applying active cooling or by implementing penalty factors on electric energy utilization for large capacity battery packs. However, these approaches do not address the problem in the case of passively cooled, small capacity battery packs. In this paper, an experimentally validated electro-thermal model of the battery pack is integrated with the hybrid electric vehicle simulator. Supervisory controllers using the equivalent consumption minimization strategy with, and without, consideration of thermal limitations are discussed. The results of a simulation of an MHEV with a 0.9 kWh battery pack showed that the thermal limitations of the battery pack caused a 2-3% fuel consumption increase compared to the case without such limitations, which, however, led to battery temperatures as high as 180 °C. The same simulation showed that the adoption of a 1.8 kWh battery pack led to a fuel consumption reduction of 8-13% without thermal implications.

Introduction
Powertrain hybridization seems to be a viable mid-term solution for the reduction of vehicle fuel consumption and tailpipe emissions [1-4]. Hybrid Electric Vehicles (HEVs) can be of series, parallel or combined (series-parallel, power-split) configurations [1,2]. Parallel configurations offer a cost-effective solution due to: easier integration of the HEV modules in an existing powertrain; the presence of a single electric motor (EM); and a small electric battery (especially in mild HEVs) [2,3]. Among parallel HEV architectures, the so-called P2 configuration is gaining the utmost attention of vehicle manufacturers as well as researchers due to its scalability and an increased number of operating modes [2-4]. As shown in Figure 1, in a P2 configuration the electric machine (EM) is installed on the input shaft of the gearbox after the main clutch C0. Opening C0 allows for driving in pure electric mode as well as for an efficient regeneration of the braking energy. By adding the clutch C1, the EM can be used as a starter to crank the ICE as well as for gear shifting. Depending on the location of the EM, the P2 configuration can be on-axis or off-axis. The off-axis P2 HEVs have a stage of parallel axis gear, chain or belt drives (Figure 1), with advantages in terms of the axial size of the powertrain and the potential to have a smaller electric machine running at a higher speed than the ICE. Owing to the possibility of scaling the EM and the battery module, P2 is compatible with both mild and plug-in HEVs, the mild P2 being characterized by a relatively smaller EM power and battery capacity [5,6].
An analysis of commercially available MHEVs shows that they mostly have a battery capacity in the range of 0.5-2 kWh [6,7]. Fuel consumption reduction is the main design goal of mild HEVs, and this requires an integrated design of the powertrain components as well as of the control strategy to avoid jeopardizing the vehicle's dynamic performance [8,9]. A wide range of controllers has been proposed to this end in the literature. Rule-based supervisory controllers are the simplest to implement; however, they have limited performance when compared to optimization-based counterparts [10]. Optimization-based controllers like dynamic programming (DP) [9-11], the equivalent consumption minimization strategy (ECMS) [11-13], and model predictive control (MPC) [14] are more performant in finding a global minimum of fuel consumption. However, they need a higher computational effort and require prior knowledge of the driving cycle, and hence can mainly be used in offline control strategies [10,13]. Results comparable to the ECMS can be obtained by implementing fuzzy logic rules, which can become very complex in the case of a large number of control variables [15].

The ECMS was first proposed by [12] as a real-time controller offering a near-optimal solution without prior knowledge of the drive cycle [13,16]. Furthermore, it allows various system variables to be constrained by integrating them in a cost function by means of penalty functions. The main principle of the ECMS is to minimize the total equivalent fuel consumption at each time instant, including the contribution coming from the electrical energy to or from the battery. The optimization is constrained by the battery state of charge (SOC) and the maximum/minimum torque of the EM and ICE [12,15]. Additionally, it is well known that the battery is a safety-critical system that might cause a fire at high temperatures. Hence, most battery management systems (BMS) monitor the battery temperature in real time to operate at less than 55 °C [16-18]. A review of fire incidence associated with electric vehicles and HEVs can be found in [19]. The control of the battery temperature can be accomplished by systems of different complexity, from simple passive cooling (heat is conducted through the battery casing and removed by natural convection) to active cooling [17,18] (resorting to air or liquid as a coolant, or to other means such as phase change materials [20]).
The influence of battery temperature on the performance and fuel consumption of an HEV is discussed in [13], considering a 100 Ah lithium battery. Penalty factors for different combinations of ambient temperature and drive cycle are integrated in the cost function of the ECMS supervisor. A semi-empirical battery aging model is taken into account along with the temperature dynamics, showing that a cold ambient temperature causes frequent charging and discharging of the battery due to the reduction of battery capacity at low temperatures. The limitation of the electric traction due to high battery temperatures is not considered in [13]. Padovani et al. have studied how battery aging is affected by high or low temperatures and by high SOC [21]. A penalty for using the battery at high temperatures is included in the cost function. Considering a 7 kWh lithium-ion battery, an ambient temperature of 15 °C and the Artemis drive cycle, the battery temperature does not exceed 35 °C. The lower operating temperatures in this work can be attributed to the large battery size. Sarvaiya et al. [22] compared different control strategies incorporating a battery ageing model in the cost function. Considering the EPA driving cycle and a pure electric vehicle with a relatively large 9 kWh lithium-ion battery, high temperatures are not an issue, with the temperature reaching 32 °C from a 30 °C ambient temperature.

An accurate electro-thermal model of the battery pack is needed to study the influence of the battery temperature in an HEV. Battery heat generation induced by the electrical current can be divided into irreversible and reversible components [23,24]. The former is related to the Joule and polarization effects due to the current flowing through the internal resistance and the charge transfer. The latter is due to the entropy change, which is related to the temperature dependence of the open circuit voltage (OCV).
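As an illustration of this decomposition, the sketch below evaluates the two heat generation terms for a single cell using the widely used formulation Q = I²R for the irreversible part and Q_rev = I·T·dOCV/dT for the reversible (entropic) part; the numerical values of the internal resistance and of the entropic coefficient are placeholders, not parameters taken from this paper, and the sign convention of the reversible term varies in the literature.

```python
def cell_heat_generation(current_a, temperature_k, internal_resistance_ohm, docv_dt_v_per_k):
    """Split cell heat generation (W) into irreversible (Joule/polarization) and reversible (entropic) parts."""
    q_irreversible = current_a ** 2 * internal_resistance_ohm
    q_reversible = current_a * temperature_k * docv_dt_v_per_k  # sign follows the current direction
    return q_irreversible, q_reversible

# Placeholder parameters: 30 mOhm lumped resistance, -0.1 mV/K entropic coefficient, 20 A discharge.
q_irr, q_rev = cell_heat_generation(current_a=20.0, temperature_k=300.0,
                                    internal_resistance_ohm=0.030, docv_dt_v_per_k=-1e-4)
print(f"irreversible: {q_irr:.2f} W, reversible: {q_rev:.2f} W")
```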
Barcellona and Piegari in [25] propose a model that can predict the thermal behaviour of a pouch lithium-ion battery cell based on its current and ambient conditions. The model only takes the effect of the OCV and of the internal resistance into account; the resistive-capacitive (RC) parallel branches are not included in the battery model, as the thermal time constant greatly outweighs the electrical one. Madani et al. in [26] review the determination of the thermal parameters of a single cell, such as internal resistance, specific heat capacity, entropic heat coefficient, and thermal conductivity. These parameters are then used for the design of a suitable thermal management system. A lumped-parameter thermal model of a cylindrical LiFePO4/graphite lithium-ion battery is developed in [27] to compute the internal temperature.

Thermal management systems with active heating and cooling can be adopted for controlling the temperature in the battery pack [28-30]; however, the additional cost and complexity would not be affordable, especially in 48 V micro or mild hybrid electric vehicles. Conversely, 48 V micro or mild hybrids are potentially characterized by high temperature fluctuations, since their small battery capacity leads to large C-rates in boost and recuperation. The influence of battery temperature limitations on the fuel consumption of mild HEVs, and the study of the battery capacity above which temperature does not represent a criticality, seem to be still largely unattended in the vast literature on the topic. Additionally, most of the literature that addresses the energy management problem has adopted simplified analytical models to estimate the battery SOC and temperature. Such models can only be accurate in limited scenarios and hence could impose a limitation on model accuracy in a wider range of applications. Hence, the main contribution of the present work is to address the following points: (a) to extend the existing control strategies to include the thermal limitations in the case of a 48 V P2 mild HEV with passive cooling; (b) to validate the electro-thermal model of the battery by means of existing experimental data; and (c) to investigate the influence of the battery capacity on reaching the thermal limitations and determine the minimum battery capacity that avoids thermal runaway.

The paper is organized as follows. In Section 2, the backward models of the conventional vehicle and of the P2 HEV are developed. The model of the conventional vehicle is validated using the experimental results reported in [31,32]. Furthermore, the electro-thermal model of the battery and its experimental validation are presented in this section. Section 3 presents the supervisory controller based on the ECMS considering the thermal limitations of the battery. Section 4 reports the results of the simulation. Finally, Section 5 summarizes the main findings of the paper.

Vehicle Performance Modelling
The vehicle model shown in Figure 2 follows the backward approach widely adopted in the literature, especially for the design of the supervisory controller and of the HEV components [11,33,34]. A detailed description of each subsystem and the underlying equations is given in the following, with reference to the HEV data summarized in Table 1.
Longitudinal Vehicle Dynamics
In this subsystem, the required traction force to overcome the resisting forces (i.e., the aerodynamic and rolling resistance forces) is calculated from the vehicle mass M, the longitudinal speed v and acceleration dv/dt, the coefficient of rolling resistance f_r, the aerodynamic drag coefficient C_x, the air density ρ_air and the vehicle frontal area A_f; F_W is the resulting longitudinal force on the wheels. The resisting force due to road inclination is not considered in the analysis. The total torque required at the wheels, T_W, includes the drag torque in the axle bearings T_loss and the torque due to the inertia of the wheels J_W, which are added to the torque of the traction force F_W acting on the wheels. The wheel rolling radius R_W of the tire is calculated from the nominal wheel radius through the coefficient ε, which takes into account the tire deflection and for radial tires is in the range ε = 0.92-0.98 [35]. The nominal wheel radius can be obtained from the tire markings, i.e., the rim diameter D_rim, the nominal width W and the aspect ratio AR. Moreover, the angular speed of the wheel shaft is then found from the rolling radius and the speed v.

Gearbox and Gearshift Logic
The gearbox model converts the wheel torque and speed to the corresponding quantities at the input of the gearbox. A simple vehicle speed-based gearshift strategy is implemented in the model, accounting for the constraint of the ICE maximum speed. The gearshift speeds are defined by analyzing the gearshift experimental data from Argonne National Laboratory (ANL). Figure 3 shows the gearshift logic. To avoid frequent gear shifting, upshifts and downshifts are performed with a hysteresis that varies with the engaged gear.
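The wheel-level backward computation and the gearshift logic just described can be summarized in a few lines of code. The sketch below is a minimal illustration rather than the authors' implementation: since the display equations are not reproduced in this extraction, it assumes the standard textbook forms of the traction force and wheel torque, and the gearshift thresholds are hypothetical placeholders rather than the ANL-derived values.

```python
G = 9.81  # gravitational acceleration, m/s^2

def wheel_force(v, dvdt, M=1600.0, f_r=0.01, C_x=0.30, A_f=2.2, rho_air=1.225):
    """Traction force on the wheels: inertia + rolling resistance + aerodynamic drag (flat road)."""
    return M * dvdt + f_r * M * G + 0.5 * rho_air * C_x * A_f * v ** 2

def rolling_radius(D_rim_m=0.48, W_m=0.255, AR=0.50, eps=0.95):
    """Rolling radius from tire markings, reduced by the deflection coefficient eps."""
    return eps * (D_rim_m / 2.0 + W_m * AR)

def wheel_torque(F_w, domega_dt, R_w, T_loss=2.0, J_w=2.0):
    """Total wheel torque: traction torque plus bearing drag and wheel inertia contribution."""
    return F_w * R_w + T_loss + J_w * domega_dt

def shift_gear(current_gear, v_kmh, up=(20, 40, 65, 95, 125), hyst=8.0):
    """Speed-based gearshift with hysteresis (placeholder upshift speeds for a 6-speed gearbox)."""
    if current_gear < len(up) + 1 and v_kmh > up[current_gear - 1]:
        return current_gear + 1
    if current_gear > 1 and v_kmh < up[current_gear - 2] - hyst:
        return current_gear - 1
    return current_gear
```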
Electric Motor For a given angular speed and a mechanical torque request , the EM subsystem evaluates the electric power absorbed from the battery in discharge phases in charging mode: where the mechanical power is calculated as Electric Motor For a given angular speed ω em and a mechanical torque request T em , the EM subsystem evaluates the electric power absorbed from the battery in discharge phases P bat.dchg or released to it during the charging phases P bat.chg . This electric power is computed by means of the static efficiency map of the EM ( Figure 5): -in discharging mode: in charging mode: where the mechanical power P em is calculated as Appl. Sci. 2022, 11, x FOR PEER REVIEW 8 of 25 Figure 5. Static efficiency map of the electric motor and maximum torque characteristic lines. Battery Performance and Electro-Thermal Model The essence of incorporating a battery electro-thermal model in the vehicle model ( Figure 2) is to provide information to the ECMS controller for efficient fuel consumption minimization while preventing thermal runaway. At each time instant , the adopted ECMS controller requires information of the state of charge ( ), the temperature , and the rate of the change of the temperature . This information is the output of the battery pack electro-thermal model that takes the power request as input (from Equations (8) and (9). Considering the 14s6p battery configuration, the battery consists of a series module pack (SMP) composed of a series of fourteen (14) modules. Each of the modules consists of six (6) parallel cells, otherwise known as parallel cell modules (PCM). The pack model is developed from the electro-thermal model of a single cell according to the work of [37]. First, an enhanced self-correcting (ESC) single-cell model is developed to form the building block for the PCM that is scaled to obtain the SMP. The resulting model is validated with an experimental dataset at an ambient environmental temperature of 20 °C. The cell model takes into consideration both the static and the dynamic voltage characteristics as shown in Figure 6. A detailed description of each subsystem of the battery pack model is given below. The static voltage computed from the OCV block is integrated with the dynamic voltage from the dynamic process to obtain an ESC cell model. Battery Performance and Electro-Thermal Model The essence of incorporating a battery electro-thermal model in the vehicle model ( Figure 2) is to provide information to the ECMS controller for efficient fuel consumption minimization while preventing thermal runaway. At each time instant k, the adopted ECMS controller requires information of the state of charge SOC(k), the temperature θ, and the rate of the change of the temperature . θ. This information is the output of the battery pack electro-thermal model that takes the power request P bat as input (from Equations (8) and (9). Considering the 14s6p battery configuration, the battery consists of a series module pack (SMP) composed of a series of fourteen (14) modules. Each of the modules consists of six (6) parallel cells, otherwise known as parallel cell modules (PCM). The pack model is developed from the electro-thermal model of a single cell according to the work of [37]. First, an enhanced self-correcting (ESC) single-cell model is developed to form the building block for the PCM that is scaled to obtain the SMP. The resulting model is validated with an experimental dataset at an ambient environmental temperature of 20 • C. 
Battery Performance and Electro-Thermal Model
The purpose of incorporating a battery electro-thermal model in the vehicle model (Figure 2) is to provide information to the ECMS controller for efficient fuel consumption minimization while preventing thermal runaway. At each time instant k, the adopted ECMS controller requires the state of charge SOC(k), the temperature θ, and the rate of change of the temperature dθ/dt. This information is the output of the battery pack electro-thermal model, which takes the power request P_bat as input (from Equations (8) and (9)). Considering the 14s6p battery configuration, the battery consists of a series module pack (SMP) composed of a series of fourteen (14) modules. Each of the modules consists of six (6) parallel cells, otherwise known as a parallel cell module (PCM). The pack model is developed from the electro-thermal model of a single cell according to the work of [37]. First, an enhanced self-correcting (ESC) single-cell model is developed to form the building block for the PCM, which is then scaled to obtain the SMP. The resulting model is validated with an experimental dataset at an ambient temperature of 20 °C. The cell model takes into consideration both the static and the dynamic voltage characteristics, as shown in Figure 6. A detailed description of each subsystem of the battery pack model is given below. The static voltage computed from the OCV block is integrated with the dynamic voltage from the dynamic process to obtain the ESC cell model.

Static Component
The static component of the model represents the open-circuit voltage, which is the measured cell voltage when the cell is subject to a zero-load condition. The OCV is approximated as a function of SOC only for the temperature operating condition considered in this work. For the cell under consideration, the OCV is obtained from the datasheet of the Sanyo NCR 18650-618 lithium-ion cell [38], which has a rated capacity of 3200 mAh. Figure 7 shows the voltage variation across SOC under open and closed-circuit conditions. The middle blue curve shows the OCV as a function of SOC, as acquired from the datasheet. The red curve is the closed-circuit voltage at a constant charge load of 1.675 A. The yellow curve is the closed-circuit voltage at a constant discharge load of 0.66 A. It can be seen that the OCV taken from the datasheet approximates the true OCV as the mean of the closed-circuit voltages of the charge and discharge phases. The larger deviation of the charge phase results from the relatively higher load with respect to the discharge phase. The SOC is computed through coulomb counting and is analogous to a fuel gauge that expresses the amount of charge contained in the cell.
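A minimal coulomb-counting update, consistent with the description of Equation (10) given right after this sketch, can be written as follows; since the equation itself is not reproduced in this extraction, the standard discrete form is assumed, and the coulombic efficiency value is a placeholder (only the 3.2 Ah capacity comes from the cell datasheet cited above).

```python
def update_soc(soc, current_a, dt_s, capacity_ah=3.2, coulombic_eff=0.99):
    """One coulomb-counting step; the current is positive at discharge and eta penalizes the charge phase."""
    eta = 1.0 if current_a > 0 else coulombic_eff
    return soc - eta * current_a * dt_s / (capacity_ah * 3600.0)

# Example: one second of 10 A discharge starting from 60% SOC.
print(update_soc(0.60, 10.0, 1.0))
```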
The expression for the SOC at a given time step k is shown in Equation (10), where i(k) is the current input to the model (positive at discharge), C_tot (Ah) is the total releasable capacity in a complete cycle, η is the battery coulombic efficiency, and SOC(k) is the SOC at the kth time step. Only a percentage of the current applied to the cell contributes to increasing the SOC in the charging phase; hence, the SOC is penalized with η [30] in the charge phase.
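Equation (10) itself is not reproduced in the extracted text. The following is a minimal coulomb-counting sketch consistent with the definitions above; the discrete-update form, the sampling time T_s (in hours) and the exact placement of η are assumptions for illustration.

    def update_soc(soc_k, i_k, c_tot_ah, eta, t_s_h):
        """One coulomb-counting step; i_k > 0 discharges the cell."""
        if i_k >= 0.0:                     # discharge: the full current leaves the cell
            delta = i_k * t_s_h / c_tot_ah
        else:                              # charge: only the fraction eta is stored
            delta = eta * i_k * t_s_h / c_tot_ah
        return soc_k - delta

    # example: 3.2 Ah cell, 1 s sampling, 2 A discharge current
    soc_next = update_soc(0.95, 2.0, 3.2, 0.98, 1.0 / 3600.0)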
Dynamic Component

The dynamic component of the cell model accounts for the voltage losses, including the joule loss, the diffusion loss due to mass transfer, the activation loss due to charge transfer and the loss due to hysteresis. The output of the dynamic model is the predicted closed-circuit voltage that accounts for these losses. The model is developed from an RC circuit with a single RC branch in series with a resistive branch [37], as shown in Figure 8. The hyst component in the circuit accounts for the hysteretic contribution of the losses. The estimated dynamic voltage is computed according to Equation (12), where v_loss is the voltage loss; M_0 is the instantaneous hysteresis voltage; M is the dynamic hysteresis magnitude; R_0 is the instantaneous series resistance; i_Rj(k) is the diffusion-resistor current; R_j is the parallel branch resistance; h(k) is the hysteresis voltage; and s(k) is the sign of the current input, equal to 1 for a positive current input, −1 for a negative input and zero otherwise. The first two components on the right-hand side of Equation (12) model the instantaneous and the dynamic hysteresis voltage. The third component models the RC branch of the circuit, with the subscript j representing the number of branches; the RC branch models the losses due to mass transport and charge transfer, and the diffusion-resistor current i_Rj that passes through it is computed from the time constant of the RC branch. The fourth component is the series resistive loss that models the joule losses. The parameters of the model appear linearly according to Equation (13): v_loss is computed for the N data points of the experimental data, and the parameters M, M_0, R_j, R_0 are obtained by least-squares approximation as X = A \ Y. Table 2 reports the corresponding values of the electric and the thermal model parameters. The dynamic current profile used for data collection during the experiment is applied as input to the model to compute v_loss according to Equation (12). This profile consists of a sequence of random charge and discharge current values applied in the range of −4.5 A to +4.5 A. The current profile and v_loss are shown in Figure 9.
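Because the parameters enter Equation (13) linearly, the fit reduces to an ordinary least-squares problem, i.e., the NumPy analogue of X = A \ Y. A sketch of that step is given below; the column ordering of the regressor matrix and the names of the measured arrays are assumptions, since Equation (13) is not reproduced in the extracted text.

    import numpy as np

    def fit_esc_parameters(v_loss, h, s, i_r, i_cell):
        """Solve A.X = Y in the least-squares sense for X = [M, M0, Rj, R0]."""
        A = np.column_stack([h, s, i_r, i_cell])    # N x 4 regressor matrix
        Y = np.asarray(v_loss, dtype=float)         # N measured voltage losses
        X, *_ = np.linalg.lstsq(A, Y, rcond=None)   # least-squares solution
        M, M0, Rj, R0 = X
        return M, M0, Rj, R0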
The Battery Thermal Model

The thermal model is developed from the contributions of the voltage losses that are computed from the dynamic voltage model of Equation (12). Based on the lumped parameter assumption [39], the thermal model considers heat transfer by convection and neglects the conduction heat transfer component, according to Equation (14). The discrete-time battery surface temperature θ_surf is computed considering the power losses P_loss, derived from the voltage losses, as the heat source. Here c_p is the battery specific heat capacity; m_cell is the cell mass; P_loss is the power computed from the voltage losses; θ_amb is the ambient temperature; T_s is the sampling time; and R_conv is the thermal resistance by convection. The thermal parameters are reported in Table 2.

Development of Supervisory Control Strategy

The supervisory control strategy based on the Equivalent Consumption Minimization Strategy optimally splits the required traction torque T_gb between the traction sources while minimizing the objective function. In this section, three types of ECMS are demonstrated. The first type is the ECMS without considering the battery thermal limitations. The second type is an ECMS with an on/off switching of the electric traction when the battery temperature threshold is reached. Finally, in the third type, the thermal limitations are integrated by setting penalty functions on the temperature and on the temperature dynamics in the objective function.

ECMS Control Strategy without Thermal Limitations

To achieve an effective reduction in fuel usage, the contribution of the electric motor power should be defined in such a way that the engine operates in its higher efficiency zone. For this reason, the ECMS was applied to the MHEV backward model to determine the power split ratio between the engine and the EM. From the configuration of the MHEV system, the rotational speeds of the power sources are set by the vehicle velocity. Hence, the ECMS controller splits only the torque between the engine and the EM.

Objective function: The objective function for the ECMS is the overall equivalent fuel consumption J_ecms, which is evaluated as the sum of the engine fuel consumption ṁ_ice and the equivalent fuel consumption ṁ_eqv of the electric power usage for the power drawn from the battery P_bat (Equation (15)) [10,12]. The equivalent fuel consumption of the EM, ṁ_eqv(P_bat), can be computed as the fuel consumption of the ICE required to provide the same mechanical power generated by the electric motor [12]:

ṁ_eqv(P_bat) = P_em(T_em, ω_em) · s_eqv = T_em · ω_em · s_eqv (16)

where s_eqv is the battery equivalent fuel consumption factor, which depends on the battery discharging (P_bat > 0) or charging (P_bat < 0) mode (Equation (17)), with LHV the lower heating value of the fuel, η_ice the average efficiency of the ICE, η_em the average efficiency of the EM and η_inv the average efficiency of the inverter.
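The instantaneous minimization behind Equations (15)-(17) can be sketched as a simple search over candidate torque splits. This is an illustrative sketch only: the fuel-rate map, the equivalence factor s_eqv and the EM torque limits are passed in as black boxes with consistent units, and the gear ratios between the EM, the ICE and the gearbox input (U_pulley, i_gb) are omitted for brevity, since the exact expressions of Equations (15) and (17) are not reproduced in the extracted text.

    import numpy as np

    def ecms_torque_split(t_req, w_ice, w_em, mdot_ice, s_eqv,
                          t_em_min, t_em_max, n=201):
        """Return (T_ice, T_em) minimizing mdot_ice(T_ice, w_ice) + s_eqv*T_em*w_em."""
        best, best_cost = (t_req, 0.0), np.inf
        for t_em in np.linspace(t_em_min, t_em_max, n):   # candidate EM torques
            t_ice = t_req - t_em                          # engine supplies the rest
            cost = mdot_ice(t_ice, w_ice) + s_eqv * t_em * w_em
            if cost < best_cost:
                best_cost, best = cost, (t_ice, t_em)
        return best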
The mathematical model for the ICE fuel consumption rate ṁ_ice at a given operating point (T_ice, ω_ice) was obtained by steady-state curve fitting of the engine map data (see Figure 10). The regression coefficients of the polynomial function (see Equation (18)) were estimated accurately, since the goodness of this fit indicated R-square = 0.9961.

ṁ_ice(T_ice, ω_ice) = p00 + p10·ω_ice + p01·T_ice + p20·ω_ice² + p11·ω_ice·T_ice + p02·T_ice² + p21·ω_ice²·T_ice + p12·ω_ice·T_ice² + p03·T_ice³ (18)

Once the objective function is expressed as a function of the EM torque and the engine torque (the independent variables), constraints were formulated based on the operation limits and on satisfying the required torque demand of the vehicle.

Constraints: As shown in Equation (19), a necessary condition is that the sum of the torque contributions of the two sources at the output of the gearbox equals the torque demand at the same level. Moreover, the ICE and EM torques must be within the corresponding operating envelope (Equation (19)), where U_pulley is the gear ratio of the pulley that connects the EM to the input shaft of the gearbox and i_gb is the gear ratio of the engaged gears in the gearbox. By minimizing the objective function (Equation (15)) over the range described by the constraints (Equation (19)), the optimum value of (T_ice, T_em) can be estimated instantly, without considering the SOC level. When the SOC is too low, the controller must restrict the usage of the EM; on the other hand, when the SOC is near its upper threshold limit, the controller tends to use the EM power. Moreover, to accurately compare the fuel consumption of the HEV, results should be reported for the charge sustaining mode, where the SOC has the same level at the beginning and at the end of the driving cycle.

Charge sustaining mode: To maintain the battery SOC within the lower SOC_low and upper SOC_high limits, an s-shaped correction function PF_soc was introduced to penalize the equivalent fuel consumption of the battery [10]. If the SOC decreases below the threshold (0.6 in this case), the vehicle tends to restrict the usage of electric energy, hence PF_soc starts to rise. Otherwise, the correction factor is equal to 1, meaning that no limitation on utilizing the electric power is applied.
In terms of an equation, the above condition is represented by Equation (20), where the normalized state of charge is computed as in Equation (21) [10], with SOC_high = 0.80 and SOC_low = 0.60 used as the upper and lower threshold limits for the SOC of the battery. The coefficients in Equation (20) were chosen based on multiple trials to obtain the best fuel consumption. The plot in Figure 11 shows the variation of PF_soc as a function of SOC. The objective function in Equation (15) is then modified with the penalization for low SOC values as in Equation (22). By minimizing this objective function, the torque split is performed to minimize the fuel consumption without considering the thermal limitations of the battery. However, as was mentioned, the optimal operating temperature range for lithium-ion batteries is between 15 and 55 °C; otherwise, their safety, power and life are impacted [17,18,20]. Therefore, a temperature limitation will be implemented in the ECMS controller in the next step.
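The exact coefficients of the s-shaped correction of Equations (20) and (21) are not reproduced in the extracted text. The sketch below therefore uses a generic tanh-based s-shape together with the stated thresholds (SOC_low = 0.60, SOC_high = 0.80) purely to illustrate how such a penalty multiplies the equivalent fuel term in the modified objective; the shape and steepness constant are assumptions.

    import numpy as np

    SOC_LOW, SOC_HIGH = 0.60, 0.80

    def pf_soc(soc, steepness=20.0):
        """S-shaped penalty: close to 1 above SOC_LOW, rising sharply below it."""
        soc_norm = (soc - SOC_LOW) / (SOC_HIGH - SOC_LOW)
        return 1.0 + 2.0 * (1.0 - np.tanh(steepness * soc_norm))

    def penalized_equivalent_fuel(mdot_eqv, soc):
        # penalization of the electric branch, as in the modified objective (Eq. (22))
        return pf_soc(soc) * mdot_eqv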
ECMS Control Strategy with Thermal Limitations Using Temperature Threshold

To avoid thermal runaway of the battery, a limitation can be applied by setting a threshold of 55 °C on the battery temperature θ_bat. Over this temperature, the usage of electric power is limited and the required power is provided mainly by the ICE. This is implemented by setting the penalty factor to a huge number when the battery temperature is over the given threshold, as shown in Equation (23). As can be seen, if the battery temperature is θ_bat > 55 °C, the penalty factor is set to a very high number, i.e., to 1000, regardless of the battery SOC. The penalty factor PF_soc in Equation (22) is then replaced with the penalty factor PF^θ_soc, which considers the thermal condition of the battery as well. By implementing such a limitation, the penalty factor works as an on/off switch triggered by the temperature threshold, and battery operation outside of the designed operating temperature range can be avoided. However, this limitation does not account for the rate of temperature change, which might result in a temperature overshoot due to a previous power drain from the battery. In the next step, the temperature change rate is therefore also limited.

ECMS Control Strategy with Thermal Limitations Using Penalty Function on Temperature and Temperature Change Rate

To avoid thermal shocks in the battery, the penalty factor is represented as a cumulative product of the penalty factors on the temperature PF_θ, on the temperature change rate PF_θ̇ and on the SOC PF_soc, which is applied to the equivalent fuel consumption ṁ_eqv of the electric motor. The penalty function on temperature PF_θ is defined in terms of a normalized temperature, calculated as a function of the upper θ_high and lower θ_low temperature limits [10], with θ_low = 10 °C and θ_high = 60 °C. The function PF_θ is plotted in Figure 12a; it has a value of about 1 when the battery temperature is lower than 40 °C. As the battery temperature increases beyond 40 °C, the correction factor PF_θ takes higher values, so the usage of the EM power is limited. The penalty on the temperature change rate PF_θ̇ is used to avoid a high temperature change rate, which could induce thermal shock on the battery; it is defined in terms of a normalized temperature change rate, a function of the upper θ̇_high and lower θ̇_low temperature change limits (Equation (28)) [10], where the upper and lower limits for the rate of temperature change are θ̇_low = 0 °C/s and θ̇_high = 6 °C/s. The graphical representation of PF_θ̇ is given in Figure 12b.
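The functional forms of the temperature penalties are not reproduced in the extracted text, so the sketch below mirrors the SOC penalty with generic saturating shapes and the stated limits (θ_low = 10 °C, θ_high = 60 °C, θ̇_low = 0 °C/s, θ̇_high = 6 °C/s); the shapes and the steepness constants are assumptions, chosen only so that the penalties stay near 1 for a cool, slowly heating battery.

    import numpy as np

    def pf_temperature(theta, theta_low=10.0, theta_high=60.0, steepness=8.0):
        """Close to 1 for cool batteries, growing as theta approaches theta_high."""
        theta_norm = (theta - theta_low) / (theta_high - theta_low)
        return 1.0 + np.exp(steepness * (theta_norm - 1.0))

    def pf_temperature_rate(theta_dot, rate_low=0.0, rate_high=6.0, steepness=8.0):
        """Close to 1 for slow temperature changes, growing near rate_high."""
        rate_norm = (theta_dot - rate_low) / (rate_high - rate_low)
        return 1.0 + np.exp(steepness * (rate_norm - 1.0))

    def total_penalty(soc_penalty, theta, theta_dot):
        # cumulative product applied to the equivalent fuel term, as described above
        return soc_penalty * pf_temperature(theta) * pf_temperature_rate(theta_dot)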
Results and Experimental Validation of the Developed Model

An intermediate step of the HEV modelling was to validate the conventional vehicle model by comparing it to experimental data. For this purpose, experimental data from ANL [31] for a conventional vehicle were compared with the results of the developed model simulation. Furthermore, the battery electro-thermal model was validated experimentally using a test bench developed in-house. The simulation model of the mild P2 HEV is discussed afterwards in this section.

Experimental Validation of Conventional Vehicle Model

The conventional vehicle model was obtained by disabling the ECMS controller, the Electric Machine and the Battery Electro-Thermal model subsystems. This vehicle model was designed as a backward model developed in the Matlab/Simulink environment using all the characteristics and parameters of the selected vehicle. The model estimates the fuel consumption of the vehicle by computing the required ICE speed (ω_ice) and torque (T_ice) for a given driving cycle. To validate the model, experimental test data of the powertrain components and of the vehicle from ANL were used. The data include measurements of the vehicle speed (v), the ICE torque (T_ice) and its angular speed (ω_ice), the selected gear (i_gr), and the instantaneous fuel consumption (ṁ_ice) profile. A Mazda CX9 2016 SUV was selected for the analysis; the complete data of the ICE map and the vehicle parameters are available in [32]. For the validation, the simulated values of the torque, the angular speed and the fuel consumption rate of the ICE, and the engaged gear of the gearbox, were compared to those from the experiments reported by ANL. As shown in Figure 13, the input is the speed profile of the UDDS driving cycle. The gearshift in the model is defined on the basis of the vehicle velocity, and the resulting gear ratio almost overlaps with the ANL experimental data (Figure 13b). Moreover, the simulated values of the ICE angular speed (Figure 13c) and the torque (Figure 13d) reported at the output of the gearbox show a good match with the experimental data. For the calculation of the tire radius (Equation (3)), the coefficient ε was chosen to be equal to 0.95. The simulated and experimental values for the fuel rate (Figure 13e) show slight differences, because transient fuel consumption is not taken into account; Figure 13e visualizes this, with the differences located mainly in the transient phases, which were not modelled. For the total fuel consumed during the UDDS cycle, a simulated value of 699 g vs. an experimental value of 739 g was obtained, corresponding to a difference of about 5%. However, if the transient fuel consumption behaviour of the engine is excluded, the difference is within just 2% (713 g), while the remaining 3% of the difference is due to transient phenomena not included in the developed model, as mentioned above.
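The cycle totals quoted above come from integrating the instantaneous fuel rate over the driving cycle. A minimal sketch of that comparison step is shown below; the variable names and the assumption of a fixed 1 s sampling interval are illustrative.

    import numpy as np

    def total_fuel_g(mdot_gps, t_s=1.0):
        """Total fuel (g) from a fuel-rate profile (g/s) sampled every t_s seconds."""
        return float(np.sum(np.asarray(mdot_gps, dtype=float)) * t_s)  # rectangular rule

    # relative difference between experiment and simulation over a cycle
    # diff_pct = 100.0 * (total_fuel_g(mdot_exp) - total_fuel_g(mdot_sim)) / total_fuel_g(mdot_exp)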
Electro-Thermal Model Validation

The electro-thermal model was experimentally validated using the test bench described in Figure 14. The test bench consisted of six cells connected in series, with the voltage across the cells measured with Elithion cell boards. An LM35 Texas Instruments temperature sensor was used to measure the cell surface temperature. An Elithion (Lithiumate) Battery Management System (BMS) was installed on the test bench to enhance the safety of the acquisition process. An Arduino Mega board connected via LAN to a dedicated PC was then used to acquire the measured data. As an extra safety measure, the system was equipped with an emergency stop device.
The experimental data, consisting of a dynamic current (A) input and battery surface temperature (°C) and voltage (V) outputs, were collected at room temperature from an NRC18650G-HOANA cell of 3200 mAh rated capacity. The dynamic current input was in the range of ±4.5 A for both the charge and discharge phases. With the experiment conducted at ambient temperature, the estimated and the measured voltages were compared as in Figure 15b. The thermal model was validated in the same way, and the result is shown in Figure 15c.

Parallel Cell Module (PCM)

With the single-cell model designed and validated, the model forms a building block for developing the PCM and the SMP for further integration in a complete vehicle model. A battery PCM consists of cells that are connected in parallel, while the SMP consists of battery modules that are connected in series to make a battery pack; a 14s6p battery pack configuration entails 14 SMP and six PCM. The parallel cell configuration is often useful for increasing the energy capacity of the battery pack, especially for various high-energy applications. The current through each cell in a module, i_j, can be computed from Equation (29) if the module voltage v can be derived. By Kirchhoff's law, the sum of all the individual currents that pass through a module is equal to the pack current; also, the voltages at the terminals of all the cells in the module are equal. The cell voltage has two contributions, the instantaneous voltage that changes instantly with the current and the non-instantaneous one, and the series resistance R_0 of the joule loss is modified to accommodate the cell terminal losses. The sum of the currents through the individual cells in module i can be computed from Equation (29), where p is the number of parallel branches in the module and j is the branch index. By simultaneous computation of Equations (29) and (30), the current through each cell in the module and the voltage can be derived. The modules are connected in series to develop an SMP. To design a 48 V, 0.9 kWh battery pack, a 14s6p configuration was used. The temperature distribution within the battery pack was evaluated on the basis of the variation of the SOC and of the capacity of the cells within the battery pack, and the battery pack was simulated to show these variations. Figure 16a shows the variation of temperature when the initial SOC varies from 0.85 to 0.95, assuming an equal total capacity of 3.0 Ah for the individual cells. Figure 16b shows the variation of temperature when the cell capacity varies from 2.7 Ah to 3.0 Ah, assuming an equal initial SOC of 0.95 for all cells. Based on the results of the simulation shown in Figure 16, the maximum temperature of the battery pack is in the range of 60 °C to 70 °C at the end of the tests. The temperature variation is larger for varying SOC than for varying capacity.
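The per-cell currents described around Equations (29) and (30) follow from two facts: the branch currents sum to the pack current and all cells in a module share the same terminal voltage. A small linear-solve sketch of that computation is given below; it treats each cell as an open-circuit voltage behind a single series resistance, which is a simplification of the full ESC model and is used here only for illustration.

    import numpy as np

    def pcm_cell_currents(i_pack, ocv, r0):
        """Split a module current among p parallel cells.

        ocv, r0: per-cell open-circuit voltages and series resistances.
        Uses i_j = (ocv_j - v) / r0_j together with sum_j i_j = i_pack.
        Returns (per-cell currents, common terminal voltage), discharge positive.
        """
        ocv = np.asarray(ocv, dtype=float)
        g = 1.0 / np.asarray(r0, dtype=float)          # branch conductances
        v = (np.sum(g * ocv) - i_pack) / np.sum(g)     # common terminal voltage
        return (ocv - v) * g, v

    # example: six parallel cells with slightly different OCVs, 12 A module discharge
    i_cells, v_mod = pcm_cell_currents(12.0, [4.05, 4.04, 4.06, 4.05, 4.03, 4.05], [0.03] * 6)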
Simulation Results for P2 HEV

Once the simulation model was validated experimentally, simulations were performed with the complete P2 HEV model (with the 14s6p battery pack) on different driving cycles (UDDS, NEDC and WLTC). As an example, the results for the UDDS driving cycle with the three levels of limitations are presented in Figures 17-19. The figures represent (a) the ICE torque, (b) the EM torque, (c) the battery SOC, (d) the fuel consumption rate and (e) the speed profile on the UDDS driving cycle. Note that here the results are shown for the case where the C0 clutch is engaged during the deceleration phase, which means that regenerative braking is affected by the resistive torque of the engine. In all the cases, the SOC at the end of the driving cycle was maintained at its initial level; for this reason, the initial SOC was varied in the different cases to ensure the electrical energy equilibrium. The overall fuel efficiency was improved by 8.6% compared to the conventional vehicle simulation result (699.9 g) in the case where no thermal limitations are applied (Figure 17); however, the temperature of the battery reached about 160 °C (see Figure 19a). In Figure 18, results are described for the ECMS with the thermal limitation implemented by means of an on/off penalty factor (discussed in Equation (22)). In this case, the overall fuel consumption was reduced by only 3.8% compared to the conventional vehicle model (699.9 g), since the traction power contribution by the EM is possible only if the battery temperature is below 55 °C (see Figure 19a). With the penalty factors on both the temperature PF_θ and the temperature change rate PF_θ̇ (discussed in Equations (24)-(27)), the battery temperature was kept below 55 °C (see Figure 20a), and the overall fuel usage was improved by 4.8% with respect to the conventional vehicle model, as in Figure 19.
The temperature profiles of the battery for the UDDS, NEDC and WLTC driving cycles are given in Figure 20. For the case with no thermal limitations (blue, solid lines), the more aggressive cycles such as UDDS and WLTC have a higher power demand; therefore, the temperature of the battery went up to 160 °C and 180 °C (Figure 20a,c), respectively. The temperature for the low power demanding NEDC cycle stayed within 85 °C (Figure 20b). The case with the limitation on both the temperature and the temperature change rate (yellow, dotted line) showed a more stable battery temperature profile compared to the on/off temperature limitation (orange, dashed line) strategy. The reason is the introduction of the penalty factor on the temperature change rate PF_θ̇, which limits rapid changes in the battery temperature. Moreover, in the three driving cycles (UDDS, NEDC and WLTC) this limitation led to fuel consumption improvements of 1%, 0.3% and 0.7%, respectively, compared to the on/off strategy. As can be seen, the introduction of the thermal limitation positively affects the thermal behaviour and helps to avoid thermal runaway. However, the fuel economy is negatively affected, as the use of electric traction is limited as well. Therefore, it is reasonable to define the minimum capacity of the battery that allows thermal runaway to be avoided without introducing the thermal limitation in the control strategy.

Increased Battery Size to Avoid Temperature Limits (14s12p)

To reduce the fuel consumption of the vehicle while keeping the thermal condition of the battery within the optimal range, the battery pack capacity was doubled. Since an increase in the number of parallel cells results in a decrease of the current per cell, this should positively affect the battery thermal behavior. Therefore, the number of parallel cells was gradually increased (using integer numbers only) from the 14s6p configuration to obtain a 14s12p configuration. The profiles of the different variables over the WLTC cycle are shown in Figure 21.
The results show (Figure 22) that, starting from a 14s12p configuration, the battery temperature stays within the optimal range even in the most aggressive cycle considered in the study. The maximum temperatures for this configuration are 50 °C, 37 °C and 61 °C in the UDDS, NEDC and WLTC cycles, respectively. The fuel consumption and its percentage reduction (in brackets) relative to the conventional vehicle are as follows: 611.4 g (−12.6%), 544 g (−8.2%) and 1185 g (−9.4%) for the UDDS, NEDC and WLTC driving cycles, respectively.

Figure 22. The temperature of the 14s12p battery pack on different driving cycles.

Table 3 summarizes the simulation results for the P2 mild HEV with a 48 V, 0.9 kWh (14s6p) battery pack. It includes the overall fuel consumption, the percentage of reduction with reference to the conventional vehicle and the maximum temperature over the corresponding cycle. Furthermore, the results for the 14s12p battery pack with increased capacity (1.8 kWh) and without thermal limitations implemented in the supervisory controller are reported. The considered battery packs have a passive cooling system; hence, the heat transfer is only by natural convection. Therefore, thermal limitations must be applied at the controller level for the 14s6p battery pack to limit the battery usage and to avoid thermal runaway. However, the implementation of these limitations in the controller leads to a fuel consumption increase of between 2 and 3%. To achieve a reduction in fuel consumption without compromising the thermal behaviour, a larger battery capacity can be considered. The minimum battery capacity would be 1.8 kWh, which corresponds to a 14s12p battery pack configuration; in this case, the fuel consumption is reduced by between 8 and 13% with reference to the fuel consumption of the conventional vehicle.

Discussion and Conclusions

An increase in the ambient temperature would, of course, shift the minimum battery capacity towards higher values. However, the analysis was conducted considering a constant ambient temperature of 20 °C. The influence of different ambient temperatures has not been validated in this work and can be addressed in future work.
Copper(I)-Catalyzed [ 3+ 2] Cycloaddition of 3-Azidoquinoline-2,4(1H,3H)-diones with Terminal Alkynes † 3-Azidoquinoline-2,4(1H,3H)-diones 1, which are readily available from 4-hydroxyquinolin-2(1H)-ones 4 via 3-chloroquinoline-2,4(1H,3H)-diones 5, afford, in copper(I)-catalyzed [3 + 2] cycloaddition reaction with terminal acetylenes, 1,4-disubstituted 1,2,3-triazoles 3 in moderate to excellent yields. The structures of compounds 3 were confirmed by 1H and 13C-NMR spectroscopy, combustion analyses and mass spectrometry. 3-Azidoquinoline-2,4(1H,3H)-diones 1 were examined as partners in copper(I) catalyzed [3 + 2] cycloaddition (Scheme 2). Three different terminal acetylenes 2 were chosen; phenylacetylene (2a), propargyl alcohol (2b) and 3-ethynylaniline (2c). When screening for the optimal reaction conditions, we initially tested the most commonly used system, copper(II) sulphate pentahydrate and ascorbic acid as a source of copper(I) in tert-BuOH/H 2 O as a solvent [6]. Interestingly, no reaction could be detected by thin-layer chromatography (TLC) analysis after 24 h and the starting azides 1 were recovered nearly quantitatively from the reaction mixtures. Similarly unsuccessful were attempts to use a combination of copper(II) acetate and elemental copper in acetonitrile. We assumed that the prime reasons for the failure of these reactions were the extremely low solubilities of azides 1 in the reaction media used. Similar difficulties were previously encountered by some of us in attempts at using sparingly soluble propargyl functionalized diazenecarboxamides [11] or azido-appended platinum(II) complexes [12] as click components. In those instances the use of dimethyl sulfoxide (DMSO) as a reaction solvent and a combination of copper(II) sulphate pentahydrate and elemental copper (CuSO 4 /Cu (0) ) provided results that were superior to other combinations. Conducting the cycloadditions between azides 1 and acetylenes 2 in DMSO, in the presence of CuSO 4 /Cu (0) couple afforded the expected 1,4-disubstituted 1,2,3-triazoles 3 in moderate to excellent yields, as shown in Table 1. The structures of triazoles 3 were confirmed by 1 H-and 13 C-NMR spectroscopy, combustion analyses and mass spectrometry on the crystallized compounds. In one instance, that of 3Cb, the expected 1,4-regiochemistry at the 1,2,3-triazole ring was confirmed by a NOESY experiment. As demonstrated in Figure 1, the triazole hydrogen atom (H triazole ) displays five NOE cross peaks; three to the propyl group bound to C3 of the quinolinedione core and two to the hydroxymethyl group, attached to C4' of the triazole ring. The most important for the assigned regiochemistry is the cross peak of H triazole to the C3-CH 2 protons, which would not be possible for the isomeric 1,5-disubstituted product. The absence of a cross peak between the hydroxymethyl group and C3-CH 2 further corroborates the structure of 3Cb. In one instance, that of 3-phenyl-3-(4-phenyl-1H-1,2,3-triazol-1-yl)quinoline-2,4(1H,3H)-dione (3Aa), the preparation of 3-azido-3-phenylsquinoline-2,4(1H,3H)-dione (1A) and its cycloaddition with phenylacetylene were conducted as a one-pot multicomponent reaction, i.e., by mixing the corresponding 3-bromoquinolinedione substrate 6A, sodium azide and alkyne (2a) in the presence of Cu (II) /Cu (0) (Experimental section). Similar one-pot azidation-cycloaddition procedures are described in the literature [13,14] This protocol afforded the desired product (3Aa) in modest 43% yield. 
General Reagents and solvents were commercially sourced (Fluka, Aldrich, Alfa Aesar) and used as purchased. Granular copper (particle size 0.2-0.7 mm), coating quality (99.9%, Fluka #61144) was used. For column chromatography, Fluka Silica gel 60, 220-440 mesh was used. The course of separation and also the purity of substances were monitored by TLC on Alugram® SIL G/UV254 foils (Macherey-Nagel). NMR spectra were recorded at 302 K on a Bruker Avance DPX 300 spectrometer operating at 300 MHz ( 1 H) and 75 MHz ( 13 C), and Bruker Avance III 500 MHz NMR instrument operating at 500 MHz ( 1 H) and 125 MHz ( 13 C). Proton spectra were referenced to TMS as internal standard. Carbon chemical shifts were determined relative to the 13 C signal of DMSO-d 6 (39.5 ppm). Chemical shifts are given on the δ scale (ppm). Coupling constants (J) are given in Hz. Multiplicities are indicated as follows: s (singlet), d (doublet), t (triplet), q (quartet), m (multiplet), or br (broadened). Phase sensitive NOESY with gradient pulses in mixing time, of 3Cb, was recorded in DMSO-d 6 (c = 21 mM) using standard pulse sequence from the Bruker pulse library (noesygpphpp in the Bruker software) at 296 K, with mixing time of 300 ms and relaxation delay of 2 s. Mass spectra and high-resolution mass spectra were obtained with a VG-Analytical AutospecQ instrument and Q-TOF Premier instrument. Data are reported as m/z (relative intensity). The IR spectra were recorded on a Perkin-Elmer 421 and 1310 and Mattson 3000 spectrophotometers using samples in potassium bromide disks. Elemental analyses (C, H, N) were performed with FlashEA1112 Automatic Elemental Analyzer (Thermo Fisher Scientific Inc.). The melting points were determined on a Kofler block or Gallenkamp apparatus and are uncorrected. Starting compounds 1, 4-6 were prepared by known procedures as shown in Scheme 3 and described below. Scheme 3. Preparation of starting compounds 4-6. For key to substituents, please see Table 1. 4-Hydroxy-6-methoxy-3-propylquinolin-2(1H)-one (4C) A mixture of 4-methoxyaniline (6.16 g, 50 mmol) and diethyl propylmalonate (10.52 g, 52 mmol) was heated on a metal bath at 220-230 °C for 1 h and then at 260-280 °C for 3 h (the reaction was complete when the distillation of ethanol stopped). After cooling, the solid product was crushed, suspended in aqueous sodium hydroxide solution (0.5 M, 125 mL) and after filtration the filtrate was washed with toluene (3 × 20 mL). The aqueous phase was filtered and acidified by concentrated hydrochloric acid. The precipitated crude product was filtered, washed with water, air dried and crystallized from ethanol affording white solid of 4C, yield 5. The precipitated solid was filtered, washed with water, dried on the air and crystallized from benzene (1.3 L) affording compound 5D. The mother liquor was concentrated in vacuo to approximately 380 mL. The precipitated solid was filtered and recrystallized from benzene to give compound 5F. General Procedure for the Preparation of 1,2,3-Triazoles 3 A mixture of 3-azidoquinoline-2,4(1H,3H)-dione (1, 2.00 mmol), terminal alkyne (2, 2.02 mmol), CuSO 4 ⋅5H 2 O (0.2 mmol, 10 mol%), granular copper (8.8 mmol), and DMSO (6 mL) was stirred at room temperature in darkness until the starting compound 1 became undetectable by TLC (The reaction times are indicated in Table 1). Then the reaction mixture was diluted with CH 2 Cl 2 (160-250 mL) and filtered. 
The filtrate was washed with saturated aqueous NH4Cl (3 × 80 mL) until the aqueous layer remained colourless (concentrated aqueous ammonia (0.25 mL) was added to the saturated aqueous NH4Cl for the isolation of 3Ac). Each time the product was back-extracted from the water layer with a few millilitres of CH2Cl2. The combined organic layers were briefly dried over Na2SO4 and filtered, and the solvents were evaporated in vacuo. Residual DMSO was removed by several consecutive co-distillations in vacuo with toluene and then ethanol. The product was suspended in boiling cyclohexane (20 mL), cooled down to room temperature, filtered and dried to give the corresponding triazole 3. For analyses, the products were crystallized from the solvent indicated below. Reaction times along with the yields of the crude and crystallized products are indicated in Table 1.
Event-Triggered Consensus Control for Leader-Following Multiagent Systems Using Output Feedback The event-triggered consensus control for leader-following multiagent systems subjected to external disturbances is investigated, by using the output feedback. In particular, a novel distributed event-triggered protocol is proposed by adopting dynamic observers to estimate the internal state information based on the measurable output signal. It is shown that under the developed observer-based event-triggered protocol, multiple agents will reach consensus with the desired disturbance attenuation ability and meanwhile exhibit no Zeno behaviors. Finally, a simulation is presented to verify the obtained results. Introduction Consensus is a basic problem in the cooperative control of multiagent systems [1][2][3][4][5], which is generally realized through the behavior-based method or the leader-following approach.To be specific, the leader-following consensus problem has been studied in [6][7][8][9][10][11][12] from different perspectives, where all the following agents can reach consensus by tracking a real or virtual leader based on local interactions.However, all the work mentioned above requires that the control system is implemented in a continuous or timesampled triggered way, which would consume some unnecessary energy and computing resources in applications. To reduce the resource consumption, the event-triggered scheme has been applied to the consensus control problem.In particular, the event-triggered consensus of leaderfollowing multiagent systems was studied in [13][14][15][16], where the dynamics of agents was restricted to single-or doubleintegrator.Furthermore, the general linear multiagent system was considered in [17][18][19][20][21].In [18], a distributed eventtriggered strategy was proposed with state-dependent threshold so that the following agents could asymptotically track the leader without continuous communication.But it was assumed that all the following agents were aware of state information of the leader.In [19], the distributed, centralized, and clustered event-triggered schemes were proposed for different network topologies, which could reduce the frequency of controller updates.In [21], an existing continuous control law was extended with a novel event-triggered condition, and it was proved that the system remained the desired performance with much lower controller updating frequency.In the literatures mentioned above, the consensus protocols and event-triggered conditions are both designed using the internal state information that is very hard or even impossible to be accurately obtained in real systems [22].In addition, the external disturbance always exists in realistic situations, whose influence also has to be taken into account. 
In this paper, the output-feedback event-triggered consensus is investigated for leader-following multiagent systems with high-order linear dynamics, subject to external disturbances.A novel distributed control protocol is proposed with an observer form, by using the local output information.Then, sufficient conditions are derived to guarantee that the system can reach consensus asymptotically with the desired disturbance attenuation ability but without Zeno behavior.The contributions of our research are as follows.The difficulties in directly obtaining full states is overcome by designing a local state observer, whose output is used to generate the event-triggered consensus protocol.Besides, in the developed scheme, the consensus algorithm, the state observer, and the event-triggering monitor are all implemented in a distributed and asynchronous way, which is applicable in practice and can reduce the controller updating frequency. The rest of the paper is arranged as follows.Section 2 gives the preliminaries on graph theory and the problem formulation.In Section 3, the event-triggered scheme is proposed with a state observer, in which the internal states of each agent are estimated by using its output information.And it is proved that the desired consensus performance can be realized with no Zeno behavior.A simulation is given in Section 4, and Section 5 concludes the whole paper. Preliminaries and Problem Formulation 2.1.Preliminaries.In a leader-following multiagent system, the communication topology among following agents is described by a graph G = V , ℰ, A .Assume that there are n agents and let N = 1, … , n .Then, V = v 1 , v 2 , … , v n denotes the set of nodes with node v i standing for the ith agent, and ℰ ⊆ V × V denotes the set of edges, in which the edge v j , v i means information is transferred from agents j to i. Besides, if v j , v i ∈ ℰ, node v j is called a neighbor of v i .A = a ij is named the adjacency matrix with a ij ≥ 0, where the positive element a ij is the weighing factor of edge v j , v i .On this basis, the Laplacian matrix of G can be defined as ℒ = D − A, in which D == diag d 1 , … , d n is called the degree matrix of G with d i = ∑ n j=1 a ij .On the other hand, for the leading agent represented by node v 0 , the edge v 0 , v i / v i , v 0 represents that the information is exchanged between the ith agent and the leader, whose weighing factor is denoted by a 0i = a i0 .In other words, the edges between following agents and the leader are bidirectional.Note that there is no control input for the leader, and the information from its neighboring agents is just used to determine the triggering time.To summarize, let ℰ be the edge set related to n agents and one leader, then the set of neighbors of node v i is The interaction matrix of the leader-following system is defined as H = L + Λ, whereΛ = diag a 10 , … , a n0 . To realize the consensus control of leader-following multiagent systems, the assumption on interaction graphs is made as follows [7]. Assumption 1.It is supposed that the interaction graph of the leader-follower system with directed communication has a spanning tree with the leader as root, or at least one agent in each connected component of G is connected to the leader for the undirected case. Problem Formulation. 
Consider a leader-following multiagent system with n followers and one leader.The ith follower is modeled by a linear dynamic system with external disturbances: where The dynamics of the leader is where x 0 t ∈ ℝ m and y 0 t ∈ ℝ p denote the state and output of the leader, respectively.Without loss of generality, it is assumed that (A, B) is stabilizable, (A, C) is observable, and C is of full row rank. Protocol u i t is said to solve the consensus problem if and only if the following equality is satisfied: However, the accurate consensus is hard to reach when there exist external disturbances.So the following controlled output is defined to measure the disagreement of agent i to the leader agent: Let Obviously, if z t = 0, then z i t = 0 for any i ∈ N and equivalently (3) holds.Therefore, the H ∞ norm of the transfer function matrix from ω t to z t , denoted by T zω s , can quantitatively measure the attenuation ability of the multiagent system against external disturbances.Combining with the definition of H ∞ norm, the control objective is to make where γ > 0 is the given H ∞ performance index [23].When (7) is satisfied for the closed-loop system, we say consensus is realized with the disturbance attenuation ability γ. Protocol Design and Consensus Analysis 3.1.Output-Feedback Event-Triggered Protocol.To realize the consensus control using output information, state observers are first designed for the following agents and the leader as 2 Complexity in which x i t ∈ R m and x 0 t ∈ R m are, respectively, the estimated states of the ith agent and the leader.Combining with observers ( 8) and ( 9), the event-triggered protocol is developed as where t i k denotes the kth triggering time of the ith agent determined by the event-triggered condition Remark 1. Totally, ( 8), ( 9), (10), and ( 11) form the outputfeedback event-triggered protocol, which is implemented in a distributed and asynchronous way.Notice that there is an item e At in the definition of e i t that can be regarded as a predictive factor.By this, the trigger frequency can be reduced, and the redundant triggers after achieving consensus can be also avoided.Similarly, to decrease the effect of sampling error to the control effect, the predictive factor is also used in the designed protocol u i t . Model Transformation. 
For analyzing the consensus performance, the nonzero consensus trajectory of the closed-loop multiagent system is first converted to the origin by model transformations. Define the observing error as h_i(t) = x̂_i(t) − x_i(t), i = 0, …, n, and let x̃_i(t) = x̂_i(t) − x̂_0(t), h̃_i(t) = h_i(t) − h_0(t), and ẽ_i(t) = e_i(t) − e_0(t). By (8), (9), and (10), the closed-loop dynamics of the transformed variables are derived in (13), which contain coupling terms of the form a_ij[(e_i(t) + x̂_i(t)) − (e_j(t) + x̂_j(t))] + K a_i0 (e_i(t) + x̂_i(t)). Furthermore, by stacking the transformed states and errors into compact vectors and letting ε(t) = [ẽ^T(t), 0_{1×mn}]^T, (16) and (4) can be rewritten as the compact closed-loop system (17), with coefficient matrices defined in (18). According to the defining equations of x̃_i(t) and h̃_i(t), if they are asymptotically stable at the origin, then x̂_i(t) and h_i(t) asymptotically equal x̂_0(t) and h_0(t), respectively. Then x_i(t) asymptotically reaches x_0(t), since x_i(t) = x̂_i(t) − h_i(t) and x_0(t) = x̂_0(t) − h_0(t). Therefore, the consensus of the leader-following multiagent systems (1) and (2) is reformulated as the asymptotic stability problem of system (17). In other words, if (17) is asymptotically stable at the origin, then the multiple agents reach consensus. Furthermore, the norm ‖T_zω(s)‖_∞ remains unchanged before and after the model transformation, and thus the H∞ performance of system (17) is equivalent to that of the multiagent system. In conclusion, if (17) is asymptotically stable with the disturbance attenuation performance ‖T_zω(s)‖_∞ < γ, then the multiagent system can reach consensus with the disturbance attenuation level γ.

Theorem 1. For a given index γ > 0, system (17) is asymptotically stable with the disturbance attenuation performance ‖T_zω(s)‖_∞ < γ if there exist a positive definite matrix P and a positive parameter α such that the inequality (21) is satisfied.

Proof. Firstly, asymptotic stability is demonstrated under the assumption ω(t) = 0. A Lyapunov function V(t) is constructed and its derivative is bounded along the trajectories of (17) by using the event-triggered condition (11). Applying Lemma 1 to the inequality (21) indicates V̇ < 0, and therefore system (17) is asymptotically stable.

Secondly, consider the H∞ performance of system (17) with disturbance ω(t) ≠ 0. Define the cost function J(t) = ∫_0^t [z^T(s) z(s) − γ² ω^T(s) ω(s)] ds. Then, under the zero initial condition, J(t) can be bounded in terms of V(t) and the left-hand side of (21). According to the inequality condition (21), it is proved that J(t) ≤ 0 for all t ≥ 0, from which it yields ∫_0^∞ z^T(s) z(s) ds ≤ γ² ∫_0^∞ ω^T(s) ω(s) ds, which is equivalent to ‖T_zω(s)‖_∞ < γ. Combining with Section 3.2, it is demonstrated that the closed-loop multiagent system reaches consensus with the desired disturbance attenuation ability ‖T_zω(s)‖_∞ < γ. This completes the proof.

Remark 2. Based on Theorem 1, the gain matrices in the proposed event-triggered protocol, that is, K and G, can be determined by the following steps. To summarize, it is shown that if (32) holds, then the consensus condition (21) is satisfied. In order to calculate the gain matrices, let P^{−1} take the form in (33), where P̄ is a positive definite matrix. Then, by substituting (18) and (33) into (32) and denoting the new variables as in (34), we obtain a linear matrix inequality (LMI) in terms of the matrix variables P̄, Q, and R. Once these three matrices are found by using the LMI toolbox of MATLAB, the feedback matrices K and G can be determined from them.

Theorem 2. Under the event-triggered protocol (8), (9), (10), and (11), the leader-following multiagent system does not exhibit Zeno behavior.

Proof. Before consensus is achieved, ∑_{j∈N_i} a_ij ‖x_i(t) − x_j(t)‖ > 0 holds between any two triggering times, and there is a bound on the growth of the measurement error between triggers: ‖ė_i(t)‖ = ‖A e_i(t) − B u_i(t) − G C h_i(t)‖ ≤ ‖A‖ ‖e_i(t)‖ + ‖B‖ ‖u_i(t)‖ + ‖GC‖ ‖h_i(t)‖. From ḣ_i(t) = (A + GC) h_i(t) − B_1 ω_i(t), a corresponding bound on ‖h_i(t)‖ follows. Obviously, there exists only one solution t* to φ_i^k/‖A‖ (e^{‖A‖(t − t_k^i)} − 1) = η_k, which satisfies t* − t_k^i > 0 and t* ≤ t_{k+1}^i. Consequently, it is derived that t_{k+1}^i − t_k^i ≥ t* − t_k^i > 0, which completes the proof.
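Remark 2 computes the protocol gains with the LMI toolbox of MATLAB, but the specific inequalities (21) and (32)–(34) are not reproduced above. As a loosely related illustration rather than the paper's procedure, the sketch below obtains a stabilizing feedback gain K and an observer gain G for a generic triple (A, B, C) from two Riccati equations with SciPy; all numerical matrices are placeholders, and the observer sign convention matches the error dynamics ḣ = (A + GC)h quoted in the proof of Theorem 2.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder system matrices (A, B, C); not the paper's numerical example.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])

# State-feedback gain K from a control Riccati equation.
Qx, Ru = np.eye(3), np.eye(1)
P = solve_continuous_are(A, B, Qx, Ru)
K = -np.linalg.solve(Ru, B.T @ P)          # u = K x is stabilizing

# Observer gain G from the dual (filter) Riccati equation on (A^T, C^T).
Qw, Rv = np.eye(3), np.eye(1)
S = solve_continuous_are(A.T, C.T, Qw, Rv)
G = -(S @ C.T) @ np.linalg.inv(Rv)         # observer: x_hat' = A x_hat + B u + G (C x_hat - y)

# Sanity check: A + B K and A + G C should both be Hurwitz.
print("eig(A + BK):", np.linalg.eigvals(A + B @ K))
print("eig(A + GC):", np.linalg.eigvals(A + G @ C))
```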
4. Simulation

In this section, a numerical simulation is given to verify the theoretical results. A multiagent system consisting of one leader and four following agents is considered, whose dynamics are modeled as (1) and (2) with the corresponding system matrices. The external disturbances are simulated by band-limited white noise with band limit [0, 0.1] and action period [0, 4] s. Set the initial states as x_0 = [1, 2, −2]^T and x_1 = x_2 = x_3 = x_4 = [0, 0, 0]^T. The communication topology among the agents is given in Figure 1. The state trajectories of the multiagent system are shown in Figures 2-4, from which we can see that the following agents reach consensus with the leader approximately during the first four seconds, while the external disturbances are active, and then achieve exact consensus after the disturbances disappear. From Figure 5, it is obvious that the energy of the controlled output is lower than that of the external disturbances, which indicates that the system realizes the H∞ disturbance attenuation performance with γ = 1. Besides, the numbers of triggering times of the agents are n_0 = 144, n_1 = …

5. Conclusion

In this paper, the output-feedback event-triggered consensus was addressed for the linear leader-following multiagent system with external disturbances. A distributed observer-based event-triggered protocol was proposed using the output information, under which the system could reach consensus with the desired disturbance attenuation ability and no Zeno behavior occurred. The effectiveness of the developed protocol was demonstrated by a numerical simulation.

Figure 1: Communication topology of the leader-following system.
Figure 2: The first state component of each agent.
Figure 5: The energy of external disturbances and the controlled output.
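Figure 5's comparison of output and disturbance energy corresponds to the bound ‖T_zω(s)‖_∞ < γ. For any explicit state-space realization of T_zω (not reproduced above), this norm can be estimated numerically with a frequency sweep of the largest singular value; the sketch below uses made-up matrices purely for illustration.

```python
import numpy as np

def hinf_norm_estimate(A, B, C, D, w_min=1e-3, w_max=1e3, n_points=2000):
    """Estimate ||C (jwI - A)^{-1} B + D||_inf by a frequency sweep."""
    freqs = np.logspace(np.log10(w_min), np.log10(w_max), n_points)
    n = A.shape[0]
    peak = 0.0
    for w in freqs:
        T = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        peak = max(peak, np.linalg.svd(T, compute_uv=False)[0])
    return peak

# Illustrative (stable) closed-loop realization; placeholder values only.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

gamma = 1.0
norm = hinf_norm_estimate(A, B, C, D)
print("estimated H-infinity norm:", norm)
print("attenuation level gamma satisfied:", norm < gamma)
```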
v3-fos-license
2020-11-26T09:06:13.047Z
2020-11-18T00:00:00.000
229476782
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://ojs.library.queensu.ca/index.php/pocus/article/download/14431/9431", "pdf_hash": "80b1007842b1f26a6cd27239ed040f268afcad84", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:230", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "sha1": "ca11933e9bbb2df2dd00fefb69f76db415385200", "year": 2020 }
pes2o/s2orc
POCUS for Visualization and Facilitation of Urinary Catheter Placement

The use of point-of-care genitourinary ultrasound allows dynamic visualization of urinary catheter placement within the bladder and serves to minimize the potential for traumatic injury to the prostate and urethra during difficult insertion.

Introduction

The use of point-of-care genitourinary ultrasound allows dynamic visualization of urinary catheter placement within the bladder and serves to minimize the potential for traumatic injury to the prostate and urethra during difficult insertion.

Case Presentation

A 61-year-old man with a pertinent past medical history of benign prostatic hyperplasia (BPH) was admitted for bilateral L1-L3 laminoforaminotomy for symptomatic lumbar stenosis. Routine intraoperative urinary catheter placement with a 14 French Coudé catheter was attempted by operating room (OR) nursing staff following induction and intubation. The catheter was placed, and the balloon was inflated without resistance. However, there was no return of urine immediately or in the hour following placement, even though the patient had reported the urge to urinate prior to induction of anesthesia. A point-of-care ultrasound of the bladder and prostate was performed, demonstrating approximately 200 mL of urine retained in the bladder. There was high suspicion for an improperly placed catheter even though the initial placement was uncomplicated and the catheter was inserted up to its hub. Ultrasound of the bladder and prostate failed to show the catheter balloon in the bladder. Dynamic manipulation of the catheter under POCUS showed its tip repeatedly abutting and traumatizing the prostate at the bladder neck without advancement into the bladder (see online Video S1). At this point urology was consulted for difficult urinary catheter placement. Repeat placement of both 14 and 18 French Coudé catheters was attempted without return of urine and was complicated by trauma of the genitourinary tract, evidenced by blood at the meatus. Flexible cystoscopy was performed to assist visualization of the urethra but was complicated by blood and clots obscuring the view. The decision was made to switch to a flexible ureteroscope to gain access into the bladder. Given the degree of trauma, anatomic challenges, and bleeding in the prostatic urethra, visualization via the flexible ureteroscope was limited, so real-time POCUS was used to slowly advance the flexible ureteroscope into the bladder while minimizing trauma to the prostatic urethra. A guidewire was then advanced through the ureteroscope under direct visualization into the bladder (see online Video S2). The ureteroscope was removed and a 16 French Council tip catheter was advanced over the guidewire into the bladder with efflux of clear urine. The balloon was inflated, with proper catheter tip placement in the bladder confirmed by ultrasound (see online Video S3).

Bladder Ultrasonography

The bladder is typically imaged using a 3.5-5 MHz transducer via a transabdominal suprapubic approach. The bladder is best visualized when full. On transverse imaging, the normal urinary bladder appears as an almost rectangular shape with thin walls (<4 mm in thickness). Sagittal images of the bladder can be obtained by rotating the probe 90 degrees from the transverse plane; a normal bladder usually appears triangular in shape. The bladder outlet can be visualized by tilting the tail of the probe down towards the umbilicus when in the transverse position.
A normal urine-filled bladder with no clots or masses appears completely anechoic within the walls. Bladder volume is estimated using the formula [1] where w = maximum diameter in transverse plane, d = maximal diameter in sagittal plane, and h = maximum depth in sagittal plane. Discussion This case illustrates the importance of genitourinary POCUS in visualizing proper or improper placement of a urinary catheter, estimating bladder volume, and avoiding prostatic and urethral trauma during difficult catheter placement. Even routine urinary catheter placement subjects the patient to risks of trauma and infection [2]. This risk is increased in patients with BPH as the prostate compresses the prostatic urethra and makes passage of a urinary catheter more difficult [3]. In this case of a patient with known BPH, passage of an intraoperative urinary catheter appeared to be uncomplicated, yet there was no efflux of urine even with the bladder holding 200 mL urine. Failure of a urinary catheter to drain urine after uncomplicated placement can be caused by an empty bladder, placement of catheter tip in the urethra, clogged catheter tip, kinked catheter, or creation of a false passage. The most severe complications is creation of a false passage, as this can result in pain, abscess formation, urethrocutaneous fistula, and infection [4]. The use of POCUS to facilitate placement of a urinary catheter over a hydrophilic guidewire in a patient with BPH and prior placement of suprapubic catheter has been reported [5]. Further, POCUS has been used to dynamically guide a Foley catheter into the uterine cavity to tamponade life-threatening post-procedure hemorrhage [6]. This case demonstrates the role of POCUS in visualizing the tip of a urinary catheter when the catheter fails to efflux urine after placement and can differentiate from false passage versus kinked catheter versus inadequate tip advancement into the bladder. Repeated attempts at re-insertion in the absence of direct ultrasound visualization can result in trauma, bleeding, and edema requiring indwelling Foley catheter for a prolonged period of time. Furthermore, flexible video cystoscopy may not be available and bladder POCUS offers an alternative visualization method. Lastly, if trauma is suspected, cystoscopy may be of limited utility as blood and clots may impair the cystoscope's view. POCUS offers visualization of urinary catheter within the bladder and prostate even in the setting of a urethral trauma.
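The bladder-volume formula referenced above did not survive extraction; it is defined in terms of the three diameters w, d, and h. A common approach, shown here only as an assumption and not necessarily the exact formula of reference [1], is an ellipsoid-style estimate, volume ≈ k × w × d × h, with a correction coefficient k often taken between roughly 0.52 and 0.75. A minimal sketch:

```python
def estimate_bladder_volume_ml(w_cm: float, d_cm: float, h_cm: float,
                               k: float = 0.52) -> float:
    """Ellipsoid-style bladder volume estimate in millilitres.

    w_cm: maximum diameter in the transverse plane
    d_cm: maximal diameter in the sagittal plane
    h_cm: maximum depth in the sagittal plane
    k:    correction coefficient (assumed value; check the cited source)
    """
    return k * w_cm * d_cm * h_cm

# Example: a bladder measuring about 8 x 7 x 7 cm suggests roughly 200 mL,
# similar to the retained volume described in the case.
print(round(estimate_bladder_volume_ml(8, 7, 7)))
```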
v3-fos-license
2016-05-12T22:15:10.714Z
2014-09-02T00:00:00.000
3262443
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0106336&type=printable", "pdf_hash": "6ed6604e6d00ef62de620f939414426a5f33aff2", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:232", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "sha1": "6ed6604e6d00ef62de620f939414426a5f33aff2", "year": 2014 }
pes2o/s2orc
Body Size Diversity and Frequency Distributions of Neotropical Cichlid Fishes (Cichliformes: Cichlidae: Cichlinae) Body size is an important correlate of life history, ecology and distribution of species. Despite this, very little is known about body size evolution in fishes, particularly freshwater fishes of the Neotropics where species and body size diversity are relatively high. Phylogenetic history and body size data were used to explore body size frequency distributions in Neotropical cichlids, a broadly distributed and ecologically diverse group of fishes that is highly representative of body size diversity in Neotropical freshwater fishes. We test for divergence, phylogenetic autocorrelation and among-clade partitioning of body size space. Neotropical cichlids show low phylogenetic autocorrelation and divergence within and among taxonomic levels. Three distinct regions of body size space were identified from body size frequency distributions at various taxonomic levels corresponding to subclades of the most diverse tribe, Geophagini. These regions suggest that lineages may be evolving towards particular size optima that may be tied to specific ecological roles. The diversification of Geophagini appears to constrain the evolution of body size among other Neotropical cichlid lineages; non-Geophagini clades show lower species-richness in body size regions shared with Geophagini. Neotropical cichlid genera show less divergence and extreme body size than expected within and among tribes. Body size divergence among species may instead be present or linked to ecology at the community assembly scale. Introduction The importance of body size on the life history, ecology and distribution of species has been highlighted continuously in the literature [1][2][3][4]. Nevertheless, little work has been completed to answer broad questions of body size evolution and its importance. In addition, few empirical studies have addressed the evolutionary processes that underlie body size distributions across geographic space and time [5]. Many studies, particularly in mammals, have addressed how body size is distributed on a broad geographic scale [6][7][8] or across the fossil record [9], but an understanding of how body size is distributed in a phylogenetic context among organisms is far from complete [5,10]. Cope's Rule, the phyletic increase in body size over evolutionary time [11][12], and Bergmann's Rule, the increase of body size with increases in latitude [13], have been proposed based on the mammalian fossil record to outline fundamental patterns of body size distribution. Recent studies from other taxa have suggested that these ''rules'' do not always apply, and may be the exception rather than the rule [14][15][16]. Body size, like other phenotypic traits, is expected to be similar among closely related taxa due to evolutionary constraints on morphology tied to biologically and ecologically relevant characters [17,18]. Yet there are likely many exceptions where closely related taxa are highly divergent in body size: body size divergence could allow habitat or resource partitioning in coexisting congeneric species [19,20] or other closely related taxa, while body size shifts associated with ecomorphological differentiation could allow access to novel habitats or unused resources [21,22]. 
Furthermore, if body size is so important for physiological and ecological processes, evolution towards extreme body size, especially small body size, must result in one or several evolutionary trade-offs in life-history and ecological characteristics of these species [21,23]. But at what taxonomic resolution should we see these trade-offs occurring? How closely related would we expect species to be that share both the same body size and the same suite of behaviours, reproductive modes, diet preferences or morphologies? Only recently has body size distribution been examined in fishes, with several studies primarily investigating the distribution of body size across geographic space [16,24] or with regard to basic ecological characters [25][26][27][28]. The evolutionary history of body size has rarely been addressed in fishes on a broad geographic scale that links possible phylogenetic constraints of body size evolution over time with the ecological and geographic distribution of extant taxa [29][30][31]. Body size reduction in fishes has been shown to be a common phenomenon in tropical systems [27,32], particularly in freshwater environments [33,34]. As a consequence, distributions of fishes in the tropics tend to be right-skewed [27], though direction and intensity of skew varies depending on evolutionary history, environmental characteristics and ecology [35]. Though previous work has begun to outline the link between body size and ecology in fishes, very little is known about body size distributions, evolution and the consequences of occupying a particular body size space in Neotropical fishes. In this study we examine the distribution of body size in a phylogenetic context across Neotropical cichlids, a group of fishes with a broad geographic distribution that is also highly diverse in species and ecological roles. By examining this system we intend to outline potentially important drivers that may influence body size evolution in Neotropical fishes as a whole. Cichlids have been used as a model to study a number of biological questions in fishes due to their ecological versatility and life history traits [36][37][38]. The radiation of cichlids generated extraordinary diversity both taxonomically and ecologically within a single family. The clade of Neotropical cichlids (subfamily Cichlinae) is the third most species-rich lineage in South America following Characidae and Loricariidae [39], but a robust hypothesis of evolutionary history for these latter two groups has yet to be developed. Moreover, Cichlinae shows a high degree of ecological and body size diversity in addition to high species diversity. Despite this diversity, very little is known about Neotropical cichlid body size diversification. Sexual dimorphism does occur in several genera of Neotropical cichlids [39,40] but it is poorly characterized, and where present, it is not known whether it is associated with sexually selected behavioural traits (e.g. sneaker males, shell-dwelling) or if it has marked impacts on ecological strategies as seen in dimorphic species of African cichlids [41]. We are not familiar with studies that have examined body size in all described Neotropical cichlids, and very few studies have examined the association between body size and ecological or life history traits that could be driving patterns we identify in this study. Most of the species richness in Cichlinae is distributed among three tribes. 
Geophagini is the most species-rich (243 species [42]) and, along with Cichlasomatini (74 species [42]), is primarily restricted to South America. Heroini (176 species [42]) is the second most diverse tribe following Geophagini and has expanded from South America into Central and North America. Recent work on Neotropical cichlids has shown that the process underlying body size evolution in Cichlinae may vary among tribes, but that body size may have diverged early in the evolution of this group, which may have resulted in the accumulation of higher body size diversity over time and possibly greater divergence among distantly related lineages [43]. These analyses, however, were performed on a relatively small subset of taxa, with distributions and evolutionary patterns of body size below the tribe level not explicitly explored. Body size in Cichlinae spans a large portion of the body size space occupied by Neotropical fishes, ranging from 21 mm standard length (SL) in Apistogramma staecki to 990 mm total length (TL) in Cichla temensis. Such body size diversity in Cichlinae provides a case study to investigate how extreme body size impacts various aspects of life history, ecology and distribution of freshwater fishes. High diversity of ecology in this group presents morphological disparity and ecological diversification ideal for examining factors affecting body size evolution and a strong understanding of phylogenetic relationships within Cichlinae [44] provides the phylogenetic framework for addressing associations with body size in an evolutionary context. As a prerequisite for identifying the underlying processes and forces driving body size evolution in Cichlinae, we need a clear understanding of patterns of body size diversity at relevant phylogenetic levels and across the geographical distribution of the group. To this end, the purpose of this study is to 1) quantitatively characterize body size frequency distributions and space occupation in various clades, 2) determine if body size is randomly distributed at and among various phylogenetic levels, 3) determine if body size variation correlates with phylogenetic relatedness, and 4) to distinguish small and large-bodied taxa as a foundation for future work in body size evolution of Neotropical cichlids. Data Collection and body size frequency distributions Maximum body size available for valid fish species within the subfamily Cichlinae were taken from FishBase (see Table S1) [42]. To ensure data accuracy, body size data provided in FishBase were compared against original sources provided within the database (see Table S2). Maximum body sizes previously found from museum specimens to be larger than published data in the literature [43] were used in this study (See Table S1). Measurements were given either in standard length (SL, length from the tip of the upper lip to end of caudal peduncle), or total length (TL, length from snout to posterior edge of caudal fin). Total length data were concentrated in Heroini species. To maximize taxon sampling in our dataset we included both SL and TL data to incorporate all variation available at the genus level and representatives from all genera within Cichlinae. We tested the effect of using the two different measures and determined that exclusion of species for which only TL was available did not change the results found at the tribe or major clade (See Appendix S1, Table S3), and only affected results at the genus level (see below) in a few taxa. 
Exclusion only resulted in significant changes at the genus level within some heroines (12 genera, e.g. Hoplarchus, Parachromis, Archocentrus) in which most or all body size was given in TL. Untransformed body size data was typically skewed, and therefore log-transformed body size data were used in all analyses unless otherwise noted. Log transformation of body size data also reduces the potential bias of TL data within distributions and analyses. Unless otherwise noted, discussions of body size with regards to the results of the study and the interpretation of the figures refer to log-body size (LBS). Species assignment and phylogenetic partitioning of the data for subclade analyses within Cichlinae followed [44]. Body size data for the subfamily Cichlinae was first partitioned by tribe. The three most diverse tribes Geophagini, Heroini and Cichlasomatini following [44] represent monophyletic clades that encompass the vast majority of taxonomic diversity and were used for further subdivision into less inclusive taxonomic units ( Fig. 1), while all other tribes contain too few taxa to be partitioned further. Geophagini is composed of two major clades [44], Crenicichla-Apistogramma-Satanoperca (CAS) that is more species-rich and has higher morphological disparity than the Geophagus-Gymnogeophagus-Dicrossus (GGD) clade [45,46]. Body size distributions of genera were analyzed separately in these two clades due to high species diversity as well as differences in ecological attributes of species. Heroini is the only tribe of Neotropical cichlids that inhabits both South and Central America, even extending into the very southern regions of North America. To test if body size frequency distributions (BSFDs) were influenced by geographic expansion, we also analyzed Heroini by separating species into South and Central American groups (hereby referred to as SA or CA heroines) ( Fig. 1) Although we separate heroines geographically, it is relevant to clarify that SA heroines do not comprise a monophyletic clade. Central American heroines are monophyletic at the most basal level, but include several lineages distributed in South America that have Central American affinities [44]. The CAS clade, GGD clade, CA heroines, SA heroines and Cichlasomatini as a whole were then subdivided into the genera identified by ( [44], their Fig. 1). Analysis of body size frequency distributions within Cichlinae Ecologically relevant morphological traits in Neotropical cichlids are more similar within genera than among genera [45] and phenotypic divergence that resulted in currently recognized genera often followed an early-burst pattern of evolution [43,46]. We expected to observe body size distribution patterns that were consistent with other phenotypic data which found higher similarity within clades than among-clades. Therefore we wanted to characterize body spaces occupation of clades and determine if divergence in body size is present within or among clades in Cichlinae. We analyzed the BSFDs of Neotropical cichlids with available body size data at the subfamily, tribe, major clade and genus level (Fig. 1). Analyses were only conducted on monophyletic groups with two or more taxa, with the exception of the SA Heroini (see above). The mean, standard deviation, range, 25% and 75% quantiles and interquartile range (IQR) were calculated for each BSFD. Significant deviations in mean body size may indicate shifts in body space occupation among clades. 
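As an illustration of the quantities just listed, the following sketch computes them, together with the skew and kurtosis discussed next, for one clade's log-transformed body sizes; the species values are invented placeholders rather than entries from Table S1.

```python
import numpy as np
from scipy import stats

# Hypothetical maximum body sizes (mm) for one clade; real values come from Table S1.
body_size_mm = np.array([35, 42, 55, 60, 78, 90, 110, 150, 220, 250])
lbs = np.log10(body_size_mm)                      # log body size (LBS)

q25, q75 = np.percentile(lbs, [25, 75])
summary = {
    "n": int(lbs.size),
    "mean": lbs.mean(),
    "sd": lbs.std(ddof=1),
    "range": lbs.max() - lbs.min(),
    "q25": q25,
    "q75": q75,
    "IQR": q75 - q25,
    "skew": stats.skew(lbs),
    "kurtosis": stats.kurtosis(lbs),              # excess kurtosis (0 = normal)
}
for name, value in summary.items():
    print(f"{name}: {value:.3f}" if isinstance(value, float) else f"{name}: {value}")
```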
Significantly lowered standard deviation, range and interquartile range would indicate a lowering of body size diversity within clades, suggesting constrained size, while increases would indicate expansions in body size diversity. Location of 25% and 75% quantiles together also indicate information about body size diversity, proportion of taxa within certain body size ranges and skew, but interpretation is not as clear. In addition, distributions were tested for unimodality using Hartigan's dip test [47] and characterized by kurtosis and skew. Platykurtosis, flatter as compared to a normal distribution, and leptokurtosis, more peaked than a normal distribution, give indications of constraints around the mean body size. Right-skew indicates a higher proportion of small-bodied species while left-skew indicates a higher proportion of large-bodied species within a distribution. To determine if BSFDs were different among phylogenetic levels, distributions were first compared using the Kolmogorov-Smirnov (K-S) two-sample test. This test identifies differences between two observed frequency distributions and is particularly sensitive to deviations in skew, kurtosis and location along the body size gradient. We employed a Bonferroni correction to account for multiple comparisons among genera in the K-S analysis (P adj ,0.00004). The K-S only determines whether two distributions differ, but does not identify what aspects of the distribution drive those differences. We employed a bootstrapping method to test for random distribution of BSFDs between phylogenetic levels [5,48]. The BSFD of the higher taxonomic unit containing the focal clade was resampled to create 1000 randomly assembled BSFDs equal in size to the focal clade (see above). The mean, standard deviation, range, 25% and 75% quantiles, interquartile range, kurtosis and skew were then calculated for each of the1000 BSFDs. A distribution of each summary statistics expected under a random phylogenetic distribution was then obtained. We assumed that the summary statistics of the observed data could either be higher or lower than the simulated data, so a two-tailed adjusted alpha level of 0.05 was applied. Summary statistics for observed body size distributions found below and above the 2.5% and 97.5% quantiles of the simulated summary statistic distributions were considered to significantly deviate from summary statistics describing randomly distributed simulated data. A p-value was not directly calculated for each bootstrap simulation, but all deviations under or over the above thresholds were reported as significant (p,0.05). We also compared observed data to the 0.25% and 99.75% quantiles (P,0.005) (see Table S4). The BSFDs of subclades were compared to bootstrap pseudo-distributions created from respective clades in each successive phylogenetic level. If clades showed few or no deviations from pseudodistributions, body size was considered a random subset of the containing higher taxonomic level, suggesting low phylogenetic autocorrelation. Clades that show a number of significantly different summary statistics have a specialized or partitioned body size space occupation as compared to distributions at higher taxonomic levels. We also tested for phylogenetic autocorrelation [43]. Phylogenetic nomenclature used in this paper is represented by coloured boxes for the tribes and major clades. See [44] for details on phylogenetic reconstruction and taxonomic conventions. 
CAS and GGD refer to the Crenicichla-Apistogramma-Satanoperca and Geophagus-Gymnogeophagus-Dicrossus clades, respectively, of Geophagini. doi:10.1371/journal.pone.0106336.g001 in body size using Moran's I [5,10] to determine if body size was randomly distributed within a given taxonomic level. Values of Moran's I fall between -1 and 1, with higher values indicating the trait is more similar within taxonomic units than expected at random, 0 indicating random distribution, and values approaching -1 indicating the trait is more different than random. To account for potential taxonomic error, all genera were compared to the higher clade and tribe that contained them (Fig. 1). The distributions of major clades were then compared to their respective tribe. Finally the BSFDs of each tribe were compared to the BSFD of Cichlinae. The genera Crenicichla and Teleocichla of Geophagini are known to form a monophyletic clade with Teleocichla potentially interspersed among Crenicichla species [44,49], therefore body size of all species in both genera were analyzed together. Heroini contains a paraphyletic, catch-all genus 'Cichlasoma' which was not analyzed at the genus level [50,51]. Species of 'Cichlasoma' with body size data available were included in the Heroini and CA Heroini BSFDs to be resampled with phylogenetic assignment following [44]. Characterization of cichlid body size frequency distributions Based on the data available from FishBase [42], we were able to include 498 cichlid species in our analyses of BSFDs (Table S1). This represents approximately 88% of the valid Neotropical cichlid species listed on FishBase at the time of the study. The bootstrap analyses found that only nine genera across the three main tribes had significantly smaller means than expected if body size was randomly distributed throughout the phylogeny, while eight genera had higher than expected mean body size (Table S4). These findings were generally consistent when comparing distributions of genera to respective major clades as well as at the tribe level (Table S4). Occurrences of mean body size deviation were not more frequent in any particular tribe (Geophagini 6/16; Heroini 8/25; Cichlasomatini 3/10) or major clade and no tribe was biased towards smaller or larger body size (Table S4). Standard deviation was typically lower in all significant results, and lowering of body size diversity was particularly apparent in the CAS clade of Geophagini (5/9 cases; GGD 2/7; SA heroines 2/7; CA heroines 6/18; Cichlasomatini 1/10) (Table S4). Deviations in skew and kurtosis were not typically found at any phylogenetic level ( Table 1). The mean of the CAS BSFD was significantly lower than expected while the standard deviation and IQR were higher ( Table 1). The maximum and minimum body sizes are not significantly different than expected. Despite this, the 25% and 75% quantiles were closer to the extremes of the distribution than expected (Table 1; Fig. 2A). The distribution was also significantly more platykurtic and with a higher right-skew than Geophagini. The BSFD of CAS is also strongly bimodal (p = 0.0026) ( Fig. 2A), with the small-bodied peak (LBS 1.54, 35 mm) coinciding with the distribution of Apistogramma (Fig. 3), and the large-bodied peak (LBS 2.40, 250 mm) coinciding with the peak of Crenicichla (Fig. 3). Despite the bimodality of CAS, none of its subclades deviate from a unimodal distribution. 
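The bootstrap comparisons summarized above follow the procedure described in the Methods: draw 1000 random samples of the focal clade's size from the containing clade, recompute the summary statistic, and flag observed values falling outside the 2.5% and 97.5% quantiles of the simulated distribution. A schematic implementation, using placeholder data and resampling with replacement, might look like this:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_test(focal_lbs, parent_lbs, stat=np.mean, n_iter=1000, alpha=0.05):
    """Two-tailed test of whether a clade's statistic differs from random draws
    of the same size taken from the containing (parent) clade."""
    observed = stat(focal_lbs)
    sims = np.array([
        stat(rng.choice(parent_lbs, size=len(focal_lbs), replace=True))
        for _ in range(n_iter)
    ])
    lo, hi = np.quantile(sims, [alpha / 2, 1 - alpha / 2])
    return observed, (lo, hi), bool(observed < lo or observed > hi)

# Placeholder log body sizes for a small-bodied genus and for the tribe containing it.
genus = np.log10(np.array([35, 38, 40, 45, 52]))
tribe = np.log10(rng.uniform(30, 300, size=120))

obs, (lo, hi), significant = bootstrap_test(genus, tribe)
print(f"observed mean {obs:.2f}, null 95% interval ({lo:.2f}, {hi:.2f}), deviant: {significant}")
```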
The mean of GGD was significantly higher than expected at LBS 2.07(118.2 mm) while standard deviation and IQR were lower than expected (Table 1; Fig. 2B). Minimum body size and 25% quantile were higher than expected. The BSFD of GGD did not significantly deviate from unimodality, however it was more platykurtic and left-skewed than expected. SA heroines had a significantly lower mean than expected, and a slight trend towards lower standard deviation than expected (Table 1). CA heroines did not deviate from the BSFD of Heroini in any summary statistic, except for a higher mean and 75% quantile than expected ( Table 1). The BSFD of Cichlinae was unimodal, slightly left-skewed with a mean of LBS 2.08 (120.2 mm), and IQR from LBS 1.89 (77.6 mm) to 2. 28 (190.5 mm). Bootstrap analyses revealed Geophagini had a significantly lower mean of LBS 1.97 (93.3 mm), accompanied by significantly lower minimum body size, maximum body size, 25% quantile and 75% quantile ( Table 1). Geophagini tended towards a bimodal distribution (p = 0.0642) (Fig. 4A), which is also reflected by a significantly higher standard deviation and IQR as well as being significantly platykurtic. The mean of Heroini BSFD was significantly higher, at LBS 2.18 (151.0 mm), than expected (Table 1). Heroini shows significantly lower standard deviation and IQR, suggesting a restricted range of body size. This is supported by a significantly larger minimum body size and 25% quantile, however the 75% quantile is higher than expected. The mean of Cichlasomatini of LBS 1.98 (94.6 mm) and standard deviation were significantly lower than expected (Table 1). Maximum body size, 75% quantile and IQR were also significantly lower than expected while the minimum body size was higher. Distribution of body size among taxonomic levels If body size is randomly distributed across a phylogeny (i.e. no phylogenetic autocorrelation), we would expect little deviation of BSFDs between clades and their subclades, as well as high similarity between distributions of related subclades at the same taxonomic level [5,10]. High correlation within clades (phylogenetic autocorrelation) should result in considerable partitioning of body size space. Only 46 out of 1249 pairwise comparisons between Cichlinae genera showed significantly different BSFDs from each other (p adj ,0.00004) using the K-S test (for comparisons of major clades see Table 2; results at genus level not shown), supporting randomly distributed body size across genera. Of these 46 cases, 14 comparisons involved Cichla (Cichlini), a large-bodied piscivorous genus occupying a body size space that few other taxa occupy. In addition, 17 other cases involved the ''dwarf'' cichlid genus Apistogramma (Geophagini), which significantly differed from several genera across all three tribes. The remaining 15 cases typically involved comparisons between genera from different tribe affinities rather than divergence of genera within the same tribe. Analysis of phylogenetic autocorrelation in Cichlinae showed strong body size correlation with phylogenetic history at the genus level (I = 0.7010, p,,0.05), however at more inclusive taxonomic levels (subclade, tribe) body size was not correlated with phylogenetic history. Species within a genus are more similar in body size to each other than expected at random, however at higher taxonomic levels body size may or may not be similar in closely related groups. 
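The genus-level autocorrelation reported here (Moran's I = 0.70) can be illustrated with a simplified version of the statistic in which the weight matrix simply encodes shared genus membership; this is only a schematic stand-in for the phylogenetic weighting of [5,10], and the body sizes and genus labels below are placeholders.

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I for a trait vector and a (zero-diagonal) weight matrix."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()
    num = (w * np.outer(z, z)).sum()
    den = (z ** 2).sum()
    return (len(x) / w.sum()) * num / den

# Placeholder data: log body sizes and genus labels for eight species.
lbs = np.array([1.50, 1.60, 1.55, 2.30, 2.40, 2.35, 2.00, 2.05])
genus = np.array(["A", "A", "A", "B", "B", "B", "C", "C"])

# w_ij = 1 when two different species share a genus, then row-standardized
# (every genus here has at least two members, so no zero rows).
w = (genus[:, None] == genus[None, :]).astype(float)
np.fill_diagonal(w, 0.0)
w = w / w.sum(axis=1, keepdims=True)

print(f"Moran's I = {morans_i(lbs, w):.2f}")   # values near +1: strong within-genus similarity
```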
The BSFDs of all three major tribes were found to be significantly different from that of Cichlinae and from each other using the K-S test ( Table 2). The CAS clade was not significantly different from the BSDF of Geophagini while the GGD clade was found to be significantly different from both the BSFD of Geophagini and the CAS clade (Table 2). SA heroines and CA heroines did not significantly differ from the BSFD of Heroini or each other based on the K-S test. Divergence in body size space In addition to looking at phylogenetic autocorrelation of body size among taxonomic levels, we also wanted to explore body size space occupied by closely related taxa. We compared the location of distributions in body size space as well as bootstrap analysis Within Geophagini there are several cases of divergence in body size space within closely related genera. Within the CAS clade, the distributions of Apistogramma and Satanoperca do not show overlap in body size space (Fig. 3). Biotoecus and Acarichthys (See Fig. S1), proposed sister-groups, also do not show overlap in body size space, though no bootstrap results support divergence greater than expected at random (Table S4). The sister-groups Guianacara and Mazarunia occupy a narrow range of body size space around 100 mm (LBS 2.0) and do not show significant divergence from each other (Fig. S1). The small-bodied space dominated by Apistogramma is primarily shared with Teleocichla and some small-bodied Crenicichla, while large body size space is equally shared between Satanoperca and Crenicichla (Fig.3). In the GGD clade, the sister-groups Dicrossus and Crenicara show significant divergence based on bootstrap results (Table S4) and nonoverlapping distributions in body size space (Fig. S1). While the sister-groups Geophagus and Gymnogeophagus show considerable overlap in body size space (Fig. S1), bootstrap results suggest that the range of body size in Gymnogeophagus is significantly reduced and overlaps only with the lower end of the Geophagus distribution (Table S4). In addition, the distribution of Geophagus occupies a narrower body size space, shifted towards large body sizes. Smaller bodied taxa (Mikrogeophagus, Dicrossus, Crenicara; Fig. 3) show little or no overlap with the distributions of Geophagus and Gymnogeophagus while Biotodoma occupies a narrow range of body size space around 100 mm (LBS 2.0) (Fig. S1). Body size distributions of genera within Cichlasomatini commonly overlap in closely related genera. Two exceptions occur between sister-groups Nannacara and Cleithracara as well as Acaronia and Laetacara (Fig. S1). In both cases, no overlap is seen between taxa although these divergences are not supported strongly by the bootstrapping analyses (Table S4) due to the low species richness of these genera. In CA heroines, divergence was difficult to assess due to the paraphyletic and unresolved nature of many groups. However, two cases of divergence occur in South America between Uaru and its sister clade containing Symphysodon and Heros as well as between Hoplarchus and Hypselecara (Fig. S1). No overlap is seen between taxa, but again these divergences are not supported by the bootstrapping analyses likely due to low species richness (Table S4). Divergence in body size space Morphological traits associated with diet in Neotropical cichlids are known to be more similar within genera than among genera [45]. 
Since body size is often strongly linked to diet and feeding [52] and morphology [53] in fishes, we hypothesized body size would also show this pattern of higher similarity within genera than among genera. A reduction of body size range and variation as compared to a random phylogenetic distribution was expected if species within a genus share more similar body sizes. However, this pattern was only consistent within the CAS clade of Geophagini, perhaps due to the specialized ecological roles within this clade compared to others [43,46] that is associated with particular body size regions (Table 1; Fig. 3). In addition, we expected a low degree of body size overlap among clades, consistent with ecological divergence and niche partitioning hypotheses [20,52]. Body size distributions of genera within and among major clades or tribes did not show a high amount of body size divergence. This was supported by results of the K-S test, bootstrap simulations and analysis of phylogenetic autocorrelation. Most differences among genera occur with Apistogramma (Geophagini) or Cichla (Cichlini) (Fig. 3), which occupy extreme areas of small and large body size space, respectively, and which are rarely occupied by other genera. Body size can be an important determinant of niche partitioning or overlap [54] as well as of trophic level and resource use [52], but among Cichlinae, it is not highly divergent in a strict phylogenetic context, and may in fact be more important within the context of community ecology and assembly [28,54]. Body size divergence in distantly related taxa may allow partitioning of resources within a community by differing in resource use (e.g. microhabitat, prey type and size), while divergent ecologies in similarly sized cichlids may also allow for coexistence. Analyses at the community level will be needed to understand the role of body size in the ecological and geographic assembly of Neotropical cichlid fishes. The distributions of SA and CA heroines were also not highly divergent (Fig. 3), but there was a trend towards lower diversity in SA heroines and smaller body size. Heroine species diversity is higher in Central America, which may be expected to increase body size diversity. However, mean body size of CA heroines is higher than expected, suggesting that despite having more species, evolution in CA heroines may be biased towards body size increase. This trend may, in part, be influenced by the higher success of large-bodied species at dispersal [26,27,55,56], and therefore the founders of the Central American colonization may have been primarily larger-bodied cichlids. The recent finding that Heroini body size may be evolving under an adaptive-peak model of evolution [43] suggests that novel environmental pressures or opportunities in Central America could have also acted on dispersing cichlids to drive increases in CA Heroine body size [57,58]. Restricted body size occupation in SA heroines could be associated with ecological constraint as compared to CA heroines, particularly as a result of interaction with the much more diverse Geophagini [43]. The diversification of ecological roles, particularly the evolution of piscivorous heroines in Central America, could be related to the origin of larger body size in middle American heroines [43,52]. Body size diversity and the evolution of extreme body sizes Body size diversity appears to be higher in Geophagini (Fig. 4), which has a higher standard deviation and IQR than expected (Table 1). 
However, Heroini spans the largest range of body size space while Cichlasomatini has the narrowest range. Despite having significantly larger body size, less than 4% of heroines have increased in body size beyond the largest Geophagine cichlids, and still do not occupy a unique body size space (Fig. 4). This extremely large body size space is also occupied by Cichla and Astronotus, large predatory cichlids from two different speciespoor tribes (Cichlini and Astronotini, respectively). In contrast, heroine cichlids have not expanded into small bodied space to the extent that Geophagini has (Fig. 4). In Geophagini, approximately 10% of species occupy a region of body size unoccupied by any other tribe while 25% of species occupy a region of body size unoccupied by heroine cichlids. The genus Apistogramma (Fig. 3) not only dominates in the extent of body size reduction within Cichlinae as a whole (minimum BS 21 mm SL, LBS 1.32), but also in the total number of species that occupy this unique space (22 spp below 36 mm SL, LBS 1.57; 53 spp below 50 mm SL, LBS Table 2. Partitioning of body size within Cichlinae and major clades. 1.70). Comparatively, only ten species from other groups also occupy this space. The restriction of extremely small-bodied species to particular genera and conservation of body size within these groups suggests that morphological evolution may be constrained at smaller size, which could limit ecological opportunity within small-bodied taxa [52,59]. Though variance in body size was not correlated with body size (results not shown), in that large-bodied clades do not have higher body size diversity relative to small-bodied clades, evidence for ecomorphological constraint within these smallbodied taxa has been found in Geophagini. Small-bodied genera within Geophagini were found to converge on the same trophic functional morphospace below a body size threshold of 100 mm SL [46] (LBS 2.0, identified in Fig. 3 by a prominent dashed line for comparison). This threshold appears to be supported by the BSFD of the CAS clade, but our results also suggest that at least three regions of body size space may better characterize body space occupation, each with possible size-specific ecological roles in Neotropical cichlids. Cichlinae is significantly left-skewed compared to other Neotropical fishes, with a higher proportion of large-bodied species. This finding is inconsistent with typical patterns found in tropical riverine fishes [27] where increasingly right-skewed distributions occur in lower latitudes. Below the subfamily level, clades did not typically deviate in skew and therefore extreme body size reduction is rare in Cichlinae. Subsequently, Geophagini offers a unique case to study the ecologies of small-bodied fishes prevalent in the Neotropics and consequences of extreme body size reduction. The most species-rich genus of Neotropical cichlids, Crenicichla (including Teleocichla) has the largest range of body size, from 39.8 mm SL (LBS 1.60) to 312 mm SL (LBS 2.49). Though this range only occupies one third of the body size range of all Cichlinae and does not reach the lower and upper extremes, this range does encompass 75% of species within the BSFD of Cichlinae (Fig. 3). Crenicichla and Teleocichla, though not as ecologically diverse as Geophagini, could offer a more practical group to study narrow questions of body size evolution in a strict phylogenetic, ecological and geographic context. Is body size adaptive? 
Recent studies of BSFDs in North American freshwater fishes found that bimodality in distributions was typically influenced by the presence of small-bodied, resident, habitat specialists and large-bodied, migratory, habitat generalists [27]. Recently, strong partitioning of ecologically meaningful body shape attributes [43] and trophic functional morphospace [46] was found in Cichlinae and it is likely that this ecological differentiation is correlated with patterns in BSFDs across the group. The distribution of body size in the GGD clade of Geophagini was found to be located between the two modes of the CAS clade, producing a third mode of body size that suggests partitioning of total body size morphospace. The CAS clade began diversifying before all other species-rich clades within Cichlinae [43] and may have constrained body size diversification within other lineages. In addition, the distributions of GGD and Cichlasomatini are also located between the two modes of the CAS clade and show narrowing of body size space occupation (Fig. 3). SA heroines occupy a similar body size space to the large-bodied mode of the CAS clade, but have considerably less species diversity. This pattern of roughly complementary body size distribution among clades may further support the hypothesis that competition and ecological opportunity may be important in the diversification of traits, including body size, in South American cichlids. Analyses of ecomorphology [43] and biomechanics [46] have shown a high degree of diversity in Geophagini as compared to South American Heroini and Cichlasomatini, a pattern consistent with our findings in this paper. Interestingly, the BSFD of CA heroines, which are geographically separated from other Neotropical cichlids, occupies the same body size space as the large-bodied CAS Geophagini and show comparable species richness (Fig. 3). This result is also consistent with patterns found in body shape disparity [43], with heroines occupying nonoverlapping morphospace with Geophagini in South America, then expanding into these newly available ecomorphospace regions once geographically separated (Fig. 3). The pattern of BSFDs complementarity at the major clade and tribe levels support a hypothesis of three possible body size optima in Cichlinae: a ''miniature'' body size optimum around 35 mm (SL), a '' mid-sized'' optimum around 100 mm (SL) and a ''large-bodied'' optimum around 250 mm (SL) (Fig. 2; Fig. 4). The CAS and GGD clades of Geophagini show modes around these three optima that are quantitatively supported as distinct from each other by the K-S tests and bootstrap analyses (Fig. 2, Tables 1 and 2), while the body sizes of Cichlasomatini and South American heroines form a unimodal distribution around the 100 mm optimum (Figs. 3; Fig. 4). Finally the CA heroines have a significantly right shifted but unimodal distribution around the large-bodied optimum of 250 mm SL (Fig. 3). These results are inconsistent with the idea of a phylogenetically directional pattern of evolution towards large body size, as proposed by Cope's rule. Decreases in body size are a common phenomenon in Neotropical cichlids, and are present in both old and young clades, although the phylogenetic directionality of body size changes and number of independent reductions still need to be tested directly. Likewise, testing for body size optima was beyond the scope of this paper as it should be done using phylogenies with complete or near-complete species-level sampling. 
Instead, until more detailed phylogenies become available, we employed less phylogenetically robust analyses incorporating all species with available body size data to determine patterns of body size within phylogenetic groups while accounting for as much body size variation in each group as possible. We were also unable to directly test the association between body size, morphology and ecology for all Neotropical cichlids since data is largely unavailable for most species used in this study. This paper outlines a number of broad patterns in body size within a widely distributed group that is both taxonomically and ecologically diverse. Body size distributions of related clades show an unexpected amount of overlap, and an understanding of how body size is associated with geographic separation, community structure and ecological divergence may shed light on why Neotropical cichlids so frequently occupy overlapping body size space. Cichlinae represents the most studied Neotropical freshwater fish lineage to date. With a strong knowledge of the evolutionary processes driving ecological diversification in the context of a robust understanding of the phylogenetic history of the group, we can derive and test hypotheses of diversification in an explicit macroevolutionary context. Such an approach should reveal how body size evolves in Neotropical freshwater fishes and the ecological consequences or opportunities associated with body size space occupation. Appendix S1 Effects of total length data on body size distributions. Description of the effects of the inclusion and exclusion of total length data on summary statistics. (DOCX)
v3-fos-license
2020-06-11T09:04:23.739Z
2020-06-01T00:00:00.000
219588781
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2073-4409/9/6/1420/pdf", "pdf_hash": "39cd8f3c87eefe042563e6c7ee464bbfc04501ca", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:234", "s2fieldsofstudy": [ "Psychology", "Biology", "Medicine" ], "sha1": "0a9afb0cb2dec871f6aef941bdf2fb48dea2d586", "year": 2020 }
pes2o/s2orc
Dementia, Depression, and Associated Brain Inflammatory Mechanisms after Spinal Cord Injury Evaluation of the chronic effects of spinal cord injury (SCI) has long focused on sensorimotor deficits, neuropathic pain, bladder/bowel dysfunction, loss of sexual function, and emotional distress. Although not well appreciated clinically, SCI can cause cognitive impairment including deficits in learning and memory, executive function, attention, and processing speed; it also commonly leads to depression. Recent large-scale longitudinal population-based studies indicate that patients with isolated SCI (without concurrent brain injury) are at a high risk of dementia associated with substantial cognitive impairments. Yet, little basic research has addressed potential mechanisms for cognitive impairment and depression after injury. In addition to contributing to disability in their own right, these changes can adversely affect rehabilitation and recovery and reduce quality of life. Here, we review clinical and experimental work on the complex and varied responses in the brain following SCI. We also discuss potential mechanisms responsible for these less well-examined, important SCI consequences. In addition, we outline the existing and developing therapeutic options aimed at reducing SCI-induced brain neuroinflammation and post-injury cognitive and emotional impairments. Introduction As the primary relay center of neural transmission between the brain and the rest of the body, damage to the spinal cord can be a devastating event. Chronic evaluation of spinal cord injury (SCI) has focused on sensorimotor deficits, neuropathic pain, bladder/bowel dysfunction, loss of sexual function, and emotional distress [1][2][3][4][5]. Although clinical studies have reported that 40-60% of SCI patients show cognitive and emotional deficits [6][7][8][9][10][11][12][13], the cause of such changes has been debated because of potentially confounding factors such as concurrent traumatic brain injury (TBI). However, studies addressing this issue in SCI cases without TBI confirmed impairments in cognitive function [7][8][9]11,14,15]. Such cognitive/emotional impairments can compromise not only quality of life but also rehabilitation and recovery. Prior research on physiopathology following SCI has focused on injured spinal cord and its anterograde and retrograde pathways to the brain. Conversely, limited work has examined potential effects of SCI on brain regions that impacts learning and memory or emotions. Identifying mechanisms responsible for these less-well-examined, important SCI consequences could provide targets for more effective therapeutic interventions that improve outcome. To date, our mechanistic understanding of this disease process is informed by clinical studies but requires deeper exploration using well-controlled Figure 1. Blood-brain barrier (BBB) and blood-spinal cord barrier (BSB) permeability assay. A T10 spinal cord contusion injury (moderate/severe injury) was produced in young adult C57BL/6 male mice (2-3 months old) using the Infinite Horizon spinal cord impactor as previously described [38,40] [ 158,190]. 100 μL of saline solution containing 10% sodium fluorescein and 2% Evans blue was injected by tail vein (100 μL/mouse) at 1 d, 3 d, and 7 d after SCI. 
At 30 min after dye injection, mice were perfused with 100 mL of saline and injured thoracic spinal cord (SPC), lumbar and cervical SPC, as well as cerebral cortex and hippocampus were dissected for fluorescent assay (sodium fluorescein at 485/528 nm, Evans blue at 470/680 nm). * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001 vs. Sham. One-way ANOVA following Tukey's multiple comparisons test. However, consistent with the clinical pathology, increased activation and functional reorganization in the somatosensory cortex could be observed in the immediate aftermath of SCI [46,47]. Of the few studies that examine neuronal function in the brain after SCI, signs of neurodegeneration, mitochondrial swelling, and vacuolated cytoplasm were observed in the hippocampus along with elevated levels of injury biomarkers in the cerebral spinal fluid [36,48]. In addition, rodent models of experimental SCI also induce neuronal atrophy. Increased brain expression of Calbindin-D (28 K), caspase-3, and Bax protein are associated with increased neuronal apoptosis in the primary motor cortex [49][50][51]. Cell loss in this region was shown to reduce motor evoked potentials, indicating that SCI alters the excitability and functionality of upper motor neurons [49]. Interestingly, injecting/transplanting brain derived neurotrophic factor (BDNF)-secreting cells at the SCI lesion site ameliorated pyramidal neuron loss in the rhesus macaque, providing further mechanistic insight into SCI-induced brain injury, and suggesting that mitigating injury in the spinal cord via of supplementation of neurotrophic factors or otherwise may limit or even prevent neuronal damage in the brain [52]. However, additional examination is needed as reports from subsequent studies have been mixed, with some showing no observable neuronal loss in the cortex following SCI [53,54]. Reasons for these disparate results are not fully understood; however, injury severity, time after injury, and differences in experimental modeling can all affect pathological outcomes. Further investigation of the underlying mechanisms is needed to fully understand SCI-induced cognitive and mood disorders. Neuroinflammation and Neurodegeneration in the Brain after SCI Chronic inflammation occurs in pain regulatory areas such as brainstem and thalamus after SCI, with posttraumatic hyperesthesia associated with plasticity or electrophysiological alterations [55][56][57][58][59][60][61][62][63][64][65][66][67][68][69][70][71]. Chemokines CCL2 and CCL3 are chronically expressed in thalamus, hippocampal CA3 and dentate gyrus (DG), and periaqueductal gray matter after severe SCI [61]. Our recent autoradiography studies [39] in male rats after SCI, using a new translocator protein 18 kDa (TSPO) ligand [ 125 I] IodoDPA-713 [72] (a new probe for imaging inflammation in clinical PET studies), revealed that cortex, thalamus, hippocampus, cerebellum, and caudate/putamen all showed chronic brain inflammation. Moreover, flow cytometry analysis demonstrated that moderate/severe SCI in C57BL/6 male mice caused significantly increased levels of proinflammatory cytokine IL6 in the brain ( Figure 2). These data complemented microscopy findings showing chronic microglial activation in brain after SCI [38][39][40]73,74]. Glial activation was confirmed in the sub-granular zone and molecular layer of the DG in the hippocampus in a severity-dependent manner; such activation was only found in moderate and severe SCI, but not mild [73]. 
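For reference, the group comparisons quoted in the figure legends (one-way ANOVA followed by Tukey's multiple comparisons for the multi-group permeability data, and a Mann-Whitney test for the two-group flow-cytometry data) can be sketched with standard Python tooling; the measurements below are invented placeholders standing in for the real fluorescence and IL6 readings.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical fluorescence readings (arbitrary units) per group, as in Figure 1.
groups = {
    "Sham":   rng.normal(1.0, 0.1, 5),
    "SCI_1d": rng.normal(2.0, 0.3, 5),
    "SCI_3d": rng.normal(1.8, 0.3, 5),
    "SCI_7d": rng.normal(1.4, 0.2, 5),
}

# One-way ANOVA followed by Tukey's multiple comparisons.
f_stat, p_anova = stats.f_oneway(*groups.values())
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(f"ANOVA p = {p_anova:.4g}")
print(pairwise_tukeyhsd(values, labels))

# Two-group comparison with the Mann-Whitney U test, as in Figure 2B.
sham_il6 = rng.normal(2.0, 0.5, 4)
sci_il6 = rng.normal(5.0, 1.0, 5)
u_stat, p_mw = stats.mannwhitneyu(sham_il6, sci_il6, alternative="two-sided")
print(f"Mann-Whitney p = {p_mw:.4g}")
```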
Moreover, increased levels of IL1α and TNFα were observed in the hippocampus of rats with anxiety/depressive-like behavior after SCI [75]. Modulating inflammation has recently been shown to improve mood in patients with SCI [76]. Thus, isolated thoracic SCI in rats and mice causes widespread progressive chronic neuroinflammation, leading to neurodegeneration in key brain regions associated with cognitive dysfunction and depression. However, the precise molecular mechanisms underlying these changes have not been elucidated.

Figure 2. Increased proinflammatory cytokine IL6+ microglia occur in the brain after SCI. A T10 spinal cord contusion injury (moderate/severe injury) was produced in young adult C57BL/6 male mice (2-3 months old) using the Infinite Horizon spinal cord impactor as previously described [38,40]. At seven days after injury, mice were perfused with ice-cold PBS, and the brain hemisphere was isolated for preparation of a single-cell suspension using a standard FACS protocol. Cells were then incubated with Fc Block prior to staining with primary antibody-conjugated fluorophores: CD45-Bv421, CD11b-APC/Fire™ 750, and Zombie Aqua™ viability dye. Cells were then subjected to fixation/permeabilization for cytokine labeling (i.e., IL-6-PE). All reagents were obtained from BioLegend Inc. (A) A representative histogram shows the relative frequency of IL-6-positive brain-resident microglia at seven days after sham and SCI surgery. FMO: fluorescence minus one; SSC-A: side scatter-area. (B) The percentage of IL6-positive brain microglia is quantified. N = 4 (Sham) and 5 (SCI) mice. * p < 0.05 vs. Sham with Mann-Whitney test.
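As an aside on the statistics quoted in the Figure 1 and Figure 2 legends, the sketch below shows how such group comparisons are typically carried out in Python: a one-way ANOVA followed by Tukey's multiple comparisons test for the multi-group barrier-permeability data, and a two-sided Mann-Whitney test for the two-group flow cytometry comparison. This is only an illustrative sketch with made-up numbers; it is not the analysis code used in the cited experiments.

```python
# Minimal sketch (not the authors' analysis code) of the group comparisons named in the
# figure legends above: one-way ANOVA followed by Tukey's multiple comparisons test
# (Figure 1) and a two-sided Mann-Whitney test (Figure 2). All values are placeholders.
import numpy as np
from scipy.stats import f_oneway, mannwhitneyu
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical Evans blue fluorescence readings (arbitrary units) per group.
sham   = rng.normal(10, 2, size=5)
sci_1d = rng.normal(25, 4, size=5)
sci_3d = rng.normal(20, 4, size=5)
sci_7d = rng.normal(15, 3, size=5)

# One-way ANOVA across the four groups, then Tukey's HSD for pairwise comparisons.
f_stat, p_anova = f_oneway(sham, sci_1d, sci_3d, sci_7d)
values = np.concatenate([sham, sci_1d, sci_3d, sci_7d])
groups = (["Sham"] * 5) + (["SCI 1d"] * 5) + (["SCI 3d"] * 5) + (["SCI 7d"] * 5)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Two-group comparison as in Figure 2: % IL6+ microglia, Sham (n = 4) vs. SCI (n = 5).
pct_sham = rng.normal(5, 1, size=4)
pct_sci  = rng.normal(12, 3, size=5)
u_stat, p_mw = mannwhitneyu(pct_sham, pct_sci, alternative="two-sided")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {p_mw:.4g}")
```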
Neurogenesis, the process of generating new neurons from neural stem cells, is usually restricted to fetal and peri-natal time periods in the developing brain and ends shortly after birth. In the adult brain, however, neurogenesis may still be observed in the subgranular zone of the DG in the hippocampus and in the subventricular zone in the lateral wall of the lateral ventricles [77]. Psychiatric and neurodegenerative disorders are often associated with altered neurogenesis [78,79], and some theorize that cognitive deficits in the elderly result from a gradual decrease in hippocampal neurogenesis during the aging process [80]. Impaired neurogenesis is also seen in the hippocampus with chronic stress and depression [81,82]. With regard to SCI, the data are less clear. Glial activation has been shown to reduce neurogenesis [83]. By extension, SCI-induced brain inflammation could also impact neurogenesis. Several studies reported decreased neurogenesis in the brain at chronic time periods after SCI [38,40,73,84]. Another study, however, found an increase in gliogenesis in the brain following SCI, whereas neurogenesis remained unaltered [85]. This could reflect differences in timing and location, as the former examined the effects of a cervical injury at 90 days after SCI, whereas the latter investigated effects of a thoracic injury at 42 days after SCI. Injury severity may also play a role. Indeed, Jure et al. reported that although severe and moderate SCI reduced neurogenesis, mild SCI did not [73].

To this end, our laboratory has shown that discrete regions of the brain exhibit significant signs of cell cycle arrest and decreased numbers of immature neurons in the male murine hippocampus after SCI to the T10 segment [38,40].
These findings, combined with the widely explored theory of impaired neurogenesis being one of the underlying mechanisms of cognitive decline, may provide an explanation as to why SCI patients have a significantly higher risk of dementia. Although adult neurogenesis is well established in rodents, whether and to what extent adult neurogenesis occurs in humans remains controversial [86,87].

The Influence of Aging on SCI-Mediated Cognitive Impairments

According to the report from the National Spinal Cord Injury Statistical Center (NSCISC), the average age at the time of injury has increased from 29 years in the 1970s to approximately 42 years currently (https://www.sci-info-pages.com/facts.html). Moreover, today individuals with SCI have an average life expectancy of more than 30 years; however, given that such injuries now occur more frequently in older persons than previously, these individuals are more susceptible to problems associated with ageing and dementia. Indeed, cognitive functioning is negatively correlated with age in individuals with SCI [13]. It is well known that ageing potentiates inflammation and neurodegeneration at the injury site and impairs recovery from CNS trauma [88][89][90][91][92]. Aging is also an important pathogenic factor in other neurodegenerative disorders, including Alzheimer's disease (AD) [93,94]. Recent large-scale longitudinal population-based studies [14,16,20] showed that patients with SCI are at higher risk of developing dementia than non-SCI patients, indicating that SCI is a potential risk factor for dementia. Therefore, it is intriguing to investigate the mechanisms of SCI-induced dementia and its relationship to age of onset or to age-related neurodegenerative disorders such as AD. On the other hand, elderly patients with impaired cognitive function are at risk of sustaining falls [95,96], which are an important cause (30%) of SCI (NSCISC). Furthermore, patients with dementia, such as AD, could have a higher risk of falls [97][98][99] and, therefore, an increased risk of SCI. Thus, there may be an emerging confluence of SCI and dementia in the elderly population that represents a significant unmet healthcare challenge. We and others [37-40,49,51,73-75] show that cognitive impairments and depression are detected weeks to months after isolated thoracic experimental SCI and that progressive neuronal loss and microglial activation occur in brain regions involved in memory and learning. Extending these observations, using aging and aged animals should help elucidate underlying mechanisms and generate new treatment approaches.

Anterograde and Retrograde Mechanisms

Although mounting evidence now indicates that SCI causes a significant change in brain function, the underlying anatomical pathways and molecular mechanisms are not clear. Anterograde and retrograde connections between the spinal cord and various brain regions certainly exist. The contusion site may also affect distal brain regions via the production and diffusion of chemokines. For instance, the cysteine-cysteine chemokine ligand 21 (CCL21) was shown to be produced in lumbar dorsal horn neurons around the SCI lesion site, but was also found in the thalamus, cerebral cortex, and hippocampus at later time points [39,40,55,100,101]. Subsequent work by De Jong et al. demonstrated that neuronal CCL21 is sorted into large dense-core vesicles and transported into axons, providing evidence for the directed transport from one neuron to another [100].
Indeed, it is now accepted that CCL21 is transported from/throughout neuronal processes into presynaptic structures [101]. This would suggest that chemokines can be transported in an anterograde manner to areas distant from the lesion site. In addition to CCL21, SCI increases expression of CCL2 and its receptor CCR2 in the thalamus, hippocampus, and periaqueductal gray matter at chronic time points after injury [61]. Further examination is needed to determine the functional role of these and potentially other chemokines once they reach the brain via anterograde (e.g., through spinothalamic pathways) transport.

Distal Release of CCL21

Neuronal CCL21 is a potent microglial-activating chemokine [55,100-104]. We and others have reported [39,40,55,57] that SCI triggers up-regulation of CCL21 in a number of brain regions including thalamus, hippocampus, and cerebral cortex; increased thalamic CCL21 levels are associated with microglial activation and hyperpathia. CCL21 is not detected in healthy neurons, glial cells, or other non-neuronal cells in the brain [105]. That CCL21 is specifically expressed in injured neurons and may act as a signal from damaged neurons to microglia was first reported in 2001 [103]. Subsequent studies [100,101,104,106] confirm that CCL21 is synthesized by damaged neurons, transported by axons, and released to activate microglia, a phenomenon that has also been observed in humans [107,108]. Thus, CCL21 in the CNS is exclusively expressed in injured neurons and can cause neuroinflammation at sites distant from the injury site. In SCI [55], thalamic levels of CCL21 are rapidly reduced after spinal cord blockade using lidocaine, supporting the view that SCI elevates CCL21 levels in the brain. Increased CCL21 in more distant neurons after SCI [40,55,57] may reflect subsequent damage to second-order neurons by the induced microglial activation. It is plausible that (1) microglia activated by CCL21 release pro-inflammatory cytokines that are toxic to surrounding neurons, leading to more distant CCL21 release through their axonal transport, consistent with the evidence for elevated CCL21 signals in broad brain regions at later time-points after SCI [40]; and (2) such delayed increases of CCL21 in more distant regions are associated with progressive chronic neuroinflammation [39]. An alternate hypothesis is that activated microglia release microparticles, which contain pro-inflammatory molecules that can contribute to the spread of brain inflammation. It is known that microparticles are extracellular vesicles that play a critical role in cell-cell communication, including between immune cells and their targets. We recently reported [109] that microglial-derived microparticles can mediate progressive, spreading neuroinflammation after TBI. A schematic diagram for CCL21 axonal transport and its effects on microglial activation is illustrated in Figure 3.
Although CCL21 is up-regulated under pathological conditions, the receptor responsible for the CCL21-dependent microglial activation is unclear [105,110,111]. There are two known receptors for CCL21 in mice: CCR7 and CXCR3 [112]. CCR7 is not found in microglia under basal conditions, but it can be induced in vitro and in vivo [103,113-115]. In contrast, CXCR3 is constitutively expressed in cultured microglia and in acutely isolated microglia [114]. However, neither CCR7 nor CXCR3 deficiency had a major impact on the development of neuropathic pain, in contrast to the striking phenotype in the absence of their ligand CCL21. In agreement with earlier studies [102,105,106,110], we were not able to detect CCR7 mRNA or CXCR3 mRNA in the brain, even after SCI. Thus, neither in vitro nor in vivo studies have clearly defined a functional role for CCL21 signaling in microglia. However, the underlying mechanisms of CCL21-triggered detrimental microglial activation and the associated functional outcomes, including cognitive deficits, remain intriguing questions for future investigation.

Systemic Immune Functions

Another serious complication in SCI patients is systemic immune dysfunction. The peripheral immune response is complicated, with some studies reporting an increase in systemic inflammatory activation and others showing impaired immunological responses. SCI induces systemic increases in immune cells and pro-inflammatory factors [116]. Free radical production increases significantly in neutrophils isolated from SCI patients compared to control subjects [117].
Expression of the NADPH oxidase subunit gp91(phox) and nitric oxide synthase is increased in blood [117]. SCI increases NOD-like receptor family pyrin domain-containing 3 (NLRP3) inflammasome formation in peripheral tissues [118]. Levels of the pro-inflammatory cytokines TNFα, IL-1β, and IL-6 are increased in serum in a rat spinal cord ischemia injury model [119]. SCI can also disturb neuroendocrine functions by activating the hypothalamic-pituitary-adrenal axis, causing systemic inflammation by inducing production of macrophage migration inhibitory factor from the pituitary gland [120]. Collectively, increased free radical production, inflammasome activation, and cytokine production may exacerbate leukocyte infiltration in the brain and promote excessive microglial activation and chronic inflammation [116]. Contrary to systemic immune activation, SCI can also impair systemic immune function. In clinical studies, SCI survivors have increased morbidity for infections due to the development of SCI-induced immune depression syndrome (SCI-IDS), which is a system-wide deficit of immune surveillance [121]. SCI-IDS is thought to be caused by severing thoracolumbar spinal cord projections, disrupting sympathetic nervous system signaling [122]. Hallmarks of SCI-IDS include splenic atrophy, leukopenia, reduced anti-microbial activity, and impaired humoral immunity. Although SCI-IDS may have a positive effect by reducing the potential for auto-immune damage to the CNS [123], these complications compromise the patient's immunity, resulting in higher costs of care and increased rehospitalization rates. Importantly, pathological changes to the immune system as a whole may promote brain inflammation and significantly impede neurological recovery after SCI.

Chronic Neuropathic Pain

Chronic neuropathic pain, a common secondary complication of SCI, also plays a major role in the development of depression [18,124]. In the context of clinical cases, persistent pain has been reported in approximately 65% of people affected by SCI [3]. In a cohort study by Perry et al., approximately half of the individuals undergoing rehabilitation after SCI reported "moderate" or "severe" pain [125]. Persistent pain experienced by SCI patients is thought to exacerbate cognitive dysfunction and be detrimental to recovery. In fact, Murry et al. found that post-SCI pain significantly correlated with neurological behavior and can be used as a predictor of cognitive function, emotional function, and quality of life [15]. In a recent Swedish survey, neuropathic pain was one of the most critical factors contributing to the low quality of life reported by SCI patients, surpassing bladder dysfunction, problematic spasticity, pressure sores, and sexual dysfunctions [126]. However, whether pain directly affects cognition is still debated. Post-SCI pain may indirectly lead to cognitive decline and poor recovery by promoting immobility and sleep disturbance. Lack of physical activity is known to be a risk factor for cognitive impairment and dementia, whereas moderate to high levels of physical activity protect against cognitive impairment [127,128]. Given that most SCI patients cannot participate in physical activity at an adequate level and only 50% engage in any physical activity during leisure time, it is not surprising that SCI patients are at higher risk of developing cognitive impairments [129,130].
In addition to the lower quality of life that patients experience due to pain and cognitive-emotional disturbance after SCI, depression can also impede the physical rehabilitation process and exacerbate health problems associated with SCI [15]. One study reported that approximately one-third of SCI patients have symptoms of depression up to 10 years after injury, while others found that up to 78% of rehabilitation patients reported chronic pain [131,132]. Despite the difference in frequency, both studies consistently found that SCI patients are more anxious and depressed than control subjects [133]. Avluk et al. reported a positive correlation between pain severity and the development of depression [134]. While the exact mechanisms remain to be determined, it is plausible that stress caused by persistent pain underlies these changes in neurological condition. Studies show a strong positive association between usual pain intensity and psychological distress, with significant differences in usual pain severity when those with and without possible clinical levels of anxiety and depression were compared [131,135]. This bidirectional relationship suggests that pain can lead to depression, and that depressed patients may be more sensitive to pain. Indeed, a study combining rat models of chronic constriction injury and chronic mild stress showed that depressive-like behavior was associated with a heightened aversion to painful stimuli, implying that depression can cause increased sensitivity to pain [136]. Those results suggest the possibility of a positive feedback loop wherein chronic SCI pain increases psychological distress and depression, which in turn increases sensitivity to pain. Together, pain and depression impact cognitive function and recovery. A nationwide study conducted in Canada found that chronic SCI patients diagnosed with neuropathic pain and depression had an increased risk of cardiovascular disease, such as heart disease and stroke [137]. Depression is also associated with a two-fold risk of dementia [138,139]. This may be partially due to reduced physical activities and leisure activities in depressed patients [140]. Thus, chronic pain not only increases the prevalence of depression in SCI patients, but depression may also lead to the development of dementia in these patients. It is important to recognize that chronic pain is prevalent in SCI patients and significantly correlates with cognitive decline and depression. However, to what degree chronic pain causes those problems requires further investigation. The presence of unremitting pain may lead to dementia, cognitive dysfunction, and depression by altering patients' behavior, such as reducing physical activity and sleep quality. This differs from direct anterograde or retrograde signaling or from physiological alterations such as induced systemic inflammation, adding another layer of complexity to how SCI induces cognitive and mood dysfunction. Careful observation should be made to determine whether focused efforts on pain management in SCI patients can have extended benefits on cognitive and emotional well-being, and vice versa.

Anti-Depressants

Due to the detrimental effects that mood disorders have on SCI patients, mood stabilization and maintaining good mental health are of the utmost importance to recovery and rehabilitation. As described above, depression is associated with decreased physical activity and increased risk of developing dementia [139,140].
Anti-depressant drugs can be advantageous in treating SCI-mediated depression, including improving mood status and quality of life following surgery, and reducing the risk of delirium and suicide [141]. Venlafaxine may be more appropriate for patients with SCI presenting with depression and/or nociceptive pain [142][143][144]. In addition, antidepressant medications have also been proposed as treatments for SCI pain. Some reports show an increased effectiveness of tricyclic antidepressants (TCAs) on severe depression in SCI patients compared to those without SCI [145]. TCAs may also be effective in ameliorating SCI-induced pain. An eight-week clinical trial showed that the TCA medication amitriptyline was significantly more effective at ameliorating neuropathic pain in SCI patients compared with diphenhydramine [146]. However, the efficacy of antidepressants in SCI pain management is still debated. The antidepressant duloxetine was reported to alleviate dynamic and cold allodynia but had no effect on tactile or pressure pain thresholds [147]. In experimental SCI models in rats, there is no correlation between locomotor functional recovery, assessed with the Basso, Beattie, and Bresnahan (BBB) scale, and performance on depression tests, including the sucrose preference, forced swim, open field, social exploration, and burrowing tasks [37], indicating that the characterization of depression does not depend on motor recovery. One anti-depressant showing promise in experimental SCI is fluoxetine. Combined treatment with fluoxetine and treadmill gait training yields better BBB scores in SCI rats than no treatment or either fluoxetine or treadmill gait training alone [148]. Fluoxetine is a selective serotonin reuptake inhibitor that allows higher extracellular serotonin concentrations and prolonged activation of serotonin receptors. This might be one reason fluoxetine improves BBB scores, since serotonin receptor activation is known to contribute to motor function recovery after SCI [149]. Furthermore, fluoxetine is known to induce the production of neurotrophic factors and affect brain physiology. High-dose fluoxetine (21 days of daily i.p. injections of 25 mg/kg) increased BDNF production and hippocampal neurogenesis in rats following SCI [150]. However, there were no synergistic effects of combining exercise and fluoxetine treatment on neurotrophic factor expression. Although BDNF concentrations did not change in the spinal cord with either physical activity or fluoxetine treatment, IGF-1 levels and cytogenesis were significantly decreased with high-dose fluoxetine alone [150]. However, a low dose of fluoxetine (5 mg/kg/day) that has been shown to be subthreshold for increasing motor activity significantly decreased immobility in the forced swim test in depressed SCI rats without altering locomotor functional recovery [37]. These findings indicate that fluoxetine has differential effects on the spinal cord and brain, including anti-inflammatory actions. Furthermore, whether mood improvement by fluoxetine treatment affects cognitive function following SCI remains unclear. Ultimately, a better understanding of modifiable risk factors underlying cognitive impairment and depression in SCI subjects could lead to the development of more effective interventions to treat these symptoms.
Cell Cycle Activation Inhibition

Although anti-depressants may potentially alleviate the symptoms of post-SCI depression, further research on the molecular mechanisms driving this neurological condition could yield more promising methods of treatment or prevention. One of these potential mechanisms involves cell cycle activation (CCA), which has been shown experimentally to be harmful after neurotrauma [151]. CCA is known to be an important secondary injury mechanism after TBI or SCI [57,60,152-167] that contributes to early posttraumatic neuronal cell death as well as to chronic neuroinflammation that leads to delayed progressive neurodegeneration. Using experimental SCI contusion models, we showed that treatment with cyclin-dependent kinase (CDK) inhibitors reduces neuronal death and microglial/astrocyte reactivity, attenuates lesion volume, and improves motor recovery [57,155-158,160]. Our recent data indicate that CCA increased not only in the lesioned spinal cord but also in various brain regions (thalamus, cortex, hippocampus) following SCI [38,39,57]. The expression of cell cycle genes is increased in multiple brain regions, where mRNA levels of the cyclin A1, cyclin A2, cyclin D1, and PCNA genes were significantly increased as early as the first few days after SCI and remained elevated at three months post-injury [38]. Cyclin D1 protein levels were also elevated in these regions [38,57]. Among initiating factors for CCA, the E2F1 transcription factor showed elevations in the hippocampus 24 h after injury in both rat and mouse, consistent with our previous report in the injured spinal cord [160]. E2F1-3 are members of the activator sub-family of E2F transcription factors and play a key upstream role in CCA [168,169]. Thus, E2Fs may represent a potential target to modulate these pathways. Importantly, early post-injury treatment with CR8, a potent inhibitor of multiple CDKs, largely prevented both posttraumatic neuroinflammation and progressive neurodegeneration in the brain, as well as long-term cognitive dysfunction and depressive-like behavior [38,39]. As CCA causes cell death of post-mitotic cells (neurons, oligodendroglia) as well as activation and proliferation of mitotic cells (microglia, astrocytes), with neuroinflammation and secondary neurotoxicity [161], the effectiveness of CCA inhibitors likely involves multiple cell types.

Targeting Inflammation

A possible mechanism underlying the high dementia risk among patients with SCI is posttraumatic neuroinflammation and associated neurodegeneration. As the major cellular component of the innate immune system in the CNS, microglia play a critical role in the response to CNS trauma. In response to injury, microglia can produce neuroprotective factors, clear cellular debris, and orchestrate neurorestorative processes that are beneficial for neurological recovery. However, dysregulated microglia can also produce high levels of pro-inflammatory and cytotoxic mediators that hinder CNS recovery [170,171]. Chronic neuroinflammation often continues for months to years after CNS trauma [170-181], contributing to delayed, progressive neuronal cell loss and neurological dysfunction. Chronic brain neuroinflammation after SCI is associated with progressive neurodegeneration in the brain regions associated with cognitive impairment [40].
Based on recent experimental work, an alternate hypothesis is that SCI-induced neuroinflammation affects hyperpathic pain, depression, and cognition. Considerable data also suggest that chronic neurotoxic inflammation may be a critical pathogenic mechanism in neurodegenerative disorders including AD [93,94]. SCI-mediated neuropsychological abnormalities are not necessarily "reactive" symptoms, but may reflect specific pathobiological changes that can be targeted [38-40,75]. Recent randomized clinical trials reported that targeting inflammation improves mood and neuropathic pain following SCI [76,182]. We and others have shown the importance of the phagocytic NADPH oxidase (NOX2) in microglial activation and the correlated production of pro-inflammatory factors, along with chronic neuronal cell loss and associated neurological dysfunction, after TBI or SCI [175,183-186]. Depletion or inhibition of NOX2 reduces reactive oxygen species (ROS) production and shifts the microglia/macrophage polarization balance toward the anti-inflammatory phenotype after neurotrauma [187][188][189], leading to neuroprotection. In a mouse SCI model, the combined use of the nonspecific NOX inhibitor apocynin and the AMPA receptor inhibitor NBQX resulted in reduced lesion volume, increased preservation of white matter, and a greater overall improvement in functional recovery out to 6 weeks post-injury [190]. Acute systemic inhibition of NOX2 by NOX2ds-tat (a peptide that specifically inhibits NOX2) can result in long-term alterations in function and microglial activity after SCI [186]. We have recently reported [188] that constitutive depletion or systemic inhibition of NOX2 significantly reduced mechanical/thermal cutaneous hypersensitivity and motor dysfunction after moderate contusion SCI at T10 in male mice. Thus, NOX2 signaling may be one of the key mechanisms of posttraumatic neuroinflammation after brain or spinal cord insults that can be effectively targeted by NOX2 inhibitors or gene knockout in TBI or SCI. Whether NOX2 activation contributes to subsequent brain inflammation and neurodegeneration following SCI is an intriguing question for future investigation. Furthermore, targeting CNS-resident microglial populations has been pursued as a potential therapeutic strategy for various neurodegenerative diseases [191], including neurotrauma, but most such approaches have lacked specificity and/or shown only modest effectiveness in decreasing adult microglia. Genetic deletion of CX3CR1, a microglial chemokine receptor, promotes recovery after SCI, but this receptor is also highly expressed by infiltrating macrophages [192,193]. Recently developed transgenic mice (CX3CR1 CreER/+ :R26 iDTR/+ ) permit depletion of resident microglia in the CNS but not blood CX3CR1+ cells [194]. Additionally, microglia are dependent on colony-stimulating factor 1 receptor (CSF1R) signaling for survival in the healthy adult brain [195]. Administration of CSF1R antagonists results in the rapid and continued elimination of virtually all microglia from the CNS, without significant effects on peripheral macrophage populations [195]. Mice lacking microglia after this approach are healthy and viable, and show no deleterious effects. Elimination of microglia is highly beneficial in CNS disease models [196][197][198][199], suggesting that CSF1R antagonists can be effective therapeutically for various CNS disorders associated with neuroinflammation.
Crucially, CSF1R antagonists are in clinical trials for various cancers, and thus these findings may be more readily translatable.

Conclusions and Perspectives

Across studies in humans and experimental models, SCI-induced cognitive and mood disorders are consistently observed. fMRI of the spinal cord reveals a reorganization of cortical networks in the brain following SCI in humans, including impairments in distributed cortical/subcortical networks that are engaged in information processing functions [9,200]. Recent preclinical work has begun to elucidate the underlying mechanisms of cognitive impairments following SCI, such as CCL21 release, neuroinflammation, and cell cycle pathways. However, the full picture of the connection between SCI and brain circuit dysfunction is yet to be uncovered. For example, anterograde and retrograde transport mechanisms are suggested to cause brain circuit dysfunctions after SCI, but the exact mechanisms involved are not yet understood. In summary, cognitive dysfunction after SCI may reflect a combination of biochemical, physiological, and behavioral alterations induced by SCI. Although potential clinical confounding factors include concomitant brain injury, psychological abnormalities, and cardiovascular and sleep disorders, growing evidence demonstrates that SCI induces functional changes in the brain. Thus, it is important to view SCI as a brain degenerative disease, in addition to the traditional understanding of SCI as a traumatic event. Future SCI rehabilitation efforts should better emphasize and examine the potential cognitive changes and mood disorders following SCI. Similarly, preclinical studies should further address these important translational issues when examining injury mechanisms and therapeutic approaches to SCI.
When Molecular Magnetism Meets Supramolecular Chemistry: Multifunctional and Multiresponsive Dicopper(II) Metallacyclophanes as Proof-of-Concept for Single-Molecule Spintronics and Quantum Computing Technologies?

Molecular magnetism has made a long journey, from the fundamental studies on through-ligand electron exchange magnetic interactions in dinuclear metal complexes with extended organic bridges to the more recent exploration of their electron spin transport and quantum coherence properties. Such a field has witnessed a renaissance of dinuclear metallacyclic systems as new experimental and theoretical models for single-molecule spintronics and quantum computing, due to the intercrossing between molecular magnetism and metallosupramolecular chemistry. The present review reports a state-of-the-art overview as well as future perspectives on the use of oxamato-based dicopper(II) metallacyclophanes as promising candidates to make multifunctional and multiresponsive, single-molecule magnetic (nano)devices for the physical implementation of quantum information processing (QIP). They incorporate molecular magnetic couplers, transformers, and wires, controlling and facilitating the spin communication, as well as molecular magnetic rectifiers, transistors, and switches, exhibiting a bistable (ON/OFF) spin behavior under external stimuli (chemical, electronic, or photonic). Special focus is placed on the extensive research work done by Professor Francesc Lloret, an outstanding chemist, excellent teacher, best friend, and colleague, in recognition of his invaluable contributions to molecular magnetism on the occasion of his 65th birthday.

Introduction and Background: Molecular Magnetism Meets Metallosupramolecular Chemistry for Single-Molecule Spintronics and Quantum Computing

The metallosupramolecular chemistry term was coined by Constable in 1994 to describe an emerging research area in the field of supramolecular chemistry [1,2].

Detecting the response of the spins of a single magnetic molecule to an external stimulus and, by using such a platform (in the form of well-established stimulus-response correlations), being able to implement quantum logic memory capabilities, is the key to applications in single-molecule spintronics and quantum computing, according to Sanvito [21]. Two unique examples of divanadium(IV) complexes which have been proposed as prototypes of molecular magnetic transistors (MTs) and molecular quantum gates (QGs) for the physical implementation of quantum information processing (QIP) illustrate this idea [93][94][95][96]. On the one hand, a dual electroswitching (ON/OFF) magnetic behavior upon one-electron metal reduction and oxidation of the trans-diaminomaleonitrile-bridged divanadium(IV) complex of formula V2(μ-C4N4)(CN)4(tmtacn)2 (tmtacn = N,N′,N″-1,4,7-trimethyl-1,4,7-triazacyclononane) has been reported by Long et al., as shown in Figure 1 [93]. The magnetic bistability responsible for the MT behavior in this electron spin-based system would arise from the conversion between the antiferromagnetically coupled V IV 2 complex with an S = 0 ground state (OFF) and the mixed-valence, ferromagnetically coupled V III,IV 2 or paramagnetic V IV,V 2 species, possessing S = 3/2 and 1/2 ground states (ON), respectively [94]. The electron exchange (EE) interactions between the V IV (SV = 1/2) ions through the C4N4 4− bridge account for the strong antiferromagnetic coupling in the S = 0 V IV 2 neutral molecule (Figure 1, middle) [93].
In contrast, the very strong ferromagnetic coupling in the reduced S = 3/2 V III,IV 2 anion results from the double exchange (DE) interactions between the V III (SV = 1) and V IV (SV = 1/2) ions due to the delocalization of the added electron through the C4N4 4− bridge (Figure 1, left) [93]. On the other hand, the p-phenylenediamidocatecholate-bridged divanadyl(IV) metallacyclic complex of the cyclophane type of formula (Ph4P)4[(VO)2(μ-ppbacat)2] [ppbacat = N,N′-bis(2,3-dihydroxybenzoyl)-1,4-phenylenediamine] reported by Atzori et al. allows for the electron spin-mediated switching of the nuclear spins of each V IV ion, as shown in Figure 2 [96]. The QG behavior in this nuclear spin-based double quantum bit (qubit) system results from the fast electronic spin excitations within the S = 1 state promoted by the application of uniform electron paramagnetic resonance (EPR) pulses. The controlled entanglement between the nuclear spin-based qubits in this very weakly magnetically coupled V IV 2 complex is ultimately made possible by the large hyperfine coupling between the electron and nuclear spins of each V IV moiety (SV = 1/2 and IV = 7/2), as clearly found in the parent mononuclear vanadyl(IV) complex featuring long spin coherence times [95]. This related pair of mono- and dinuclear vanadyl(IV) complexes illustrates, in a certain manner, the transition from classic molecular (Werner) to modern supramolecular coordination chemistry by using binucleating aromatic diamidocatecholate ligands as bridges.

In addition, two related examples of dicopper(II) and copper(II)-organic radical complexes are known, featuring a chemo- and photoswitchable (ON/OFF) magnetic behavior between antiferro- or ferromagnetically coupled states (ON) and magnetically isolated ones (OFF) [97,98]. These two case studies of ligand-based chemo- and photo-active, bistable dynamic magnetic systems would constitute suitable candidates for single-molecule spintronics and quantum computing nanotechnologies [17]. On the one hand, the tweezer-type dicopper(II) complex of formula [Cu2(terpytbsalphen)] [H4terpy-tbsalphen = 6,6″-bis(4-ethenyl-N,N′-1,2-phenylene-bis(3,5-di-tert-butylsalicylideneimine)-2,2′:6′,2″-terpyridine] reported by Doisteau et al.
provides an elegant example of the mechanically assisted chemical switching of the magnetic coupling after coordination of the Zn II ion to the central terpy linker, to give the corresponding [ZnCu2(terpytbsalphen)]Cl2 species, as shown in Figure 3 [97]. A concomitant conformational change of the terpy-tbsalphen bridging ligand occurs, which is eventually responsible for the switching between the magnetically uncoupled (W-shaped) open isomer and the weakly antiferromagnetically coupled (U-shaped) closed isomer. The lack of through-bond EE interactions between the Cu II (SCu = 1/2) ions at such a long intermetallic distance (r = 21.4 Å) accounts for the negligibly small magnetic coupling observed in the open isomer (Figure 3, left). Otherwise, the presence of direct through-space EE interactions between the two parallel-stacked copper(II)-salphen fragments at a short intermetallic distance (r = 4.03 Å) is the origin of the weak but non-negligible antiferromagnetic coupling found in the closed isomer (Figure 3, right). On the other hand, the copper(II)-organic radical complex of formula Cu(hfac)2(phendaeNNO) (hfac = hexafluoroacetylacetonate and phendaeNNO = 1-[6-oxyl-3-oxide-4,4,5,5-tetramethylimidazolin-2-yl)-2-methylbenzothiophen-3-yl]-2-[6-(1,10-phenanthroline-2-yl)-2-methylbenzothiophen-3-yl]-3,3,4,4,5,5-hexafluorocyclopentene) reported by Takayama et al. constitutes an example of the photoswitching of the magnetic coupling by using a tailor-made coordinating group-substituted, photoactive nitronyl nitroxide (NNO) radical ligand, as shown in Figure 4 [98].
The reversible intramolecular photocycloaddition of the diarylethene-type photochromic linker that occurs after UV and visible light irradiation is responsible for the switching between the magnetically uncoupled open-ring (o) and the presumably weakly ferromagnetically coupled closed-ring (c) isomers of the Cu II -NNO radical species. The magnetic coupling in the open isomer is very weak, if not negligible, as expected because of the absence of through-bond EE interaction between the Cu II (S Cu = 1/2) ion and the NNO radical (S R = 1/2) (Figure 4, left). Otherwise, the fully conjugated π-electron system of the closed isomer substantially increases the magnitude of the EE interaction, giving rise to a weak but non-negligible ferromagnetic coupling due to the strict orthogonality of the magnetic orbitals of the Cu II ion and the NNO radical ligand (Figure 4, right).
The state-of-the-art in metallosupramolecular chemistry concerns the design and synthesis of novel classes of chemo-, electro- and photo-active, extended π-conjugated aromatic bridging ligands, which should be able to self-assemble with paramagnetic transition metal ions to give new multifunctional and multiresponsive metallacyclic complexes [7]. This is nicely exemplified in the work by Lloret and co-workers on the rich metallosupramolecular chemistry of a novel class of ligands bearing two oxamato donor groups separated by more or less rigid non-innocent, extended π-conjugated aromatic spacers, which transmit both magnetic and electronic coupling effects between the metal centers in efficient and switchable ways [99,100]. In this review, we summarize old and more recent achievements, as well as future perspectives, dealing with the ligand design strategy to control the nature and magnitude of the intramolecular magnetic coupling between distant metal centers, through both internal and external factors, in a diverse family of oxamato-based dicopper(II) metallacyclophanes, as illustrated in Scheme 1 [99,100]. These double-stranded dicopper(II) metallacyclic complexes of the cyclophane type, resulting from the self-assembly of dinucleating aromatic oxamato ligands with Cu II ions, include a variety of potentially chemo-, electro- and/or photoactive, extended π-conjugated organic spacers, such as polymethyl-substituted m- or p-phenylenes (Scheme 1a,b), m- or p-pyridines (Scheme 1c,d), oligo-p-phenylenes or oligo-p-phenylene-ethynes (Scheme 1e,f), stilbene or azobenzene (Scheme 1g), o-phenylene-ethylenes (Scheme 1h), oligo-α,α′- or oligo-β,β′-acenes (Scheme 1i,j), and 1,4- or 2,6-anthraquinones (Scheme 1k,l). Herein, we will highlight how this new class of multifunctional and multiresponsive metallosupramolecular complexes provides up-and-coming candidates as a proof-of-concept (POC) design in the development of molecular magnetic devices, such as wires and switches, for information processing and storage applications in the emerging fields of single-molecule spintronics and quantum computing.
Dinuclear Copper(II) Metallacyclophanes in the Proof-of-Concept (POC) Design of Molecular Magnetic Wires

The basic components of a molecular spintronic circuit are molecular magnetic wires (MWs), which would facilitate the magnetic communication between the spin carriers along the circuit [100,101]. MWs offer a new design concept for the transfer of information over long distances based on EE interactions and without current flow [102][103][104][105][106][107][108][109][110][111][112][113][114][115], in contrast to conventional charge transport-based electronic wires [116][117][118][119][120][121][122], as stated by Lloret [100]. Indeed, setting a long-range magnetic coupling between two distant spin centers connected by a long organic spacer that ultimately extends over infinite distances ("wire-like magnetic coupling") is the cornerstone. That being so, a perturbation induced by an externally applied magnetic field on the spin center located at the beginning of the wire generates a change on the spin center located at the end of it, as shown in Figure 5.

The transmission of spin information between the metal centers occurs through either σ- or π-pathways, depending on the nature of the linker. The σ-pathway is very efficient over short distances but nearly negligible for long organic spacers. In contrast, the π-pathway is more efficient over long distances. In this latter case, two different situations can be envisaged for long organic spacers, as shown in Figure 6.
The magnetic coupling between the metal centers is expected to decay continuously with the intermetallic distance in a more or less marked way, depending on the extended π-conjugated nature of the organic spacer. In contrast, a strong magnetic coupling between metal centers could be anticipated when the ligands have a unique polyradical character (with an open-shell singlet ground state) (Figure 6b). In this case, the unpaired electrons of the polyradical spacer act as intermediates in the transmission of spin information, thus providing two strongly spin-correlated metal centers. This situation is reminiscent of the hopping mechanism of electrical conduction over a wire [119]. The magnetic coupling in these polyradical systems is much stronger than that found for the non-polyradical ones (vide infra). More importantly, it remains more or less constant with the intermetallic distance beyond a certain length of the organic spacer (wire-like magnetic behavior).

Non-Polyradical Spacers

In a pioneering work, Ruiz and Cano demonstrated that the appropriate choice of the topology (substitution pattern) and the number of methyl substituents of the bridging ligand allows for controlling the nature and magnitude of the intramolecular magnetic coupling in oxamato-based dicopper(II) metallacyclophanes with polymethyl-substituted 1,3- and 1,4-phenylene spacers (Scheme 1a,b) [123-126]. This is appropriately expressed by the variation in the sign and magnitude of the magnetic coupling parameter (J) in the phenomenological spin Hamiltonian H = −J SA·SB (SA = SB = SCu = 1/2) [127]. Density functional theory (DFT) calculations provide a clear-cut answer to the relative importance of the spin delocalization and spin polarization mechanisms for the through-ligand EE interaction along these two series of oxamato-based dicopper(II) metallacyclophanes [100].
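For reference, and as a textbook result rather than anything specific to these compounds, the eigenvalues of this phenomenological Hamiltonian follow from the identity S_A·S_B = (1/2)[S(S+1) − S_A(S_A+1) − S_B(S_B+1)], where S is the total spin:

E(S) = −(J/2) [S(S+1) − S_A(S_A+1) − S_B(S_B+1)]

For S_A = S_B = 1/2 this gives E(S=1) = −J/4 and E(S=0) = +3J/4, so that the singlet-triplet energy gap is E(S=0) − E(S=1) = J. A positive J therefore corresponds to a triplet (ferromagnetic) ground state and a negative J to a singlet (antiferromagnetic) one, which is the sign convention behind the J values quoted throughout this review.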
In the parent oxamato-based dicopper(II) meta- and paracyclophanes (Scheme 1a,b, X = Y = Z = W = H), the distinct nature of the ground spin state can be nicely interpreted based on the concept of molecular ferro- and antiferromagnetic couplers (FCs and ACs), as illustrated in Scheme 2. In each case, the meta- and para-substituted phenylene spacers act as FCs and ACs, respectively, between the two Cu II ions, leading to either a triplet (S = SA + SB = 1) or a singlet (S = SA − SB = 0) ground spin state for the corresponding dicopper(II) meta- and paracyclophanes (J = 16.8 and −94 cm−1, respectively) [123,125]. In both cases, the perpendicular arrangement of the oxamate donor groups with respect to the central benzene ring allows a strong interaction of the magnetic orbitals of the Cu II ions with the π-electron system of the bridging ligands. This leads to an efficient spin polarization mechanism which accounts for the parallel or antiparallel spin alignments resulting from alternating spin densities on the m- and p-phenylene spacers with an even or odd number of carbon atoms, respectively [127]. In a more general way, this particular geometrical configuration will allow any chemical or physical action occurring in the π-electron system of the bridging ligands to have drastic repercussions on the electron spin configuration of the Cu II ions, as we will see in the next section.
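As a purely illustrative cartoon of this spin-polarization counting argument (our sketch, not part of the cited studies), alternating spin densities can be assigned along the shortest carbon path connecting the two oxamate-bearing positions of the spacer: an even number of bonds between the attachment atoms leaves the induced densities at both ends with the same sign (ferromagnetic coupling expected), whereas an odd number opposes them (antiferromagnetic coupling expected). A minimal Python version of this parity rule, using the standard bond counts of 2 for meta- and 3 for para-phenylene, reads:

# Toy parity rule for the spin-polarization mechanism in bridged dicopper(II) complexes.
# The bond count refers to the shortest path between the two carbon atoms that carry
# the oxamate donor groups; spin-density alternation means that an even count aligns
# the terminal densities (J > 0) and an odd count opposes them (J < 0).

def predicted_coupling(bonds_between_attachment_atoms: int) -> str:
    """Sign of J predicted by the alternation (parity) argument."""
    if bonds_between_attachment_atoms % 2 == 0:
        return "ferromagnetic (J > 0, triplet ground state)"
    return "antiferromagnetic (J < 0, singlet ground state)"

print("meta-phenylene (1,3; 2 bonds):", predicted_coupling(2))  # matches the metacyclophane
print("para-phenylene (1,4; 3 bonds):", predicted_coupling(3))  # matches the paracyclophane

This reproduces the observed signs of J for the meta- and paracyclophanes, but it is only a counting shortcut for the full spin polarization picture discussed above.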
Likewise, the antiferromagnetic coupling in the series of polymethyl-substituted oxamato-based dicopper(II) paracyclophanes continuously increases with the number of methyl substituents (J = −94, −124, and −144 cm−1, with x = 0, 1, and 4, respectively) (Scheme 1b) [125,126]. They thus act as a kind of adjusting screw in a putative molecular antiferromagnetic transformer (AT). The observed AT behavior indicates that the magnitude of the antiferromagnetic coupling along this series is mainly governed by electronic factors associated with the electron donor properties of the methyl group, as supported by DFT calculations [126].

In subsequent works, Cangussu and Julve, on the one hand, and Armentano and Lloret, on the other hand, provided further support for the occurrence of a spin polarization mechanism in the related oxamato-based dicopper(II) metallacyclophanes with 2,6-pyridine and 1,4-anthraquinone spacers (Scheme 1c,k) [128-131]. In each case, the ferro- or antiferromagnetic nature of the EE interaction is likely explained by the meta- or para-substitution pattern of the 2,6-pyridine and 1,4-anthraquinone spacers, respectively, as illustrated in Scheme 3.
In both cases, however, the magnitude of the ferro- and antiferromagnetic coupling for these novel oxamato-based dicopper(II) metapyridinophanes and paraanthraquinophanes (J = 7.9 and −84 cm−1, respectively) decreases when compared with their parent unsubstituted dicopper(II) meta- and paracyclophanes. This feature is likely explained by the reduction in the Lewis basicity of the amidate donor groups from the electron-poor 2,6-pyridine and 1,4-anthraquinone spacers, which causes a decrease in the metal-ligand covalency and thus of the electron spin delocalization and polarization effects on the bridging ligands, as supported by DFT calculations [128,131].

On the other hand, oxamato-based dicopper(II) metallacyclophanes with oligo-p-phenylene (OP) and oligo-p-phenylene-ethyne (OPE) spacers have been examined by Cano and Lloret as potential candidates to obtain molecular antiferromagnetic wires (Scheme 1e,f) [132-134]. In fact, OP and OPE spacers have been demonstrated to be really effective in mediating EE interactions between paramagnetic metal centers which are separated by relatively long distances in discrete metallacyclic entities, as supported by DFT calculations [132,134]. The EE interaction between the two Cu II ions decreases from the parent complexes with 4,4′-diphenylene and 4,4′-diphenylene-ethyne spacers (J = −8.7 and −3.9 cm−1, respectively) (Scheme 1e,f, with n = 1) to the longer homologues with 4,4′-terphenylene and 1,4-di(4-phenylethynyl)phenylene ones (J = −1.8 and −0.9 cm−1, respectively) (Scheme 1e,f, with n = 2) [132,134]. Indeed, the rather slow exponential decay of the antiferromagnetic coupling with the intermetallic distance along both series of oxamato-based dicopper(II) oligo-p-phenyleno- and oligo-p-phenylene-ethynophanes indicates that the EE interaction through these rigid rod-like aromatic spacers obeys a non-polyradical spin polarization mechanism, as illustrated in Scheme 4.

Interestingly, a much better magnetic communication between very distant metal centers is predicted for the series of oxamato-based dicopper(II) oligophenylethynylenophanes than for the parent dicopper(II) oligophenylenophanes (Scheme 4a,b), as reflected by the calculated values of the exponential decay factor (β = 0.31 and 0.35 Å−1) [132,134]. This feature clearly indicates that introducing an ethynylene group between the phenylene spacers does not interrupt the magnetic communication between the spins of the metal centers [133]. Instead, a strong orbital overlap occurs between the π-type orbitals of the para-substituted benzene rings across the carbon-carbon triple bonds due to the almost planar configuration of the OPE spacers, when compared to the slightly twisted configuration of the OP spacers. This situation is in agreement with time-dependent density functional theory (TD-DFT) calculations, which point out the linear decay of the π-π* transition energy with the HOMO-LUMO energy gap along this series of dicopper(II) oligophenylethynylenophanes [134].
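The exponential decay factor quoted above is most easily read through the usual phenomenological law |J(r)| ≈ |J_0| exp(−β r), with r the intermetallic distance. The short Python sketch below simply evaluates this law for the two β values given in the text (their assignment to the OPE and OP series is assumed from the order in which they are quoted); the reference coupling J_0 and the distances are illustrative placeholders, not data from the cited works.

import math

# Phenomenological distance dependence of the exchange coupling through a rigid
# rod-like spacer: |J(r)| ~ |J0| * exp(-beta * r).
# beta values are the calculated decay factors quoted in the text (OPE/OP assignment
# assumed from the sentence order); J0 and the distances below are made up.

def coupling(J0_cm1: float, beta_per_angstrom: float, r_angstrom: float) -> float:
    return J0_cm1 * math.exp(-beta_per_angstrom * r_angstrom)

beta_OPE, beta_OP = 0.31, 0.35   # A^-1
J0 = -100.0                      # cm^-1, assumed reference coupling

for r in (10.0, 20.0, 30.0):     # intermetallic distances in angstroms (assumed)
    print(f"r = {r:4.1f} A   J_OPE = {coupling(J0, beta_OPE, r):8.3f} cm^-1   "
          f"J_OP = {coupling(J0, beta_OP, r):8.3f} cm^-1")

The smaller β of the oligophenylethynylenophane series translates into a slower attenuation of |J| with distance, which is exactly the "better magnetic communication" referred to above.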
Polyradical Spacers

Oxamato-based dicopper(II) metallacyclophanes with flat-like oligo-α,α′- and oligo-β,β′-acene (OA) spacers have been investigated by Ruiz and Cano as unique examples of MWs (Scheme 1i,j) [135-137]. As such, they have found a moderately strong antiferromagnetic coupling between the two Cu II ions separated by relatively large intermetallic distances (J = −18.6 and −23.9 cm−1 with r = 8.3 and 12.5 Å, respectively) for the first members of these series with the shorter 1,8-naphthalene and 2,6-anthracene spacers (Scheme 1i,j, with n = 1 and 2, respectively) [137]. These results thus show that one nanometer is definitely not the upper limit for the observation of magnetic coupling in dicopper(II) complexes [138-140]. More importantly, they have predicted a wire-like magnetic behavior for these two series of oxamato-based dicopper(II) oligo-α,α′- and oligo-β,β′-acenophanes, regardless of the substitution pattern, as supported by DFT calculations [135]. This unprecedented wire-like magnetic behavior arises from the polyradical character of the longer OA spacers (n ≥ 3), as illustrated in Scheme 5.
This wire-like magnetic behavior is accompanied by a change from antiferro- to ferromagnetic coupling for the longer members of both series of oxamato-based dicopper(II) oligo-α,α′- and oligo-β,β′-acenophanes (Scheme 5a,b). Notably, DFT calculations predict a weak but non-negligible ferromagnetic coupling (J = 3.0 cm−1) between the two Cu II ions separated by a very large intermetallic distance (r ≈ 3 nm) through the decacene spacers in the series of β,β′-disubstituted OA spacers (Scheme 1j, with n = 9) [135]. The current efforts in our group are devoted to the preparation of dicopper(II) metallacyclophanes with longer tetracene and pentacene spacers (Scheme 1i,j, with n = 4 and 5) as unique prototypes of MWs for single-molecule spintronics.

Dinuclear Copper(II) Metallacyclophanes in the POC Design of Molecular Magnetic Switches

Molecular magnetic switches (MSs), which would allow for the interruption and restoration of the magnetic communication between the spin carriers, are also basic components of a molecular spintronic circuit [100,141].
MSs are archetypical examples of bistable dynamic systems presenting two separately stable equilibrium states (or two distinctly accessible states) having totally different magnetic properties, which can be interconverted in a reversible manner under some external stimuli. The external stimuli that are responsible for the magnetic switching behavior can be chemical, electronic, or photonic, among others, as occurs in conventional molecular electronic switches [142-148]. MSs formed by two localized spins whose magnetic communication can be switched by means of a chemical, redox, or photonic event ("chemo-, electro-, or photo-switching magnetic behavior") constitute the simplest molecules to be tested [93,94,97,98,149-165]. In principle, the spins of the metal centers would be antiferro- or ferromagnetically coupled in one of the states (ON), whereas they would be magnetically uncoupled in the other one (OFF), as shown in Figure 7. MSs offer an alternative design concept to encode binary information in their corresponding ON ("1") and OFF ("0") states, because of the switching of the magnetic coupling between the spin carriers, as stated by Lloret [100]. A transistor-like magnetic behavior at the molecular scale can thus be achieved, opening the way for the potential applications of MSs in quantum computing [94].

Other possibilities exist for the design of an MS, whereby the spins of the magnetic centers would be antiferromagnetically coupled in one of the states (OFF) and ferromagnetically coupled in the other one (ON), as shown in Figure 8. In this case, the spin alignment can be switched from antiparallel to parallel, or vice versa, through the action of a switchable magnetic coupler.
The idea behind this design concept is to be able to invert the spin alignment of the spin-polarized current along the molecular circuit by means of an electric potential ("threshold voltage"), so that a rectifier-like magnetic behavior on the molecular scale can be achieved [166-168]. Molecular magnetic transistors (MTs) and molecular magnetic rectifiers (MRs) thus appear as particular cases of MSs. A large variety of factors can act on these MSs in a reversible way, such as pH, electrochemical potential, or light irradiation, leading to chemo-, electro- or photo-switching magnetic behaviors, as we will see hereafter.

Chemoactive Spacers

Pereira and Julve have recently reported a unique, pH-triggered, structural and magnetic switching behavior for a related pair of oxamato-based dicopper(II) metallacyclic complexes with the flexible 4,4′-biphenylethylene spacer (Scheme 1h). It can adopt either syn (in alkaline media) or anti conformations (in slightly acidic media) depending on the protonation degree, as illustrated in Scheme 6 [169-171]. A reversible syn-anti conformational change of the ligand occurs in aqueous solution upon protonation of the two amide groups in the double-stranded dicopper(II) metallacyclic complex of the cyclophane type [169].
This gives rise to the corresponding bis(monohydrogenoxamato)-bridged dimer of single-stranded copper(II) metallacyclic species by free rotation around the central single carbon-carbon bond of the 2,2′-ethylenediphenylene spacer. This bistable pair of dicopper(II) metallacyclic complexes shows a switching from non-interacting spins (OFF) for the deprotonated syn isomer to parallel spin alignment (ON) for the protonated anti isomer. In the latter case, the weak ferromagnetic coupling (J = 2.93 cm−1) is due to the accidental orthogonality of the magnetic orbitals of the two Cu II ions through the out-of-plane exchange pathway involving the axial carboxylate groups. In the former case, the extended non-conjugated π-pathway of the 2,2′-ethylenediphenylene spacers connecting the two Cu II ions is unable to mediate any significant EE interaction, as supported by DFT calculations [169]. Interestingly, this multifunctional dicopper(II) complex can be easily anchored onto niobium oxyhydroxide or adsorbed on hybrid silica-based porous materials [170,171], thus opening the way for future applications as magnetic nanodevices in single-molecule spintronics.

Electroactive Spacers

The aforementioned oxamato-based dicopper(II) paracyclophanes can be considered appealing candidates for MSs [125,126]. Because of the redox-active ("non-innocent") nature of their polymethyl-substituted 1,4-phenylene spacers, the permethylated dicopper(II) paracyclophane exhibits a unique redox-triggered magnetic rectifying behavior [125]. In this case, the magnetic bistability brings about a change from antiparallel (OFF) to parallel (ON) alignments of the spins of the two Cu II ions. This behavior is ascribed to the polarization induced by the π-stacked delocalized monoradical ligand, which is generated upon one-electron oxidation of the double tetramethyl-p-phenylenediamidate bridging skeleton, as illustrated in Scheme 7.
Our group is currently investigating a novel series of oxamato-based dicopper(II) metallacyclophanes with flat-like electroactive 1,4- or 2,6-anthraquinone (OAQ) spacers (Scheme 1k,l) as new examples of MSs. In this latter case, a complete electrochemical reversibility may be reached upon four proton/electron-coupled reductions and oxidations in the dicopper(II) 2,6-anthraquinophane, as reported earlier for the related dicopper(II) 1,4-anthraquinophane, which acts as a prototype of a molecular magnetic capacitor (MC) [130,131]. In the former case, the spins of the Cu II ions would be antiferromagnetically coupled through the extended π-conjugated, fully reduced dihydroanthraquinolate (ON), whereas they are magnetically uncoupled across the non-conjugated π-pathway of the 2,6-anthraquinone spacers (OFF), as illustrated in Scheme 8.

Photoactive Spacers

The oxamato-based dicopper(II) 1,5-naphthalenophane and 2,6-anthracenophane exhibit a photo-triggered magnetic transistor behavior [136,137]. The reported photomagnetic bistability arises from the more or less complete and thermally reversible conversion of the weakly antiferromagnetically coupled dicopper(II) oligoacenophane (ON) to the corresponding magnetically uncoupled dicopper(II) photodimer product (OFF) resulting from the intramolecular [4 + 4] photocycloaddition reaction of the two facing oligoacene spacers under UV light irradiation and heating, as illustrated in Scheme 9. This intramolecular ("pseudo-bimolecular") photocycloaddition reaction constitutes a unique example of coordination-driven self-assembly for the supramolecular control of photochemical reactivity and photophysical properties in the solid state [99]. The lower photochemical efficiency and thermal reversibility of the dicopper(II) 1,5-naphthalenophane when compared to those of the 2,6-anthracenophane derivative agree with the different reactivity toward photodimerization of anthracene and naphthalene themselves [137]. However, their photochemical efficiency and thermal reversibility are comparable to those reported for the intramolecular [4 + 4] photocycloaddition of purely organic naphthalenophane and anthracenophane analogues, thus reflecting the importance of the entropic effects associated with the cyclic or metallacyclic structures.
A novel series of oxamato-based dicopper(II) metallacyclophanes with rod-like photoactive oligo(p-phenylenevinylene) (OPV) and oligo(p-phenyleneazo) (OPA) spacers, such as 4,4′-stilbene or 4,4′-azobenzene (Scheme 1g), is currently under investigation by some of us. As such, a complete and totally reversible photochemical transformation may be reached after successive irradiation, with UV and visible light, of the dicopper(II) 4,4′-stilbenophane and 4,4′-azobenzenophane featuring a cis-trans geometric isomerization, as illustrated in Scheme 10. The spins of the Cu II ions are antiferromagnetically coupled through the planar trans-azobenzene spacers (ON state) in the former case, whereas they are magnetically uncoupled across the non-planar cis-azobenzene spacers (OFF state) in the latter one, because of the steric repulsion between the ortho hydrogen atoms of the two benzene rings.

Conclusions and Outlook: Metallosupramolecular Chemistry Acts as a Rail from Single-Molecule Spintronics to Quantum Computing

Metallosupramolecular complexes are particularly attractive systems for the construction of molecular magnetic devices for highly integrated molecular spintronic circuits. The chemical, electro- and photo-chemical reactivities of the metal centers and/or the ligand spacers, their ability to respond to changes in chemical and electrochemical potential or photoexcitation, and the geometrical features that allow positioning of substituent groups allow for the exploration of a vast number of molecular magnetic wires (MWs) and switches (MSs) [19-28].
The extension of the control of charge localization (or current flow) through MWs and MSs in response to one or several input signals is mandatory in the development of molecule-based logic circuits for quantum computing. The controlled entanglement through a very weak (but non-negligible) switchable magnetic interaction between a pair of highly coherent spin carriers acting as individual quantum bits (qubits) is crucial for the physical implementation of quantum information processing (QIP) through quantum gates. In contrast to current computational methodology based on classical bits, QIP may offer a new paradigm for performing quantum logic operations that exploits the quantum coherence properties of electron spin-based qubits forming part of a quantum gate in future quantum computers [29-45].

The present review provides a brief overview of the evolution of the field of oxamato-based dicopper(II) metallacyclophanes, from the simple systems with ferro- and antiferromagnetic interactions, depending on the substitution pattern of the m- and p-phenylene, 2,6-pyridine or 1,4-anthraquinone spacers, to a plethora of multifunctional magnetic systems, playing with the chemical, electro- and photo-chemical properties of the oligo-o-phenylethylene, oligo-p-phenylene or p-phenylethyne, and oligo-α,α′- or β,β′-acene spacers (see Scheme 1). Oxamato-based dicopper(II) metallacyclophanes thus constitute a unique class of metallosupramolecular complexes because they combine the inherent magnetic properties of the intervening metal centers with the chemical, electro- and photo-chemical reactivity of the organic bridging ligands. Besides their use as testing grounds for fundamental research on electron exchange (EE) magnetic interactions through extended π-conjugated aromatic ligands (both experimentally and theoretically), they have emerged as ideal model systems for the proof-of-concept design of molecule-based multifunctional magnetic devices in the emerging fields of single-molecule spintronics and quantum computing, as shown in Figure 9 [99,100].

Oxamato-based dicopper(II) metallacyclophanes thus appear as suitable candidates for the study of spin-dependent electron transport (ET) across single-molecule junctions when they are connected to two gold electrodes through their carboxylate-oxygen donor atoms (Figure 9a), as proposed earlier [99]. A topological control of the ET can be envisaged in dicopper(II) meta- and paracyclophanes (Scheme 1a,b). The parallel spin alignment observed in the former case would lead to a greater electrical conductance than the antiparallel one found in the latter case [172,173]. Alternatively, the application of a moderate magnetic field that reverses the spin alignment from antiparallel to parallel would allow for a magnetic rectification of the ET in dicopper(II) oligophenylenophanes and oligophenylethynylenophanes (Scheme 1e,f). In both cases, the energy difference between the parallel (triplet) and antiparallel (singlet) spin states can be chemically tuned by the length of the spacer [119,120]. Likewise, the electrical rectification of the ET seems feasible in dicopper(II) anthraquinophanes (Scheme 1k,l). A change of the spin alignment from antiparallel to parallel is predicted after the successive reduction of the two anthraquinone spacers under an applied voltage [166].
Another possibility is to achieve an optical switching of the ET through molecular wires in dicopper(II) oligoacenophanes (Scheme 1i,j). In this case, the photocycloaddition of the oligoacene spacers by UV light irradiation would lead to the interruption of the electrical conductance, and thermal relaxation by heating would restore it [174-177].

One further step in this area will be the use of oxamato-based dicopper(II) metallacyclophanes as basic logic units (quantum gates) for the future generation of quantum computers (Figure 9b). The recent work by Rüffer and Kataev on the quantum coherence (QC) properties of related mononuclear oxamato-containing copper(II) complexes is particularly encouraging, as it shows that sufficiently long spin coherence times for quantum gate operations can be achieved in these simple systems [178]. However, this is not an easy task, because it implies the rational design of very weakly interacting ("entangled") and potentially switchable molecules before applications could be envisaged, as illustrated by the impressive work of Winpenny on supramolecular arrays of metal rings [75-79]. In this respect, the two novel classes of electro- and photo-active, oxamato-based dicopper(II) 4,4′-azobenzenophanes and 2,6-anthraquinophanes (Scheme 1g,l) are particularly interesting as prototypes of double qubit-based quantum gates. In both cases, each Cu II ion (SCu = 1/2) constitutes a two-state magnetic quantum system represented by the pair of mS = ±1/2 states, which may be viewed as a basic unit of quantum information or a "single" qubit, the quantum analogue of the classical bit with "0" and "1" values. That being so, the two potentially switchable Cu II ions may function as a "double" qubit with up to four degenerate |00>, |01>, |10>, and |11> quantum states. In fact, a completely reversible electro- or photo-magnetic switching could be achieved in the resulting dicopper(II) 2,6-anthraquinophane and 4,4′-azobenzenophane after redox cycling or under irradiation with UV and visible light (Schemes 8 and 10a, respectively). The spins of the Cu II ions are antiferromagnetically coupled through the extended π-conjugated 2,6-anthraquinolate and trans-azobenzene spacers ("coupled" state), whereas they interact very weakly across the non-extended 2,6-anthraquinone and non-planar cis-azobenzene spacers ("entangled" state).
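As a minimal numerical sketch of this two-qubit picture (generic two-spin algebra only, not a model of any particular compound; the J values below are arbitrary ON/OFF placeholders), the exchange Hamiltonian H = −J SA·SB can be written as a 4 × 4 matrix in the product basis of the two mS = ±1/2 states, labeled 0 and 1. Diagonalizing it shows that the singlet-triplet splitting equals |J| and collapses to zero when the coupling is switched off, which is the magnetic fingerprint of the "coupled" and weakly interacting states discussed above.

import numpy as np

# Two exchange-coupled S = 1/2 ions, H = -J * S_A . S_B, in the |00>, |01>, |10>, |11>
# product basis (hbar = 1). Generic two-spin sketch; the J values are ON/OFF placeholders.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def dimer_hamiltonian(J):
    return -J * sum(np.kron(s, s) for s in (sx, sy, sz))

for label, J in (("ON  (coupled)", -10.0), ("OFF (uncoupled)", 0.0)):
    energies = np.linalg.eigvalsh(dimer_hamiltonian(J))   # sorted ascending
    gap = energies[-1] - energies[0]                      # singlet-triplet splitting = |J|
    print(f"{label}: J = {J:6.1f}, spectrum = {np.round(energies, 3)}, gap = {gap:.3f}")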
Indeed, this review on the use of oxamato-based dicopper(II) metallacyclophanes as prototypes of multifunctional and multiresponsive molecular magnetic devices for single-molecule spintronics and quantum computing is more a proposal than a real achievement. This is rightly expressed in the voice of the most famous Portuguese poet, Fernando Pessoa: "Não sou nada./Nunca serei nada./Não posso querer ser nada./À parte isso, tenho em mim todos os sonhos do mundo" ("I am nothing./I shall never be anything./I cannot wish to be anything./Apart from that, I have within me all the dreams of the world"; extracted from the poem Tabacaria by Álvaro de Campos). In the years to come, we would like to accomplish some of these dreams concerning the physical realization of QIP in future quantum computers. In this respect, this review is also meant to be a tribute to Professor Francesc Lloret, who introduced us to the fantastic experience of molecular magnetism, on the occasion of his 65th birthday, hoping that he will continue to pursue his dreams, from both professional and personal viewpoints.
Patterns of genetic differentiation at MHC class I genes and microsatellites identify conservation units in the giant panda

Background: Evaluating patterns of genetic variation is important to identify conservation units (i.e., evolutionarily significant units [ESUs], management units [MUs], and adaptive units [AUs]) in endangered species. While neutral markers could be used to infer population history, their application in the estimation of adaptive variation is limited. The capacity to adapt to various environments is vital for the long-term survival of endangered species. Hence, analysis of adaptive loci, such as the major histocompatibility complex (MHC) genes, is critical for conservation genetics studies. Here, we investigated 4 classical MHC class I genes (Aime-C, Aime-F, Aime-I, and Aime-L) and 8 microsatellites to infer patterns of genetic variation in the giant panda (Ailuropoda melanoleuca) and to further define conservation units.

Results: Overall, we identified 24 haplotypes (9 for Aime-C, 1 for Aime-F, 7 for Aime-I, and 7 for Aime-L) from 218 individuals obtained from 6 populations of giant panda. We found that the Xiaoxiangling population had the highest genetic variation at microsatellites among the 6 giant panda populations and higher genetic variation at Aime-MHC class I genes than other, larger populations (Qinling, Qionglai, and Minshan populations). Differentiation index (FST)-based phylogenetic and Bayesian clustering analyses for Aime-MHC-I and microsatellite loci both supported that most populations were highly differentiated. The Qinling population was the most genetically differentiated.

Conclusions: The giant panda showed a relatively higher level of genetic diversity at MHC class I genes compared with endangered felids. Using all of the loci, we found that the 6 giant panda populations fell into 2 ESUs: Qinling and non-Qinling populations. We defined 3 MUs based on microsatellites: Qinling, Minshan-Qionglai, and Daxiangling-Xiaoxiangling-Liangshan. We also recommended 3 possible AUs based on MHC loci: Qinling, Minshan-Qionglai, and Daxiangling-Xiaoxiangling-Liangshan. Furthermore, we recommend that a captive breeding program be considered for the Qinling panda population.

Background

Evolutionary and conservation biologists are concerned with how genetic variation is maintained within populations of endangered species, especially within small and isolated populations [1]. The assumption is that a decrease in genetic variation and a lack of exchange between isolated populations increase the likelihood of extinction by reducing the population's ability to adapt to changing environmental conditions [2]. Generally, biologists use neutral markers (microsatellites) to estimate genetic variation in threatened populations [3,4]. Although variation at neutral markers can provide information about dispersal patterns [5], population connectivity [6], and population history (past demographic expansions or contractions) [2], thus informing decisions regarding the recognition of distinct management units (MUs) [7], these markers cannot provide information on adaptive variation [8]. Such information is necessary in order to designate adaptive units (AUs) for conservation purposes [9]. Hence, adaptive loci should be used in concert with neutral markers to facilitate optimal management decisions [9].
In this study, we consider patterns of variation in major histocompatibility complex (MHC) genes in combination with neutral markers in an effort to understand more about units of conservation associated with the giant panda, Ailuropoda melanoleuca [10]. The MHC genes encode molecules involved in immune responses and can be classified into class I and class II genes [11]. Class I genes are mainly associated with intracellular pathogens, such as viruses and protozoa, while class II genes are in charge of extracellular pathogens [12]. MHC class I genes can be further grouped as either classical (class Ia) or nonclassical (class Ib) based on their polymorphisms, expression levels, and functions [13]. Class Ia genes are involved in presenting endogenous peptides to CD8+ cells [14], while class Ib loci have various functions associated with control of natural killer (NK) cell activation [15], successful reproduction [16], and recognition of antigenic lipids [17]. MHC genes (either class I or class II) are highly polymorphic, especially within their antigen-binding region [18]. It is generally believed that balancing selection, which includes overdominant selection and negative frequency-dependent selection, maintains MHC diversity [10]. Such variation has been hypothesized to enhance mechanisms of mate choice as well as to provide an adaptive strategy for dealing with new pathogens [19].

The giant panda (Ailuropoda melanoleuca) is a unique endangered species in China. At present, wild populations comprise only about 1500 giant pandas in 6 isolated mountain ranges of China (Figure 1): Qinling (QLI), Minshan (MSH), Qionglai (QLA), Daxiangling (DXL), Xiaoxiangling (XXL), and Liangshan (LSH) [20,21]. These populations are isolated by several rivers (i.e., the Hanjiang, Jianglingjiang, Minjiang, and Dadu rivers; Figure 1) and many roads [21]. The QLI population has been shown to be genetically divergent [22,23], but there is disagreement about whether this population represents a subspecies or a distinct evolutionarily significant unit (ESU) [23]. According to the fossil record, the giant panda originated 3 million years ago (in the early Pleistocene) and was widely distributed from Zhoukoudian in China to northern Burma and northern Vietnam during the middle and late Pleistocene [20]. Seven functional MHC class II genes have been isolated in the giant panda [24,25], and locus-specific genotyping techniques have been established [26,27]. Studies on the MHC class II loci identified moderate levels of allelic diversity and indicated that natural selection and intragenic recombination maintain genetic diversity at MHC class II loci [27]. However, the giant panda appears susceptible to parasites [28,29] as well as several types of viruses associated with domestic animals [30,31]. There is still a need for further investigation of genetic variation at MHC class I genes in this endangered species. Recently, Zhu et al. [32] isolated 6 MHC class I genes (i.e., Aime-C, Aime-F, Aime-I, Aime-K, Aime-L, and Aime-1906) from the giant panda, including 4 class Ia genes (Aime-C, Aime-F, Aime-I, and Aime-L) and 2 class Ib genes (Aime-K and Aime-1906), and established locus-specific genotyping techniques for each class Ia gene. Therefore, this pilot study provided an opportunity to examine the adaptive variation of MHC class I genes in structured giant panda populations on a large geographical scale.
In the present study, our aims were to: (1) assess patterns of genetic variation at 4 classical MHC class I genes and 8 microsatellites across 6 extant giant panda populations; and (2) estimate patterns of genetic differentiation among populations and identify conservation units based on both MHC and microsatellite data.

The number of haplotypes within the 4 classical Aime-MHC class I loci varied among the wild populations, ranging from 17 in QLI to 22 in XXL and LSH (Table 1). Some of these haplotypes were highly abundant in all of the populations (e.g., Aime-I*02 and Aime-L*02 and 03), while others were detected at very low frequencies and/or only in certain populations (e.g., Aime-C*01, 04, and 09; Aime-I*05 and 07; and Aime-L*05 and 07). Estimates of heterozygosity revealed higher than expected heterozygosities for DXL and XXL at Aime-C, for LSH at Aime-I, and for QLI, MSH, and DXL at Aime-L. In contrast, other population-locus combinations exhibited lower than expected levels of heterozygosity (Table 1). We only observed significant deviations from Hardy-Weinberg equilibrium (HWE) at the Aime-I locus in the smallest population, XXL, and at Aime-L in the QLI population; the other combinations all obeyed HWE (Table 1). Different levels of H E were found among the wild populations at each locus (Aime-C: 0.711-0.812; Aime-I: 0.734-0.832; and Aime-L: 0.740-0.843). Allelic richness (AR) also differed at the 3 polymorphic loci, with Aime-C ranging from 5.736 to 7.519, Aime-I ranging from 4.842 to 6.613, and Aime-L ranging from 4.624 to 6.935 (Table 1). Among the 6 populations across the 3 polymorphic MHC loci, the mean H E was 0.731-0.816 and the mean AR was 5.118-6.627 (Table 1). All 15 pairwise F ST comparisons revealed that there was significant genetic divergence among all populations, with the exception of MSH and QLA (P > 0.05; see Additional file 1: Table S1). The NJ tree indicated that the giant panda populations fell into 3 clusters. First, MSH and QLA clustered together with 71% bootstrap support (Figure 2A). Second, the DXL, XXL, and LSH populations clustered together with a weak support of 34% (Figure 2A). Finally, QLI formed the third cluster. F ST values among the 3 clusters are shown in Table 2. Bayesian clustering analysis based on MHC loci also indicated strong subdivision, where the delta k showed 1 peak at K = 3 (see Additional file 2: Figure S1A). QLI (in yellow) was a separate cluster, with the other 2 clusters being MSH-QLA (in red) and DXL-XXL-LSH (in blue; Figure 3). Most of the individuals showed high admixture levels among the 3 clusters.

Microsatellite variation within and between populations

We identified 121 alleles across 8 microsatellite loci, ranging from 8 to 23 per locus (see Additional file 3: Table S2). Only QLA at Aime-3 and GP-4, XXL at Aime-10, and LSH at Aime-14 significantly deviated from HWE after Bonferroni correction (see Additional file 3: Table S2). Among the 6 wild populations, XXL showed the highest mean number of alleles (MNA), mean AR, mean H E , and mean polymorphic information content (PIC) (MNA = 10.8; AR = 8.324; H E = 0.856; PIC = 0.832). Effective population sizes (Ne values) were estimated for each population, but larger populations (i.e., MSH and QLA) had lower Ne values, which was not expected (see Additional file 3: Table S2). All 15 pairwise F ST comparisons revealed significant genetic differentiation among all pairwise populations, with the exceptions of DXL and XXL, and XXL and LSH (P > 0.05; see Additional file 1: Table S1).
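As a point of reference for the quantities reported above (a toy sketch only; the analyses in this study were of course performed with dedicated population-genetics software and estimators), expected heterozygosity and a simple Wright-type F ST can be obtained from allele frequencies as follows. The allele-frequency arrays are invented purely for illustration and do not correspond to any giant panda population.

import numpy as np

# Toy illustration of expected heterozygosity (Nei's gene diversity) and a simple
# F_ST = (H_T - H_S) / H_T at a single locus for two populations of equal size.
# Allele frequencies are invented for illustration only.

def expected_heterozygosity(p):
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

pop1 = [0.50, 0.30, 0.20]   # hypothetical allele frequencies, population 1
pop2 = [0.10, 0.60, 0.30]   # hypothetical allele frequencies, population 2

H_S = np.mean([expected_heterozygosity(pop1), expected_heterozygosity(pop2)])
H_T = expected_heterozygosity(np.mean([pop1, pop2], axis=0))   # pooled frequencies
F_ST = (H_T - H_S) / H_T

print(f"H_E(pop1) = {expected_heterozygosity(pop1):.3f}")
print(f"H_E(pop2) = {expected_heterozygosity(pop2):.3f}")
print(f"F_ST      = {F_ST:.3f}")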
The NJ tree showed that the 6 giant panda populations partitioned into 3 clusters. The first cluster contained MSH and QLA (78% bootstrap value), while the second cluster included DXL, XXL, and LSH (60% bootstrap value). QLI formed the third cluster (Figure 2B). [Figure 2 caption fragment: "... Table S3. Different levels of grey in the habitat distribution represent the 3 management units as suggested by this study (light grey, QLI; grey, MSH-QLA; dark grey, DXL-XXL-LSH)."] F_ST values among the 3 clusters are also shown in Table 2. Bayesian clustering analysis of microsatellite variation indicated the same 3 clusters as MHC (Figure 3 and Additional file 2: Figure S1B). Most of the individuals from the QLI cluster showed very low admixture levels, whereas individuals from the other 2 clusters showed high levels of admixture (Figure 3). The higher admixture levels suggested significant gene flow between the MSH, QLA, DXL, XXL, and LSH populations. Conversely, the low admixture levels demonstrated limited gene flow between QLI and the other populations, indicating that QLI may be suffering from strong genetic isolation. The STRUCTURE plot suggested nearly unidirectional migration from QLI to MSH-QLA (Figure 3), as evidenced by the large proportion of individuals in MSH-QLA that contained substantial QLI heritage (yellow) and the small proportion of individuals in QLI that contained substantial MSH-QLA heritage (red). This movement from QLI, but not into QLI, was in good agreement with previous results [22,33], which showed that the giant panda experienced 2 bottlenecks, the first serious one resulting in a single refuge, QLI, and the second causing 2 refuges, QLI and XXL. The unidirectional movement from QLI to MSH-QLA indicated range expansion following the bottlenecks. Mantel tests revealed that patterns of MHC class I genes and microsatellites were not correlated (r = 0.520, P = 0.132), indicating that patterns of MHC class I diversity were not strongly influenced by the effects of stochastic micro-evolutionary processes (migration and drift). Isolation by distance was more obvious for microsatellites than for MHC class I genes (microsatellites: r = 0.703, P = 0.022; MHC: r = 0.517, P = 0.017). Genetic variation levels of Aime-MHC class I genes In this study, we identified 24 exon 2-3 haplotypes for the 4 classical Aime-MHC class I genes in 218 wild individuals, averaging 6 haplotypes per locus. In our previous study [32], we detected 13 exon 2 and 16 exon 3 sequences, which formed 17 haplotypes in the Chengdu captive population, revealing that most of the diversity found in wild populations is conserved in captive populations. Compared with the brown bear, the giant panda has a similar number of, or fewer, MHC class I alleles: a total of 37 alleles (2 pseudoalleles) were observed from at least 5 loci in 234 brown bear individuals, averaging 7 alleles per locus. However, compared with other endangered felids, Aime-MHC class I genes maintain a relatively high level of genetic diversity. For example, a total of 10 alleles (9 functional alleles and 1 pseudo-allele) were detected from 4 putative MHC class I loci in 108 Namibian cheetahs, averaging 2.5 alleles per locus [34]. Similarly, 13 putatively functional alleles and 1 pseudo-allele were found from at least 4 MHC class I loci in 16 highly endangered Indian Bengal tigers [35]. Furthermore, Aime-MHC class II genes also showed higher polymorphism relative to other endangered species [27].
These findings suggested that the giant panda has relatively high genetic variation at its MHC genes, which is necessary for coping with changing environmental conditions (e.g., pathogens). Genetic variation within populations According to a survey conducted by the State Forestry Administration of China [21], XXL occupies the smallest habitat area and includes only 32 giant pandas. Interestingly, XXL harboured more haplotypes, higher AR, and higher expected heterozygosity at MHC class I genes than the larger mountain populations, i.e., MSH, QLA, and QLI (Table 1). Our microsatellite data further revealed that XXL had the highest genetic variation among all of the populations in terms of AR, expected heterozygosity, and number of alleles. Furthermore, a recent MHC class II study revealed that XXL has the greatest number of alleles within wild giant panda populations [33]. These results, regardless of adaptive or neutral markers, suggested that the XXL population may have arisen from an ancestral population with a higher level of genetic diversity, a conclusion also supported by the MHC class II results [33]. Although the MSH population covers the largest habitat area and contained 708 individuals as of the last survey round, it did not show the highest level of genetic variation, a pattern also reflected in the N_e estimates. Microsatellite-based N_e estimates for the 6 populations indicated that MSH had an N_e of 90.5, smaller than that of the majority of giant panda populations (see Additional file 3: Table S2). ESUs, MUs, and AUs in giant panda populations Population genetics data are useful for identifying evolutionarily significant units (ESUs), management units (MUs), and adaptive units (AUs) in endangered species [9,36]. In this study, we first defined ESUs in giant pandas in order to protect evolutionarily important groups. Second, we identified MUs within each ESU for management purposes. Finally, we looked for possible AUs to help the government make management decisions. The MHC and microsatellite variation in this study revealed that the 6 giant panda populations formed 3 distinct groups. Based on these data, we recommend that the 3 groups be treated as 3 AUs but grouped into 2 ESUs, with one of the ESUs comprising 2 MUs. The QLI population should be viewed as a separate ESU. Funk et al. [9] defined an ESU as "a population or group of populations that warrant separate management or priority for conservation because of high genetic and ecological distinctiveness," and they recommended using both neutral and adaptive markers to define ESUs, since neutral and adaptive processes both shape ESUs. Therefore, our recommendation is based on our present genetic data and previous ecological and molecular genetics studies [22,23,37,38]. Our NJ trees based on microsatellite and MHC class I genes revealed that QLI formed a cluster distinct from the other populations, which is consistent with our STRUCTURE analysis and previously reported genomic, microsatellite, and DNA fingerprinting data [22,23,37]. The QLI population is currently isolated from the other populations [20]. Additionally, Wan et al. [38] revealed that QLI giant pandas have smaller skulls, larger molars, and different pelage color compared to individuals from other populations; these differences may be due to different habitat characteristics in QLI and the other mountains. Based on DNA fingerprint and morphological data, Wan et al. [22] suggested that QLI should be recognized as a separate subspecies. However, whether this population represents a subspecies or a distinct ESU is still controversial [23].
Because our evidence indicated that there is significant genetic and ecological distinctiveness between QLI and the other 5 southern populations, we propose that QLI should be a separate ESU and should be monitored and managed separately. Moreover, given that the QLI population has lower genetic diversity at MHC genes and microsatellites and fewer offspring in the captive population compared to the other 5 southern populations, captive breeding of Qinling giant pandas should be encouraged. The other ESU contains 2 MUs, represented by MSH-QLA and DXL-XXL-LSH. MUs are usually defined as demographically independent populations [36]. If the dispersal rate (m) is smaller than 10%, populations become demographically isolated [39]. Dispersal rate, or gene flow, is shaped by neutral processes; therefore, neutral markers should be used to define MUs [9]. Our Bayesian clustering analysis using microsatellites showed that 3 clusters existed within the giant panda populations. Our results differ from those of a previous study based on microsatellites [23], which detected 4 clusters (QLI, MSH, QLA, and XXL-LSH). In the present study, MSH and QLA formed 1 cluster, which was confirmed by the NJ tree and was consistent with previously reported DNA fingerprinting and mtDNA analyses [22,40], but was inconsistent with the results of Zhang et al.'s study [23]. These inconsistencies could be the result of differences in the samples used in the different studies. Three populations, i.e., DXL, XXL, and LSH, formed another cluster, which does not necessarily conflict with Zhang et al.'s study: because only 1 sample was collected from the DXL population in that study, it was considered part of the QLA population for the analysis [23]. The N_e values for MSH-QLA and DXL-XXL-LSH were 200 and 300, respectively (see Additional file 3: Table S2). Given a threshold dispersal rate of m = 0.10 and an N_e of ~200, the number of migrants per generation (N_e m) is ~20, which corresponds to an F_ST of ~0.0125 (F_ST = 1/[1 + 4N_e m] = 1/81 ≈ 0.012). The F_ST between MSH-QLA and DXL-XXL-LSH was 0.038 (Table 2), which is greater than the threshold of 0.0125; therefore, we can conclude that these 2 clusters should be separate MUs. Moreover, QLI also warrants a separate MU given the greater pairwise F_ST between QLI and the other 2 clusters (Table 2). Since no genetic structure was detected between MSH and QLA, we suggest that green corridors be constructed between these 2 populations in order to preserve their existing genetic diversity and evolutionary potential. In addition, intrapopulation habitat fragmentation is a serious problem for the giant panda [21], so it is essential that we reconnect the patches inhabited by each population in order to enhance contemporary gene flow (individual dispersal) and ensure the long-term survival of the giant panda. When delineating AUs, adaptive loci should be used [9]. We determined 3 possible AUs (QLI, MSH-QLA, and DXL-XXL-LSH) based on patterns of variation at MHC loci that reflect the ability to adapt to various pathogens. These analyses suggested that QLI should be a separate AU, which was supported by our NJ tree and STRUCTURE analyses and by genomic structure data [37]. Our NJ trees revealed that MSH and QLA were most similar (Figure 2A; bootstrap value = 78%); this was supported by our STRUCTURE analysis and the F_ST value between these 2 populations (F_ST = 0.003), but was inconsistent with the results of Zhao et al. [37].
They detected 3 distinct populations (QLI, MSH, and QLA-DXL-XXL-LSH) based on genomic data. The discrepancy concerns whether MSH and QLA should be considered together as a single AU, and could be due to the differential sensitivity of these 2 groups of markers. However, given that adaptive loci are preferable for delineating AUs [9], it is difficult to say whether MSH and QLA should be viewed as separate AUs, even though genomic data are much more sensitive than specific genes of known function (i.e., MHC loci). The genomic structure results reported by Zhao et al. were based on all loci [37]. Furthermore, we do not have any data on the different types of pathogens within giant panda populations that could directly reflect the different characteristics of possible adaptive groups. Therefore, we can only recommend 3 possible AUs, given the above limitations of our data. Conclusions In summary, our work revealed relatively high genetic variation at MHC class I genes in the giant panda. Using all loci, we defined 2 ESUs: QLI and MSH-QLA-DXL-XXL-LSH. The F_ST-based phylogenetic tree and Bayesian clustering analysis for microsatellite loci suggested the need for 3 MUs: QLI, MSH-QLA, and DXL-XXL-LSH. We recommend 3 possible AUs (QLI, MSH-QLA, and DXL-XXL-LSH) based on the patterns of variation at MHC loci. QLI was found to be the most genetically differentiated and had fewer offspring in the captive population, suggesting that captive breeding of pandas from this population should be encouraged. XXL exhibited the highest genetic variation at microsatellites among the 6 giant panda populations and higher genetic variation at MHC class I genes than the larger populations (i.e., QLI, MSH, and QLA). Therefore, XXL should be given priority over other populations for translocation and captive breeding programs. Methods A total of 267 samples were analysed, comprising 35 blood, 109 skin, and 123 faecal samples. Blood samples were obtained from wild-born giant pandas considered part of the wild populations (QLI, MSH, QLA, and LSH); they were collected during routine medical examinations and were stored in liquid nitrogen. Skin samples were obtained from dead wild pandas and were preserved in sealed paper bags in desiccators. The 123 faecal samples (25 DXL, 51 XXL, and 47 LSH; see Additional file 4: Table S3) were collected from non-overlapping home ranges during the non-reproductive season (between August and November). For faecal samples from the same or adjacent home ranges, we performed individual discrimination. First, we performed PCR amplification of the 8 microsatellites and 4 Aime-MHC-I loci in faecal DNA and found that the MHC genes yielded markedly higher amplification success rates than the microsatellites. Faeces were therefore considered to represent a single individual when all alleles were identical across the amplifiable microsatellites and all studied MHC class I loci. Twenty-three faecal samples (18.7%) did not yield PCR products at more than 4 microsatellites and were thus treated as failures of microsatellite-based individualization. We identified individuals from these samples based on the genotyping results of the 4 Aime-MHC-I genes, and these assignments were additionally confirmed by an Aime-MHC-II-based genotyping analysis conducted in another study [33]. These results indicated that the 123 faecal samples came from 16 giant panda individuals in DXL, 31 in XXL, and 27 in LSH. Thus, we ultimately used 218 individuals for our subsequent analysis (see Additional file 4: Table S3).
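The faecal-sample individual discrimination described above is, in essence, a genotype-matching exercise. The following sketch shows one simplified way such matching could be coded; it is not the procedure actually implemented in the study (which required identity across all amplifiable microsatellites and all studied MHC class I loci, with MHC-II-based confirmation), and the sample identifiers, locus names and genotypes are hypothetical.

```python
def same_individual(geno_a, geno_b):
    """Treat two faecal samples as one individual if their genotypes are
    identical at every locus that amplified successfully in both samples.

    geno_a, geno_b: dicts mapping locus name -> frozenset of alleles,
    with failed (non-amplifying) loci simply absent from the dict.
    """
    shared = set(geno_a) & set(geno_b)
    if not shared:            # no comparable loci: cannot declare a match
        return False
    return all(geno_a[locus] == geno_b[locus] for locus in shared)

def group_samples(genotypes):
    """Greedily cluster samples into putative individuals."""
    individuals = []                       # list of lists of sample ids
    for sample_id, geno in genotypes.items():
        for group in individuals:
            if same_individual(genotypes[group[0]], geno):
                group.append(sample_id)
                break
        else:
            individuals.append([sample_id])
    return individuals

# Hypothetical genotypes: two faecal samples from one animal, one from another.
genos = {
    "F01": {"Aime-10": frozenset({140, 144}), "Aime-C": frozenset({"01", "02"})},
    "F02": {"Aime-10": frozenset({140, 144}), "Aime-C": frozenset({"01", "02"})},
    "F03": {"Aime-10": frozenset({138, 144}), "Aime-C": frozenset({"02", "04"})},
}
print(group_samples(genos))   # [['F01', 'F02'], ['F03']]
```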
Sampling and DNA extraction Genomic DNA was isolated as described by Wan [26]. MHC genotyping and haplotyping We performed locus-specific amplification of the 4 classical Aime-MHC class I genes characterized in our previous paper [32]. In addition to separate amplifications of exons 2 and 3, we amplified a long fragment comprising exon 2, intron 2, and exon 3 and used the resulting products to conduct haplotyping (see Additional file 5: Table S4). PCR amplification conditions are presented in Additional file 6: Table S5. A stringent multitube approach was used to obtain reliable genotypes from the faecal samples [41]; if the genotype could not be determined from 2 of 3 amplifications, a fourth was performed. We used single-strand conformation polymorphism and heteroduplex (SSCP-HD) analysis to screen the PCR fragments. Electrophoresis conditions were as described by Zhu et al. [32]. In addition to obtaining separate genotypic data from exons 2 and 3, we cloned PCR products representing the longer exon 2-3 fragment into DH5α competent cells (TaKaRa, Ltd, Dalian, China) and used the recombinants to determine exon 2-3 haplotypes. To identify the combined exon 2-3 genotypes, positive clones were subjected to PCR-SSCP using exon 2- and exon 3-targeted SSCP-series primers. To avoid errors arising from PCR-based recombination, we sequenced at least 8 clones, each showing a unique SSCP banding pattern. If a sequence appeared in at least 2 individuals or was found in 2 independent PCRs from a single individual, we recognized it as an allele. Microsatellite genotyping After assessing their amplification, polymorphism, and yield, we chose 8 giant panda dinucleotide microsatellite loci (see Additional file 7: Table S6) from 37 candidate loci [42-44]. PCR amplification conditions are shown in Additional file 6: Table S5. Genotyping methods were the same as those reported by Li et al. [45]. A multitube approach was also used to genotype the microsatellite loci, as described above. Summary statistics We assessed deviations from HWE and calculated allele frequencies with GenePop 4.0 software [46]. Observed (H_O) and expected (H_E) heterozygosities were obtained with Arlequin 3.1 software [47]. AR, standardized for the sample size at each locus, was calculated using FSTAT 2.9.3 [48]. Linkage disequilibrium (LD) between pairs of microsatellite loci was evaluated in GenePop 4.0 [46]. We used Micro-Checker to test the microsatellites for the presence of null alleles, stuttering, or large allele dropout [49]. Among the 8 selected microsatellite markers, no evidence of LD or other genotyping errors was found in any population. N_e was estimated by the LD method, as implemented in the NeEstimator program [50]. Estimates of population differentiation We calculated pairwise F_ST values in Arlequin 3.1 [47]. To further assess population structure, we first built NJ trees on the basis of F_ST values [51] in PHYLIP 3.69 software [52]. Bootstrap values were obtained by resampling the loci 1000 times. We visualized trees in Figtree 1.4.0 [53] and rooted the trees at the midpoint. We then used the Bayesian clustering methods implemented in STRUCTURE V 2.3.3 to detect genetic structure [54]. We conducted 10 runs for each K from 1 to 10, with a burn-in of 100,000 and 1,000,000 Markov chain Monte Carlo (MCMC) iterations for each K [54]. The results were then uploaded to the online Structure Harvester program [55], which selects the number of clusters by simultaneously evaluating the posterior probability and the delta K statistic of Evanno et al.
[56]. Graphical output was displayed using DISTRUCT V1.1 [57]. We used Mantel tests to detect whether patterns of population differentiation at MHC and microsatellite loci showed isolation by distance. We first measured the geographical distances between populations using Google Earth [58]. Then, for each marker type, we tested the relationship between the log geographical distance between populations and G'_ST/(1 - G'_ST) using a simple Mantel test. The G'_ST estimate controls for differences between markers with different heterozygosities [59]. Mantel tests were conducted in ZT [60]. Supporting data The data set supporting the results of this article is available in the Dryad repository [doi:10.5061/dryad.2gt86].
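To illustrate the simple Mantel procedure referred to above, a minimal permutation-based sketch is given below. It is not the ZT program used for the analyses, and the two small distance matrices (log geographical distance and G'_ST/(1 - G'_ST)) are hypothetical values chosen purely for demonstration.

```python
import itertools
import random

def mantel(dist_x, dist_y, permutations=9999, seed=1):
    """Simple Mantel test: Pearson correlation between the upper triangles of
    two symmetric distance matrices, with a one-tailed permutation P-value
    obtained by jointly shuffling the rows/columns of the second matrix."""
    n = len(dist_x)
    idx = list(range(n))
    pairs = list(itertools.combinations(idx, 2))

    def corr(order):
        xs = [dist_x[i][j] for i, j in pairs]
        ys = [dist_y[order[i]][order[j]] for i, j in pairs]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        vx = sum((a - mx) ** 2 for a in xs) ** 0.5
        vy = sum((b - my) ** 2 for b in ys) ** 0.5
        return cov / (vx * vy)

    r_obs = corr(idx)
    rng = random.Random(seed)
    hits = 0
    for _ in range(permutations):
        perm = idx[:]
        rng.shuffle(perm)
        if corr(perm) >= r_obs:
            hits += 1
    return r_obs, (hits + 1) / (permutations + 1)

# Hypothetical 4-population example: log geographic distance vs G'_ST/(1 - G'_ST).
geo = [[0, 1.2, 2.0, 2.3], [1.2, 0, 1.1, 1.9], [2.0, 1.1, 0, 0.9], [2.3, 1.9, 0.9, 0]]
gen = [[0, 0.10, 0.18, 0.21], [0.10, 0, 0.09, 0.17], [0.18, 0.09, 0, 0.08], [0.21, 0.17, 0.08, 0]]
print(mantel(geo, gen, permutations=999))
```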
Patients embodied and as-a-body within bedside teaching encounters: a video ethnographic study Bedside teaching encounters (BTEs) involve doctor–patient–student interactions, providing opportunities for students to learn with, from and about patients. How the differing concerns of patient care and student education are balanced in situ remains largely unknown and undefined. This video ethnographic study explores patient involvement during a largely student-centric activity: 'feedback sequences' where students learn clinical and practical skills. Drawing on a data subset from a multi-site study, we used Conversation Analysis to investigate verbal and non-verbal interactional practices to examine patients' inclusion and exclusion from teaching activities across 25 BTEs in General Practice and General Surgery and Medicine with 50 participants. Through analysis, we identified two representations of the patient: the patient embodied (where patients are actively involved) and the patient as-a-body (when they are used primarily as a prop for learning). Overall, patients were excluded more during physical examination than talk-based activities. Exclusion occurred through physical positioning of doctor–patient–student, and through doctors and students talking about, rather than to, patients using medical jargon and online commentaries. Patients' exclusion was visibly noticeable through eye gaze: patients' middle-distance gaze coincided with medical terminology or complex wording. Inclusory activities maintained the patient embodied during teaching activities through doctors' skilful embedding of teaching within their care: including vocalising clinical reasoning processes through students, providing patients with a 'warrant to listen', allocating turns-at-talk for them and eye-contact. This study uniquely demonstrates the visible nature of patient exclusion, providing firm evidence of how this affects patient empowerment and engagement within educational activities for tomorrow's doctors. Introduction Bedside teaching is a generic phrase that has commonly referred to any type of teaching and learning in the presence of a patient across the full range of modern healthcare settings in which the presence of a bed is not a necessary feature of the encounter (Bleakley and Bligh 2008; Bleakley et al. 2011; Janicik and Fletcher 2003; Steven et al. 2014). This triadic doctor-patient-student interaction within bedside teaching encounters (BTEs) serves the dual purpose of patient care and medical student learning (Celenza and Rogers 2006; Chacko et al. 2007; Wang-Cheng et al. 1989). This dual purpose, however, makes it difficult for educators to marry the two for the benefit of all (Hindmarsh 2010): sometimes student learning takes precedence, resulting in patients being objectified, marginalised or side-lined from their own medical consultations (Monrouxe et al. 2009; Spencer and McKimm 2010). Such marginalisation runs counter to calls for more active involvement of patients in medical education, where students learn in conjunction with patients rather than merely in their presence (Rees et al. 2007). Patient roles within BTEs Numerous qualitative studies examining patient participation within BTEs, and the roles patients are positioned in or adopt, reveal a spectrum of participation. This spectrum is bookended by the following extremes: from active to passive, included to excluded, full partnership to no involvement, from formally to informally ascribed roles (see Online supplement, S1).
However, along with interview studies, the majority of research examining bedside teaching comprises essays and editorials based on personal opinions and experiences (Ramani et al. 2003). Understanding BTEs like this can only offer relatively static insights into patient involvement: it cannot capture the full variation of patient participation as it occurs during any one encounter. As Spencer et al. argue, ''[What is] required is more precise descriptions of how exactly patients are involved in particular educational settings'' (Spencer et al. 2000, p.856). Only a fine-grained analysis of video footage of BTEs can discover how patients are interactionally included or excluded during the different phases of BTEs and how BTEs are interactionally managed by clinicians in real-time. Interactional analyses of BTEs More recently, using a social constructionist framework viewing power, roles and identities as interactionally constructed, researchers have begun to examine interactional nuances within BTEs (Ajjawi et al. 2015; Elsey et al. 2014; Monrouxe et al. 2009; Rees et al. 2013; Monrouxe 2008, 2010; Rizan et al. 2014). For example, utilising a dramaturgy analysis of audio data, Monrouxe et al. (2009) examined the extent to which patients were part of the teaching team within hospital-based BTEs. They found that patients were interactionally placed into a variety of roles that served to include or exclude them from the team. Patients were excluded by being constructed as a prop for students' learning and as the audience to clinicians' teaching. They were included when constructed as an actor through the telling of their illness narratives and as a director during physical examinations. No single role could be assigned to any patient across an entire BTE, and no single individual was responsible for the assignment of the roles: rather, patients' involvement in BTEs was constructed and resisted through interaction. Patients were also included in or excluded from the teaching team through frontstage and backstage talk: clinicians used medical jargon in hurried and hushed voices when teaching students, but addressed patients in clearer, louder voices to signal a shift in the encounter from teaching to patient care. This finding resonates with Hindmarsh's work considering the dual purpose of service care and student teaching in dental training, in which an interactional separation of service and teaching was found, clearly including and excluding different parties at different stages of the encounters (Hindmarsh 2010; Hindmarsh et al. 2011). Such strategies, otherwise known as format changes, are useful. They enable clinicians to manage the interaction during BTEs, address what is happening now (or next) so that patients and students can follow and contribute appropriately. However, they continue to maintain the divide between student teaching and patient care, rather than moving towards an inclusive and seamless encounter of learning together. Indeed, the main question that now remains is whether these dual purposes can be met simultaneously, or whether patient care and student learning are mutually exclusive activity types (Sarangi 2000).
The aim of this paper is to examine this issue, drawing on conversation analysis techniques to explore how clinicians manage the dual purpose inherent within BTEs by specifically examining how doctors' and students' verbal and non-verbal practices work to enhance or limit patient participation within a single 'activity type': student feedback (henceforth feedback-in-action). Feedback-in-action The term feedback often refers to comments and judgements evaluating a situation post hoc. However, feedback can also be conducted in interaction, and is a crucial aspect of teaching and learning within BTEs that has largely been ignored. These interactional sequences involving feedback-in-action are a fertile environment within which to explore patient involvement: they comprise a seemingly student-centric activity in which performance is continuously evaluated through clinicians' confirmations and corrections in the presence of the patient. We have recently examined such feedback-in-action within videoed General Practice (GP)-based BTEs (Rizan et al. 2014). Using Mehan's (1978, 1979) Initiation-Response-Evaluation (I-R-E) sequences, we examined how timely and sensitive feedback-in-action from clinicians promotes student learning and development. Indeed, I-R-E components within GP-based BTEs were attributed to participants in the following indicative ways: I = Question/instruction from clinical teacher; R = Reply/action from the student; E = Evaluation of reply/action by clinical teacher. In our corpus, feedback sequences were always 'initiated' by clinicians, who acted as facilitators of the interactions and activities found in the BTEs and predominantly as conduits between students and patients. Evidently, by applying this schema within GP BTEs, no conversational turn was allocated for the patient in this research. However, we found a single instance in which ''news'' regarding the patient's future treatment and referral to a specialist was discussed during a feedback sequence, which provided the patient with a warrant to listen that extended beyond the doctor-student dyad (Rizan et al. 2014; Sacks et al. 1978). Thus, despite the physical presence of patients, they were largely excluded from the analysis. With this in mind, how then do we conceive of patient involvement and participation during these feedback sequences? Given the lack of patient talk, we therefore explore non-verbal aspects of these encounters to examine more precisely the level to which patients are involved within these student-centric activities, thereby addressing limitations of our previous research. Methodological approach We employ a video ethnographic approach utilising findings from conversation analytic (CA) and ethnomethodological studies of ordinary conversations (Sacks 1995; Schegloff 2007), school classrooms (Macbeth 2003, 2004, 2011; McHoul 1978, 1990; Mehan 1979; Payne and Hustler 1980) and healthcare settings (Frankel 1990; Heath 1986; Heritage and Maynard 2006; Pomerantz et al. 1995) to inform our analysis. CA enables us to study participants' verbal and physical activities. No inference is made regarding participants' internal state (i.e. cognitively). Rather, we examine the socially constructed, on-going and reflexive (rather than causative) activities within the encounter. Data was gathered by a single researcher (CE) who was employed specifically to work on the project and therefore had no relationship with the participants outside of the research study.
Furthermore, despite any external relationships, we acknowledge that the presence of the researcher and camera will inevitably have an effect on the situation (so-called observer effects), although clinicians were encouraged and actively instructed to adopt their 'normal' consultation styles. However, as we do not adopt a positivist perspective in which causality is assumed, where relevant and appropriate the presence and influence of the researcher and camera are accounted for as part of our analysis. For a detailed discussion of observer and camera effects on participant reactions see Heath et al. (2010:37-53). Ethical considerations Ethical approval was granted by two health boards. Age-appropriate information sheets and consent forms were developed. The GP and General Surgery and Medicine (GSM) doctors were initially recruited and informed of the study purpose and the research process. In GP settings, when patients telephoned the surgery, reception staff informed them that (a) a medical student might be present for their consultation; (b) the consultation might be recorded as part of a research project; and (c) they could opt into/out of activities at any stage. Within the GSM setting, patients were provided with this information on arrival at the clinic. Prior to participation, clinicians, patients and medical students read the information sheet and signed the consent form if they wished to participate. Opportunities to discuss the implications of participating with the researcher (CE) were provided. The present analysis was conducted under the auspices of the original ethics approval. Principles of selection and data collection Previous interactional studies of BTEs are restricted in terms of clinical settings, primarily occurring in single sites. For this study we therefore involved a range of specialities to include a diverse patient group: GP (n = 12), general surgery and medicine (GSM; n = 15), paediatrics (n = 5) and geriatrics (n = 11). We originally aimed to record equal numbers of BTEs in each setting (with at least two with each clinician), but participation varied due to patient willingness and time factors. Forty-three BTEs (937 min) were videoed from two angles. The researcher (CE) was present for every BTE session except BTEs 29-32, in which the GP (FD4) elected to operate the cameras herself. We draw on a subset of this data from General Practice (GP: n = 12, 209:20 mn:ss) and General Surgery and Medicine outpatient consultations (GSM: n = 13, 165:20 mn:ss). These settings fit within our broad interpretation of what constitutes bedside teaching. Our rationale for using these subsets is: (1) systematic analysis of feedback sequences was initially conducted on the GP data (Rizan et al. 2014) due to GP-specific funding; (2) the phenomenon of feedback sequences was originally identified in the GSM subset and a systematic analysis of this would be fruitful: in the GP setting there is only ever one student present at a time, whereas in GSM there are typically two or more, which might have implications for patient involvement; and (3) the GSM and GP subsets have similar sample sizes whilst being distinct medical specialties, suggesting that new findings might be identified. Analytic approach Video data was transcribed, anonymised and managed using Transana video management software (Woods and Dempster 2011).
Transcriptions included verbal and non-verbal aspects of the interaction, indicated using a modification of Jeffersonian transcription conventions (Jefferson 2004): (0.5) = elapsed time in silence (in tenths of a second); (.) = a noticeable micro-pause (roughly 0.1 s); [ ] = overlapping talk between speakers; - = a cut-off in the talk; (word) or ( ) = a possible, but uncertain, hearing of the talk; (( )) = extra descriptions, e.g. laughter, pointing etc.; °Hello = slightly quieter voice. Further extended conventions for non-verbal features of interactions such as eye-gaze, gestures, positioning, movement and embodiment, developed from prior work (Goodwin 1981, 2003; Heath 1986, 2006), were adapted for our interest in patient involvement (Table 1). One researcher (AC) reanalysed the GP feedback sequences from Rizan et al. (2014) whilst watching and listening to the original video data to consider non-verbal activities with respect to patient involvement. Findings were discussed with the team before AC examined all GSM recordings for the presence of feedback sequences. Categorisation was regularly checked and verified with another researcher (CE) for consistency. All three researchers then met, discussed the findings and refined the analysis. Results Ten of the 12 GP and eight of the 13 GSM BTEs contained feedback-in-action sequences (n = 108: 100:57 mn:ss). Due to time and funding constraints, 47 I-R-E sequences were then further transcribed and analysed to the level of non-verbal activities (comprising all 8 of the GP excerpts previously published (Rizan et al. 2014) and all 39 from the GSM data). These sequences were embedded within different methods of consultation management by the doctor (Table 2 below), broadly categorised as (a) purely talk-based activities (i.e. history-taking, diagnostic phases, treatment explanations; n = 23), and (b) physical examination activities (which also included elements of talk, such as instructions and online commentaries; n = 24). Therefore, the opening phase and related exchanges do not feature in this analysis. However, we have previously examined the opening phase of BTEs in our dataset and document how medical students are introduced to patients (including name and status) and describe how sometimes recaps of patient medical histories are shared if applicable.
Table 2 Patient involvement within talk-based and physical examination feedback sequences
Talk-based, passive patient: doctor-student I-R-E sequences (n = 10)
Talk-based, active patient: doctor-student-patient I-R-E sequences; students' talk evaluated and topicalised in doctor-patient talk (n = 13)
Physical examination, passive patient: patient as a prop; as a resource (providing information); as audience (to online commentary for the doctor) (n = 20)
Physical examination, active patient: patient presenting their body for examination; as recipient of online commentary (n = 4)
Talk-based feedback sequences This section describes patient involvement during feedback sequences within talk-based phases of consultations: namely history-taking, diagnosis, patient education and treatment planning. Here, patients are excluded through doctors' and students' use of medical jargon, talking about rather than to them, and separating teaching from the on-going consultation. Patients are included as key players in students' education by directly telling students about their illness (rather than this being communicated through the doctor) and through students' diagnostic/treatment suggestions being topicalised by the doctor when talking with the patient. We now present three indicative in-depth examples of patient exclusion/inclusion within I-R-E feedback sequences.
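Before turning to the excerpts, and purely as a hypothetical illustration of how coded sequences of this kind might be tabulated (the study itself managed its transcripts in Transana, and the turns below are invented and simplified), one could represent each turn with its I-R-E role, speaker and patient-gaze annotation and then count, for example, how often the patient adopts a middle-distance gaze during evaluation turns:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    role: str        # "I" (initiation), "R" (response) or "E" (evaluation)
    speaker: str     # e.g. "MD10", "MS9", "FP8"
    gaze: str        # patient gaze: "doctor", "student" or "middle-distance"
    talk: str        # simplified rendering of the turn

# Invented, simplified feedback sequence for illustration only.
sequence = [
    Turn("I", "MD10", "student", "so what would your differential be"),
    Turn("R", "MS9", "middle-distance", "(0.5) er:m impingement of the tendon?"),
    Turn("E", "MD10", "doctor", "yeah (.) possibly- we'll clarify that in a minute"),
]

middle_distance_in_E = sum(
    1 for t in sequence if t.role == "E" and t.gaze == "middle-distance"
)
print(middle_distance_in_E)   # 0 in this invented example
```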
Doctor-student I-R-E feedback sequences (patient as-a-body) We begin with an example of a GP student-teaching interaction that works to exclude the patient, casting her in a passive role during student learning (BTE 24, Table 3). Here, a 53-year-old female patient presents to the male GP (MD10) with a shoulder problem. This excerpt has previously been discussed in Rizan et al. (2014), but the patient's presence was rendered 'invisible' due to the focus of that analysis being purely talk-based. We now re-present it with the inclusion of non-verbal aspects of the interaction to demonstrate how the patient is excluded within this BTE. Interestingly, the feedback sequences take up a large percentage (21:03 mn:ss, 86%) of this consultation, indicating that student learning takes priority over patient care. To aid the reader's understanding of the transcripts presented in the rest of this paper, the first excerpt (Table 3 below) has been annotated with descriptions of the non-verbal transcript conventions, illustrating the different symbols found, with full explanations appearing in Table 1. In terms of the I-R-E sequence, this excerpt flows as follows: Initiation, MD10 asks a question (lines 1-2); Response, the medical student (MS9) provides an answer (lines 3 and 5-6); Evaluation, the GP evaluates the answers (lines 4 and 7-8). Thus, the first evaluation from MD10 (turn 3) provides feedback that MS9's hesitant response may be correct, implying that an alternative diagnosis is required. MS9 still hesitates in his second response (turn 4), resulting in MD10 providing him with implicit correction by suggesting that later clarification will occur. The patient's exclusion from her consultation is made visible through the interactional management of the feedback sequence: MS9 physically examines her shoulder, whilst the patient gazes toward him (turn 1, ''P ? S''); MD10 then asks MS9 to formulate a differential diagnosis (lines 1-2). MS9 pauses from his examination to address MD10's request (line 3); as he does so, the patient adopts a middle-distance gaze (line 3, ''Pl''). Her orientation switches to middle-distance again during the student's second response (line 5) and at the beginning of the evaluation stage (line 7) of the feedback sequence. Upon closer inspection, the patient's shift in gaze to middle-distance coincides with the production of more specific medical terminology or more complex wording by MS9 (line 3, 'impingement of tendon', and line 5, 'joint capsule but a ligament'). The predominance of technical or medical language indicates that MD10 and MS9 are not orienting their talk towards the patient by modifying their language or explaining the terms (Meehan 1981). The use of medical jargon, along with the use of the patient as-a-body within the I-R-E feedback sequences, leads to passive patient involvement at this point due to the doctor's management of this BTE. [Table 3 annotations note, for example: 'This indicates a switch to a middle-distance gaze. This gaze is not directed at either doctor or student' and 'The non-verbal construction shows that the patient switches gaze towards the doctor upon the evaluation phase of the I-R-E, before switching her gaze back to the student'.] Doctor-student I-R-E feedback sequences (patient embodied) Having considered how patients are excluded from student feedback, we now turn our attention to two examples in which the patients are more actively involved in student learning.
In the first excerpt the doctor manages the interactional transitions by addressing the patient and the 2 students present during the history-taking, effectively maintaining patient attentiveness throughout. In the second excerpt, we see how feedback sequences can be embedded within the history-taking phase of the consultation, thereby achieving a triadic and 'seamless' involvement of the patient in student learning. Table 4 comprises an excerpt taken from the GSM data in which two medical students are present (MS3 and FS3). The patient (a 54-year-old female with musculoskeletal pain: FP8) is incorporated into the BTE by the strategic placing of the feedback sequences by the male doctor (MD5) within the on-going history-taking phase. This excerpt begins with another classic I-R-E structure, with both medical students simultaneously responding to the initiation question (around ''causes of folic deficiency'') produced by MD5 (lines 1-4). MD5 evaluates their response (line 5), confirming it as a possibility and topicalising it by employing it as a basis from which to ask FP8 about her eating habits (lines 5-15). As such, the 'topic' of the feedback sequences (and therefore the students' responses) is made relevant to the on-going consultation, providing the patient with a 'warrant to listen'. The BTE continues beyond this excerpt, during which further 'correct' student replies then form the next subject for the history-taking between doctor and patient. Although the evaluation phase (lines 16-22) is primarily intended to teach the students, MD5 constructs his teaching with specific reference to FP8 (note that he shifts from ''she eats'' to ''you eat''), and uses inclusive language, asking the patient to confirm an aspect of the teaching point (lines 20-22). Again MD5 exhibits an awareness of the different parties to his talk. In effect the 'warrant' is extended and expanded to create a mediated doctor-patient-student interaction. Throughout this phase, FP8 follows the on-going explanation and clearly shifts her gaze between the students and doctor. The patient does not switch to a middle-distance gaze through the evaluation phase. This attentive gaze might be due to the effective management of the consultation, maintaining patient involvement within the BTE via different techniques to aid patient inclusion. However, there is clear interactional separation in terms of participation, in that the feedback sequences comprise doctor-student contributions, whereas the history-taking conversation is limited to doctor-patient utterances. This separation is further 'marked' by the shift in MD5's gaze from the students during the feedback sequence to FP8 throughout the history-taking. Our final talk-based feedback sequence provides an example of an embedded method. Previous research has demonstrated how patient involvement can be limited through lack of opportunities for patients to have a legitimate voice in the teaching process. Here, we see how the male GSM doctor (MD6) skilfully orchestrates the female patient's (71 years old, with gallstone problems; FP9) active involvement within student feedback sequences as he facilitates the collaborative telling of the patient's story during the on-going history-taking phase of the consultation (BTE 14, Table 5).
Illustrating how a doctor can elicit patient involvement through both verbal and non-verbal gestures, MD6 uses the patient's history to build towards a diagnosis with the two students (FS2 and FS4), with FP9 actively involved in the educational process and given a sanctioned evaluative turn-at-talk. The excerpt begins as MD6 uses humour and non-verbal gestures (eye-contact, leaning in, touching the patient, line 2) to maintain her involvement within this student learning activity. FP9 nods by way of recognition. MD6 continues to include her verbally and non-verbally (lines 4-6) by inviting a contribution during the evaluation phase of the I-R-E sequence. He brings her into the conversation through his positive eye-contact along with his transformation of the students' technical response to his question (''right upper quadrant pain'', line 3) into lay-language (''did you have tummy pain'', line 4). Importantly, focussing on FP9's gaze, we notice that as the talk adopts more technical language (''upper quadrant pain…'', line 3), her gaze shifts away from MD6 to the middle ground (line 3). However, MD6's purposive eye-contact with FP9 then signals a return of her gaze towards him (line 4). The subsequent transformation of technical into lay language further facilitates her involvement in the form of a secondary evaluation within student feedback, as he acknowledges her primary access to her own experience and the value of this within student learning (line 5). As such, the doctor does not over-medicalise the opportunistic feedback sequences, and does not reduce FP9 to a teaching resource. Physical examination feedback sequences We now turn to consider how patient involvement and participation occur during feedback sequences as doctors and students conduct physical examinations. Here, we found more exclusionary than inclusionary practices. Thus, patients are often treated as passive clinical resources and props. Students' online commentaries about the patient's condition, reporting positive provisional findings of a physical examination, are produced for the benefit of the doctor rather than the patient. Doctors and students restricting their talk towards patients to merely instructional or information-seeking utterances further inhibits active patient involvement. However, patients are sometimes rendered momentarily active through fleeting verbal 'side-sequences': patients being personally addressed to check for comfort or for further information regarding the location of their pain. Additionally, sometimes patients actively present their bodies for examination. Furthermore, students sometimes provide online commentaries for the patient's, rather than the doctor's, benefit. We now present in depth three indicative examples of how patients are excluded and included within I-R-E feedback sequences during physical examinations. We begin with an excerpt in which the patient is afforded no turn in the conversation and is exclusively used as a prop for learning. We then consider two different ways in which online commentaries enable and constrain the active involvement of patients in student feedback activities. Doctor-student I-R-E feedback sequences (patient as-a-body) We begin with an excerpt (Table 6) taken from the GP BTE 24 (Rizan et al. 2014). Here we see the female patient being used by the GP (MD10) as a teaching resource. He renders her passive, in the role of prop, for student (MS9) learning as her elbow joint is the subject of teaching during her consultation.
Throughout this sequence the orientation of the various parties is critical to patient involvement: MD10 exclusively talks with MS9, with his body oriented towards the student rather than the patient. This doctor-student positioning and the implementation of the I-R-E sequence fail to provide any turns-at-talk for the patient. The I-R-E sequence is characterised as follows: MD10's turns comprise a series of questions (I), resulting in MS9 responding (R) by either enacting a physical examination or verbally reporting his findings. MD10 evaluates (E) these responses, sometimes through his own verbal/physical actions (e.g. correcting MS9's positioning of the patient's arm by physically adjusting it himself without acknowledging her as he does so). Having briefly outlined the doctor-student interaction during this excerpt, we now consider the sequence again, focussing on the patient's gaze. The scene can be read as two parts. In the first part, MS9 instructs the patient and demonstrates how he wants her to position and move her left arm so that he can check the limits of her external rotation ability. The patient spends much of this time gazing towards him, although she briefly adopts a middle-distance gaze when he uses technical language (lines 2-4). The second part of the scene is marked by MD10 asking MS9 to ''do it on the other side'' (line 12). As he begins to talk, the patient's gaze adopts a middle-ground position, returning to MS9 at the end of his turn. As MS9 softly repeats the instruction (''°yeah (.) the other side as well°'', line 13), the patient takes the initiative to move her right arm. MD10 immediately corrects this action (line 14). However, he is correcting MS9 (for it is his examination) rather than the patient. With this correction he emphasises the patient's 'prop-ness' in a number of ways: (1) directing his talk to MS9, despite the patient's agency in responding; (2) referring to her arm as ''it''; and (3) reaching in and physically re-positioning the patient's arm twice (lines 14 and 16) without first looking at her or asking permission. During this process (lines 14-19), the patient mainly adopts a middle-distance gaze. Our second example comprises a particular feature of communication within physical examinations: online commentaries. We begin with an excerpt taken from BTE26 (Table 7). The I-R-E sequence begins with the female GP (FD3) asking the male student (MS10) to listen to the patient's (MP13) chest (I). The response (R) comprises MS10's examination of MP13, with a minimal evaluation (E) from FD3. Following the initial request from FD3 for MS10 to examine MP13's chest, MP13 primarily adopts a middle-distance gaze. He briefly looks at MS10 as MS10 obtains his consent for the examination (line 2), but gazes away when he is asked to remove his clothing (line 3). MP13 briefly returns his gaze to MS10, who explains what he is about to do (line 4), and again momentarily during the examination (line 7). However, throughout the examination (lines 7-10) MP13 primarily adopts a middle-distance gaze as the online commentary produced by MS10 is aimed specifically towards FD3: emphasised by the use of medical jargon (e.g. ''42 times 2 (.) it's 84'', ''normal character'') and by talking about, rather than to, the patient (e.g. ''his heart rate'', ''his lungs'') (Heritage and Stivers 1999). Doctor-student I-R-E feedback sequences (patient embodied) Continuing the theme of online commentaries, we now present an excerpt from BTE9 (Table 8).
This differs significantly from BTE26 in that the commentary is for the benefit of the patient, rather than the doctor (male GSM doctor: MD4). The I-R-E sequence begins prior to this excerpt, with MD4 asking the female student (FS2) to examine the female patient's goitre (I). FS2 responds (R) by performing the examination (line 1) and her findings are confirmed by MD4 (E). In terms of FS2's actions, as she performs the examination we can see that she satisfies MD4's requirement by examining the patient, but she also pays attention to the patient and her concerns through her online commentary. Thus, her commentary begins by addressing MD4 as she reports her finding (using ''it'' and ''she'', line 3). However, noticing the patient's unease (her smile), she quickly shifts to addressing her commentary to the patient (line 4) to inform and reassure her (Heritage and Stivers 1999). Noticeably, there is very little middle-distance gaze orientation of the patient in this excerpt: she adopts the middle-distance gaze as she raises her head for the examination, and again when MD4 begins to document their findings (lines 5-6). Discussion The starting point for our paper was to consider student feedback sequences, as a seemingly student-centric activity, within which to investigate patient involvement (Rizan et al. 2014). By their very nature, the default position of feedback sequences is to promote student learning and development in a timely fashion during the course of BTEs: the 'feedback' and evaluative utterances of the clinical tutor are principally aimed towards students' conduct and performance (Pomerantz et al. 1995, 1997). Previous research in this area therefore focussed purely on doctor-student talk and the different strategies used for feedback-in-action within General Practice (GP) settings (Rizan et al. 2014). As patients rarely spoke during these interactions, they were rendered virtually invisible in and by the analysis. We therefore extended this research by documenting how patients are included or excluded from student feedback activities through talk, eye-contact and physical positioning, and the consequences of this for patient involvement within bedside teaching. Thus we utilised the full potential of our video data by exploring patient involvement, visually as well as verbally, across two distinct activity types within GP and hospital-based BTEs: talk-based feedback sequences and feedback sequences within physical examinations. Overall, patients were excluded more commonly during physical examination feedback sequences than during talk-based activities. Exclusion occurred as doctors and students talked about, rather than to, patients using medical jargon. The exclusion of patients was visibly noticeable through our observations of patients' eye gaze: patients shifted their gaze to middle-distance coinciding with the use of more specific medical terminology or complex wording. Although previous research has suggested that such talk might serve to exclude patient involvement within BTEs (Monrouxe et al. 2009), this is the first study to demonstrate the visible nature of this exclusion in terms of how it impacts patient engagement, thereby providing firm evidence of such exclusion. Patients were also marginalised through students' use of online commentaries when these were spoken for the benefit of their clinical educators.
Students use these online commentaries to display their understanding and competence in performing different examination techniques and their underlying clinical reasoning. During these commentaries, and doctors' responses to them, not only do students and doctors use technical terms to describe the patient, but they also frequently use impersonal terms, such as referring to the patient's body parts as 'it', and talk about the patient (''his lungs'') rather than talking to the patient. During such talk, patients do not reorient their eye-gaze, maintaining a neutral middle distance throughout. The physical positioning of doctor, student and patient also deters patients' eye-contact with the doctor and serves to further exclude them from fully participating in the activities. Previous research examining doctor-patient interaction suggests that eye gaze and an alignment of body posture are important factors in displaying mutual engagement during the opening phases of a consultation: this has been called an engagement framework, which demonstrates a reciprocated interest in, and attention to, the matters at hand (Goodwin 1981). Indeed, the direction of eye-gaze has been shown to be of utmost importance as a display of attention within doctor-patient interaction (Goodwin 1981; Heath 1986, 2006; Robinson 1998). For example, Robinson (1998) argued that, in primary care settings, doctors are required to simultaneously interact with two different representations of the patient, each providing key information on the patient's health problems: the patient embodied (as an active participant in the consultation), and the patient inscribed (in paper documents or computer records). He examined videoed interactions of primary care physicians and found that doctors conveyed their interest in the patient's complaint by turning to gaze at the patient, rather than at their medical records. In doing so, the doctor displayed a patient-centered (treating the patient as an expert in their own illness), rather than a doctor-centered (focussing on the doctor's professional knowledge) orientation. We argue that within the setting of bedside teaching activities, the doctor's competing tasks are patient care and student education. These represent the tension between adopting a patient-centered or a student-centered orientation within the bedside teaching encounter. Drawing on and developing Robinson's distinctions, we see the two representations of the patient here as being the patient embodied (during active engagement) and the patient as-a-body (rendered as a prop). Indeed, our data demonstrates clearly how, during physical examinations, patients themselves give up their bodies momentarily for this teaching as they shift to a middle-distance gaze: adopting a psychological distance, they offer the body they have rather than the body they are. And as the technical talk between student and doctor is not for them, patients' psychological detachment through middle-distance eye-gaze suggests a disengagement activity, during which their interest in, and attention to, matters at hand is not reciprocated (Goodwin 1981). As such, although all three parties are physically present within the BTE, the interaction becomes dyadic (doctor-student), rather than triadic (doctor-patient-student), with patients displaying a compliance with or tolerance for student teaching (Chretien et al. 2010).
However, our data has demonstrated that careful management during student teaching can facilitate patient engagement in students' education, thereby achieving a truly triadic interaction. One notable means of achieving this co-orientation is through establishing a 'warrant to listen' for patients and the allocation of turns-at-talk for them within feedback sequences, thereby encouraging participation. For example, rather than breaking off from the consultation itself, students were strategically brought into the doctor-patient interaction through the skilful embedding of teaching within patient care. Thus, during the history-taking phase of the consultation, students were included as a way of vocalising the clinical reasoning process. For example, reading aloud the findings from the patient's blood test, one doctor asked the students to hypothesise as to why her folic acid level was low. This provided the patient with a warrant to listen, and so maintained her attention as the students responded. The doctor then topicalised their response in his next enquiry to the patient. Following the patient's response, during his feedback to the students, he actively included her by addressing her directly: ''you eat''. The patient's turn is therefore significant: it served to evaluate the students' answers. This demonstrated how patients can be involved in the production of feedback sequences (I-R-E) if turns-at-talk are carefully managed by clinicians. Furthermore, other inclusory activities include a doctor facilitating the collaborative telling of the patient's story, building towards a diagnosis with the students, through eye-contact, physical positioning and translating technical talk into lay-language. Finally, students' use of online commentaries for the benefit of both doctor and patient maintains the active involvement of the patient during teaching episodes, reassuring the patient that her signs are mild (Heritage and Stivers 1999). As a result of these interactional strategies and practices we can reveal how BTEs can move towards triadic (doctor-patient-student) interactions that benefit and involve all parties. It is essential to pause and consider the contested nature of bedside teaching, its place within modern medical education and its outworking in practice. While some commentators have noted the untimely decline (and even death) of bedside teaching (Gonzalo et al. 2010, 2013; Qureshi 2014; Qureshi and Maxwell 2012; Ramani et al. 2003), highlighting the benefits and surmountable impediments (Janicik and Fletcher 2003; Ahmed and El-Bagir 2002; Nair et al. 1998; Peters and Cate 2013; Qureshi 2014), others have critiqued it as a discourse that is unfit for purpose (Cantillon and Dornan 2014; Glass 1997; Ruffy 1997). While acknowledging these developments in teaching and learning, we remain agnostic about BTEs in terms of their universal suitability or effectiveness for medical student training. It is therefore important to acknowledge that practically marrying patient care and student learning is interactionally complicated, necessarily knotty, and requires reflection on the part of the educator. However, it is critical to state that the core messages found in this paper can equally apply in the context of workplace-based learning, or indeed any teaching performed in the presence of 'real' patients (or otherwise) in any healthcare setting.
For example, in this analysis of two subsets from our BTE video corpus only two BTEs fit the description of bedside teaching as contrived, staged or inauthentic interactions (the 'cold' medicine Atkinson refers to: Atkinson 1981, 1997). Therefore, the majority of BTEs analysed are actually genuine medical consultations (so-called 'hot' medicine) in which actual symptoms, outcomes and treatment decisions are found. Further, the challenges identified in this paper remain very real for any medical educator in any teaching environment (Ajjawi et al. 2015; Bleakley 2006; Bleakley et al. 2011; Dornan et al. 2007; Macbeth 2014). As with any research, this study has its strengths and challenges. Given our study is grounded in the theory and methods of conversation analysis and ethnomethodology, the types of claims that we make are deliberately restricted to the visible/hearable (public) orientations and perspectives that participants exhibit for each other in their talk and actions (Schegloff 1991). To that end we do not feel it is appropriate or consistent to speculate about how common or representative these practices are in medical schools, or to make blanket practice recommendations based on a single video corpus. The focus of the current analysis is limited to particular types of interaction within BTEs, namely feedback sequences during teaching exchanges. These types of interaction are inherently more likely to result in a higher concentration of doctor-student utterances and restricted participation opportunities. The analysis presented on physical examinations within BTEs only scratches the surface of the modes of communication that occur in these sensitive and highly embodied activities. Furthermore, although we identified just over 100 min comprising 108 feedback-in-action sequences with 50 participants, we only analysed 47 of these non-verbally due to restricted funding and time. Although this is still a relatively large data set for such an in-depth analysis, the interactions only comprised BTEs with adult patients and doctors across two contexts: GP and GSM. It is possible that if we had analysed a broader spectrum of specialties we might have found further ways in which patients are included and excluded from student feedback processes. For example, had we included data from a paediatric setting in which both patients (the children) and their carers are present, we might have seen more active patient/carer involvement due to the further requirement for students and doctors to achieve full co-operation of the children. Despite these challenges, our study has strengths. These include the uniqueness of our analysis of both verbal and visual aspects of interaction within a bedside teaching environment: the majority of this research has purely focussed on dyadic doctor-patient interactions, which is not easily transferable to the more complex doctor-patient-student interaction. Additionally, the limited amount of work undertaken within bedside teaching has primarily focussed on talk-in-interaction, largely ignoring the important non-verbal aspects of these encounters (Ajjawi et al. 2015; Elsey et al. 2014; Monrouxe et al. 2009; Rees et al. 2013; Rees and Monrouxe 2008, 2010; Rizan et al. 2014). Finally, it is important for us to consider what our analysis tells us about medical students' learning of patient-centeredness.
Indeed, during bedside teaching encounters the doctor's role is to teach students the whats and hows of physicianship, including the skills, knowledge, and attitudes required for future practice. Furthermore, while skills and knowledge can be learned outside the clinical environment, patient-centeredness cannot. Consider the dyadic BTE in which patient care and student learning comprise very different and separate activities. Even if doctors role model patient-centeredness during doctor-patient interactions, if this is not translated into practice during student learning activities then students can receive powerful implicit contradictory messages about what patient-centeredness really means, particularly in terms of patient empowerment. These hidden curriculum messages prioritise the transmission of the doctors' professional knowledge to the student: rendering the patient as-a-body, and culminating in a primarily doctor-centred orientation (Goodwin 1981). However, Rees and Monrouxe's (2010) findings that patients can use humour to resist passive roles in BTEs must be considered as a counter-point to this conclusion, even though overall we found little evidence of this practice in our dataset. Therefore it is worth highlighting the humorous on-line commentary exchange in Table 8 (lines 3-4) in which the student responds to the patient's initial smile, resulting in shared laughter, with the doctor not joining in and simply adding the evaluation in the next turn. In a doctor-centred orientation, the students' purpose within the encounter may be seen as solely procedural, rather than diagnostic: students are primarily there so that they can practise the procedural aspects of physicianship (rather than professionalism), that is, the learning and practising of anatomy, physiology, examination techniques and clinical reasoning in a safe and supported environment. Thus, these actions minimise the focus upon students' practising appropriate and considerate communication with the patient. Doctors' detached demeanour and actions towards patients during these doctor-student teaching activities also send strong messages to the patients themselves: that they are to yield their bodies without resistance so that tomorrow's doctors can learn about them. By contrast, the truly triadic doctor-patient-student interaction, in which student teaching and feedback are fully embedded within patient care activities, prioritises patient embodiment, resulting in seamless role modelling of patient-centeredness, with doctors, students and patients learning together for the benefit of all.
A next generation sequencing-based method to study the intra-host genetic diversity of norovirus in patients with acute and chronic infection Background Immunocompromised individuals with chronic norovirus (NoV) infection and elderly patients are hypothesized to be reservoirs where NoV might accumulate mutations and evolve into pandemic strains. Next generation sequencing (NGS) methods can monitor the intra-host diversity of NoV and its evolution but low abundance of viral RNA results in sub-optimal efficiency. In this study, we: 1) established a next generation sequencing-based method for NoV using bacterial rRNA depletion as a viral RNA enrichment strategy, and 2) measured the intra-host genetic diversity of NoV in specimens of patients with acute NoV infection (n = 4) and in longitudinal specimens of an immunocompromised patient with chronic NoV infection (n = 2). Results A single Illumina MiSeq dataset resulted in near full-length genome sequences for 5 out of 6 multiplexed samples. Experimental depletion of bacterial rRNA in stool RNA provided up to 1.9 % of NoV reads. The intra-host viral population in patients with acute NoV infection was homogenous and no single nucleotide variants (SNVs) were detected. In contrast, the NoV population from the immunocompromised patient was highly diverse and accumulated SNVs over time (51 SNVs in the first sample and 122 SNVs in the second sample collected 4 months later). The percentages of SNVs causing non-synonymous mutations were 27.5 % and 20.5 % for the first and second samples, respectively. The majority of non-synonymous mutations occurred, in increasing order of frequency, in p22, the major capsid (VP1) and minor capsid (VP2) genes. Conclusions The results provide data useful for the selection and improvement of NoV RNA enrichment strategies for NGS. Whole genome analysis using next generation sequencing confirmed that the within-host population of NoV in an immunocompromised individual with chronic NoV infection was more diverse compared to that in individuals with acute infection. We also observed an accumulation of non-synonymous mutations at the minor capsid gene that has not been reported in previous studies and might have a role in NoV adaptation. Electronic supplementary material The online version of this article (doi:10.1186/s12864-016-2831-y) contains supplementary material, which is available to authorized users. Background Norovirus (NoV) is recognized as a leading cause of epidemic and sporadic gastroenteritis around the world [1]. The viral RNA genome is about 7500 nt long and contains three ORFs. ORF1 encodes for a polyprotein that is cleaved into 6 non-structural proteins [2]. ORF2 and ORF3 encode the major (VP1) and minor (VP2) capsid proteins, respectively. NoVs are classified, based on VP1 amino acid sequences, into seven genogroups (GI-GVII) that are further divided into genotypes. To date, 41 NoV genotypes have been reported of which at least 29 have been found in humans [3], however, genotype GII.4 alone is responsible for over 60 % of all NoV outbreaks worldwide [4]. New genetic clusters of GII.4, commonly referred to as GII.4 variants, arise every 2 to 4 years and spread rapidly often causing global pandemics [4]. Novel GII.4 variants evolve by antigenic drift and display changes in VP1 epitopes that can avert immune responses mounted against previous variants [5][6][7]. 
Homologous recombination is another mechanism responsible for the genetic diversity among NoV GII.4 and it commonly occurs at the ORF1/ORF2 junction allowing the virus to exchange structural and nonstructural genes between different GII.4 variants or even different genotypes [8,9]. Norovirus acute gastroenteritis is a self-limited illness typically lasting 2 to 3 days while viral shedding can range from 13 to 56 days [10]. In immunocompromised patients, however, NoV shedding is usually prolonged [11][12][13][14][15] and cases of chronic NoV infection with shedding over 1-2 years have been reported [13,14,16]. Due to their long shedding periods and weak immune responses it has been hypothesized that immunocompromised patients with chronic NoV infection might be reservoirs where new GII.4 variants emerge [12,17]. Indeed, ORF2 sequence analysis in these individuals has shown that the virus can accumulate mutations in VP1 and develop an intra-host NoV population with large genetic diversity [12,16,[18][19][20][21][22][23]. However, it is still unclear how other regions of the viral genome evolve in immunocompromised hosts. More recently, the elderly and malnourished host have also been proposed as NoV reservoirs where NoV might accumulate mutations [17] but no studies have yet been performed in humans to confirm this hypothesis. The gold standard method to study intra-host NoV populations involves cloning viral RT-PCR amplicons into plasmids followed by Sanger sequencing [16,19,20,24], a labour intensive method that requires a relatively large number of clones to be processed in order to obtain an accurate assessment of the viral diversity. An alternative approach uses next generation sequencing (NGS) technologies, which process millions of fragments of nucleic acid in a single experiment. NGS is highly cost-effective in terms of cost per base, however, since NoV RNA represents a very small fraction of all stool RNA, as little as 0.01 % [25], the cost per viral base can be considerable. Fortunately, the efficiency can be improved by viral RNA enrichment, and strategies previously used for NoV enrichment include polyA tail selection [26], RT-PCR amplification using NoV-specific primers [21,23,27], VIDISCA [28,29] and virus purification [22]. Depletion of rRNA is another possible enrichment strategy [30]. For most cells, rRNA is the most abundant species of RNA and its removal with commercial kits can substantially increase the prevalence of non-rRNA [31,32]. The method poses an advantage over others in that it maintains the original representation of the viral population present in the sample and is less likely to be affected by NoV RNA fragmentation. In this study we establish a next generation sequencingbased method for NoV using bacterial rRNA depletion as an enrichment strategy for NoV RNA then use the resulting data to examine the intra-host viral population in samples from patients involved in NoV outbreaks (mostly elderly) and in longitudinal samples collected from an immunocompromised host with chronic NoV infection. Mapping of NoV sequencing reads The samples included in this study (n = 6) are described in Table 1. NoV sequences represented 0.01 % to 1.88 % of all quality-filtered reads and 0.04 % to 8.54 % of the non-rRNA reads. The non-NoV reads were further characterized and a description is provided under Additional file 1. The average coverage of the final consensus sequences ranged between 11X and 1,603X (Table 2 and Fig. 1). 
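As an illustrative aside, the percentages of NoV reads reported above are simple read-count ratios. The short Python sketch below shows the arithmetic with invented placeholder counts (they are not the study's per-sample values): the NoV fraction of all quality-filtered reads, and of the non-rRNA subset, the latter approximating the yield that perfect rRNA depletion would give.

# Hypothetical read counts for two imaginary samples; the structure mirrors
# the quantities discussed in the text, not the study's actual data.
samples = {
    # sample: (quality_filtered_reads, non_rRNA_reads, NoV_mapped_reads)
    "A": (2_500_000, 550_000, 47_000),
    "B": (3_100_000, 60_000, 2_600),
}

for name, (total, non_rrna, nov) in samples.items():
    pct_all = 100 * nov / total          # NoV reads as % of all quality-filtered reads
    pct_non_rrna = 100 * nov / non_rrna  # NoV reads as % of the non-rRNA reads
    print(f"{name}: {pct_all:.2f}% of all reads, {pct_non_rrna:.2f}% of non-rRNA reads")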
Five out of six samples yielded near full-length NoV sequences and called 99.91 to 100 % of the reference genome. OU3, the sample with the lowest percentage of NoV reads and lowest coverage, failed to yield a complete NoV genome sequence. The percentage of NoV sequences per sample showed a strong correlation with the Ct values of the RT-qPCR performed on RNA stool extracts (Spearman's rho correlation coefficient: 0.886, P = 0.019, two-tailed). An equally strong correlation was observed between the percentage of NoV sequences and viral titer per ng of RNA (Additional file 2). These results suggest that the poor yield of NoV sequences from sample OU3 was due to a low abundance of viral RNA rather than failure of the enrichment method. In order to validate our NGS method, the NoV consensus sequences of samples OU1 and OU3 obtained with the MiSeq platform were compared to those obtained using Sanger sequencing. There was a large concordance between both methods. NoV OU1, which achieved high coverage with MiSeq, showed a concordance of almost 100 % between the sequences from both methods except for one nucleotide. The base mismatch was located near the 3'end of the genome and was confirmed as an error in Sanger sequencing after review of the Sanger chromatogram and the coverage with MiSeq at that position (21X). Furthermore, the 5' and 3' ends of the genome could be extended by 9 and 24 nucleotides, respectively, with MiSeq. These nucleotides were not covered with Sanger because of their close proximity to the primers used for PCR amplification and sequencing. The consensus sequence with lowest coverage, NoV OU3, showed three mismatches over a total of 5,081 nt that could be sequenced using Sanger and MiSeq. The mismatches were identified at 1732 nt, 2167 nt and 2239 nt (positions are given relative to KU311160) and had, respectively, coverages of 1X, 2X and 2X and Phred base quality scores of 16, 39 and 39. All three mismatches occurred at wobble bases resulting in synonymous mutations and the last two were located in the p22 gene. Since these positions had very low coverage, it is possible that the mismatches were created by sequencing errors with MiSeq, however, the high base quality associated with at least 2 of these three nucleotides and their relative position in the genome suggests that they could also represent true SNVs originally present in the sample. Detection of single nucleotide polymorphisms Single nucleotide variants (SNV) were identified to measure the intra-host genetic population of NoV. In order to reduce false positives, we only called a SNV if it was observed at least 5 times and represented a minimum of 2 % of all observations. Using these criteria we did not detect any SNVs in all acute infection samples (OU1, OU2, OU3 and OU4). OU3 and OU4 have relatively low coverage (Table 2) limiting the ability to detect SNVs, but OU1 and OU2 have an average coverage above 250X and still no SNVs were found. In contrast, the first sample from the immunocompromised subject with chronic NoV infection, SP1, had 51 SNVs while SP2, the second sample collected from the same individual 4 months later, had 122 SNVs, indicating that the genetic diversity of NoV in this patient was higher and also increased over time (Fig. 2). The increased number of SNVs in SP2 vs. SP1 was confirmed even after controlling for differences in coverage (Additional file 3). 
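A minimal sketch of the variant-reporting thresholds described above (a variant must be observed at least 5 times and represent at least 2 % of observations, with the ≥ 10X position-coverage requirement taken from the Methods) is given below. The record structure and the two example calls are hypothetical and do not reproduce the FreeBayes output format.

from dataclasses import dataclass

@dataclass
class VariantCall:
    position: int   # 1-based genome position
    ref: str
    alt: str
    alt_count: int  # reads supporting the alternate allele
    depth: int      # total coverage at this position

def passes_snv_filter(v: VariantCall,
                      min_fraction: float = 0.02,
                      min_alt_count: int = 5,
                      min_depth: int = 10) -> bool:
    """Apply the reporting thresholds described in the text."""
    if v.depth < min_depth:
        return False
    if v.alt_count < min_alt_count:
        return False
    return (v.alt_count / v.depth) >= min_fraction

# Two made-up candidate calls: the first passes all thresholds, the second fails.
calls = [VariantCall(1000, "C", "T", 12, 300), VariantCall(2000, "G", "A", 1, 40)]
kept = [v for v in calls if passes_snv_filter(v)]
print(f"{len(kept)} of {len(calls)} candidate SNVs retained")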
The higher number of SNVs in the SP2 sample was not due to more sensitive detection, since SP2 had a lower average coverage than SP1, OU1, and OU2 (Table 2). The percentages of non-synonymous single nucleotide variants (nsSNVs) for SP1 and SP2 were 27.5 % and 20.5 %, respectively. There were differences in the distribution of nsSNVs across genes between SP1 and SP2. In SP1, most nsSNVs occurred in p22 (4/14), VP2 (4/14) and VP1 (3/14), whereas in SP2, the majority of nsSNVs occurred in VP2 (12/25) and VP1 (9/25) (Table 3). The enrichment of non-synonymous mutations in VP2, the P2 domain of VP1 and p22 was statistically significant (P < 0.05) after controlling for gene/domain size (Table 3) and is consistent with an increased rate of amino acid divergence in these proteins. The amino acid residues affected by nsSNVs are summarized in Fig. 3. Of the 11 mutations occurring at the P2 domain of VP1, two (affecting amino acid residues 294 and 340) were located at epitopes reported to be targeted by antibodies that block the binding of NoV to human blood group antigens (surrogates of neutralizing antibodies) [6]. A comparison between the consensus sequences of SP1 and SP2 revealed differences in 9 positions, of which 2 were synonymous (203 nt and 3851 nt), 6 were non-synonymous (2188 nt, 6095 nt, 6193 nt, 6281 nt, 7141 nt and 7167 nt; the corresponding amino acid changes are shown in Fig. 3) and 1 was located at the 3'UTR (7509 nt). Among the 9 differences between the consensus sequences of SP1 and SP2, 6 were SNVs in SP1 only (203 nt, 2188 nt, 3851 nt, 6281 nt, 7141 nt and 7167 nt), 1 was a SNV in SP2 only (6095 nt), 1 was a SNV in both SP1 and SP2 (6193 nt) and 1 was a SNV in neither SP1 nor SP2 (7509 nt). There were a total of 10 SNVs occurring at the same nucleotide positions in SP1 and SP2 (474 nt, 2303 nt, 4323 nt, 5580 nt, 5706 nt, 6193 nt, 6222 nt, 6803 nt, 7172 nt and 7252 nt; shown in red in Fig. 2).
Discussion
In this study we established a method to analyze the intra-host genetic diversity of NoV in samples from patients with acute and chronic NoV infection. Our next generation sequencing-based method, coupled with bacterial rRNA depletion as a NoV RNA enrichment strategy, produced between 0.01 % and 1.9 % of NoV reads. Characterization of the non-NoV reads confirmed that samples still contained 64 to 95 % of bacterial rRNA reads after depletion, which differs from a previous study reporting 1 to 5 % of bacterial rRNA reads after using the same depletion method with stool RNA [32]. The reasons for the lower rRNA depletion efficiency observed with our samples could not be identified. Since the method we used for depletion works with a library of probes that must first hybridize to the bacterial rRNA for subsequent removal, it is possible that the hybridization step was affected by sequence incompatibility. Also, rRNA depletion was carried out with the maximum amount of input RNA recommended by the manufacturer, which could have affected the efficiency of the method. Based on our data we estimate that if all 16S and 23S bacterial rRNA could be removed from stool RNA, NoV sequences would have represented between 0.04 % and 8.5 % of all NGS reads. For all but one sample we were able to obtain sufficient sequence data to retrieve near full-length NoV genome sequences. Moreover, even the non-enriched sample (OU4) provided enough data for de novo assembly of a NoV genome, AlbertaEI404/2012/CA, which, to our knowledge, is the first near full-length NoV GI.7 genome reported.
Strains of genotype GI.7 were observed with increased prevalence in Alberta, Canada, between July 2012 and June 2013 [33] and also among children in Pakistan between April 2006 and March 2008 [34]. Our analysis included four cases of acute NoV infection. Two samples from these acute cases, OU1 and OU2, yielded sufficient coverage to identify SNVs at frequencies ≥ 2 %. However, no NoV SNVs were detected in these two samples, indicating that the viral population in typical acute infections is homogeneous. In contrast, we identified numerous SNVs in an immunocompromised patient with chronic NoV infection.
Fig. 2 Distribution of NoV SNV frequencies across the viral genome. Samples SP1 and SP2 were collected four months apart from an immunocompromised bone marrow transplant patient with chronic NoV infection. SNV calling was performed using FreeBayes. Only SNVs with frequencies ≥ 2 % and ≥ 5X coverage are reported. Positions with coverage < 10X were excluded from the analysis. SNVs shared in common between SP1 and SP2 are shown in red.
Interestingly, there was also an increase in SNVs over time (51 SNVs with ≥ 2 % frequency in the first sample and 122 SNVs with ≥ 2 % frequency in the second sample collected 4 months later), revealing intra-host NoV evolution. At the time of writing this manuscript there were two published studies comparing the intra-host populations of NoV between immunocompetent and immunocompromised patients by using NGS data. Both reported similar observations [21,22]. Vega et al. found no SNVs throughout the viral genome (ORF1, ORF2 and ORF3) in an immunocompetent subject with acute NoV illness but identified multiple SNVs in three different immunocompromised bone marrow transplant patients (15, 67 and 235 SNVs with ≥ 10 % frequency). In an analysis of just the ORF2 and partial ORF3 regions, Bull et al. detected multiple SNVs in immunocompetent individuals with acute NoV infection (5 to 8 SNVs with ≥ 2 % frequency). However, an immunocompromised individual with chronic NoV infection displayed considerably higher NoV diversity that also increased over time (48, and 34 to 48 % in immunocompromised subjects), whereas we observed a rate of 21 and 28 % in the immunocompromised patient. Our rates are similar to those reported in another study with bone marrow transplant patients receiving immunosuppressive drugs (236 non-synonymous mutations out of a total of 1082 mutations across 13 samples, equivalent to an overall rate of 21.8 %) [23]. It could be argued that Bull et al. achieved higher coverage and therefore higher resolution of SNVs with PCR; however, at least two of our samples (one from an outbreak patient and another from the immunocompromised patient) had average coverage levels greater than 1300X, well above 950X, the average coverage reported by Bull et al. We can only speculate that the differences between our study and that of Bull et al. could be due to the processes of quality filtering and trimming of reads as well as the parameters used for SNV calling, which probably were more stringent in our study. We observed an enrichment of non-synonymous SNVs in the P2 domain of VP1, which matches the expectation that the most exposed and possibly the most antigenic part of the virus bears the highest pressure to diverge. In fact, the majority of studies analyzing NoV intra-host genetic diversity have been restricted to identifying potential changes at VP1 [12,14,21,24,35,36].
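The study reports that the enrichment of non-synonymous SNVs in VP2, the P2 domain of VP1 and p22 remained significant after controlling for gene/domain size, but does not spell out the test used. One simple way to frame such a check is a binomial test of the observed nsSNV count in a gene against that gene's share of the coding sequence, sketched below in Python with placeholder gene lengths (SciPy 1.7 or later is assumed); this is only one plausible framing, not the authors' documented procedure.

from scipy.stats import binomtest

# Counts loosely modelled on the SP2 figures given in the text (12 of 25 nsSNVs
# in VP2); the gene and coding-sequence lengths below are rough placeholders.
total_nssnvs = 25
gene_nssnvs = 12
gene_len, coding_len = 800, 7300  # nucleotides (placeholder values)

expected_fraction = gene_len / coding_len
result = binomtest(gene_nssnvs, n=total_nssnvs, p=expected_fraction,
                   alternative="greater")
print(f"P = {result.pvalue:.4f}")  # small P suggests more nsSNVs than length alone predicts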
By analyzing near-full length NoV genomes, we also observed an enrichment of non-synonymous mutations in VP2, which suggests that during chronic infection this gene can also be under pressure to diverge. Although our observations are based on samples from a single immunocompromised patient, we believe this is plausible because it agrees with previous analysis of sequence alignments of multiple GII.4 strains showing high evolutionary rates for VP2 [27,37]. The role of VP2 in the viral life cycle is still unknown. Only a few copies of VP2 molecules are present in the final virion (the precise number is yet unclear). VP2 appears to bind the interior surface of the capsid and has been shown to enhance the expression and half-life of VP1 [38]. Since VP2 is rich in basic residues and therefore is positively charged, it has also been suggested that it interacts with the negatively charged genomic RNA, and possibly plays a role during the encapsidation process [38]. The VP2 protein of murine noroviruses regulates the maturation of antigen presenting cells and is an important determinant of NoV protective immunity [39]. It is interesting to consider that human NoV VP2 might be an important epitope of host immune responses. By analyzing whole NoV genome sequences using traditional cloning, Chan et al., found accumulation of mutations at VP1 and VP2 (38 and 15 nt resulting in 9 and 5 amino acid changes, respectively) over a period of 4 months in an immunocompromised patient with agammaglobulinemia and thymoma [19]. Conversely, Vega et al. reported that, in comparison to NoV strains in circulation, NoV mutations in immunocompromised bone marrow transplant patients occur mostly at random positions except for gene p22, which presents a significantly large proportion of mutations and positively selected sites [22]. We speculate that the difference in results observed in these studies and our study might be due to temporal changes in the strength and specificity of host immune responses during chronic infection. This will be dependent on the type of immunosuppressed host, the type of exogenous immunosuppression administered, the time of infection during the patient's clinical course and the time of the patient's immune reconstitution. Regarding this point, we observed that the distribution of non synonymous mutations across genes changed with time which suggests that NoV might face changes in its evolutionary trajectory during chronic infection as reported for other viruses such as HCV and HIV [40][41][42]. Besides whole genome sequencing, NGS methods are powerful tools for studying viral population dynamics within a host because of the high sequencing depth that can be achieved. In addition, samples can be prepared relatively quickly without the need of cloning into vectors and multiple isolates can be sequenced in parallel (in our case, six samples were analyzed in one MiSeq 2x121bp run). The number of samples that can be sequenced depends on the throughput of the sequencing technology and the abundance of the viral RNA in the sample. Different forms of enrichment can enhance the later. We demonstrated that bacterial rRNA depletion is a beneficial treatment that can be further optimized. Hybrid capture is a promising new enrichment strategy that remains to be examined in the future [43]. 
Conclusions This study provides data useful for the selection and improvement of NoV RNA enrichment strategies for NGS and is also the first study to look at the intra-host diversity of NoV by analyzing near full length-genomes from acute cases of NoV infection and in longitudinally collected samples from an immunocompromised patient with chronic NoV infection. We identified a larger viral diversity in the immunocompromised patient, which increased over time and we also observed that genes VP2, p22 and VP1 accumulated the majority of nonsynonymous mutations during chronic infection. Further studies are needed to observe if the accumulation of mutations at VP2 is consistent across immunocompromised patients with chronic infection and unveil the role of VP2 and p22 proteins in relationship to host immune responses. We are currently planning a larger follow-up study to assess the intra-host genetic diversity of NoV in solid organ transplant recipients. Patient samples All stool specimens were collected in Alberta between 2012 and 2014 and stored at -80°C until analysis. A total of six stool samples previously genotyped within our routine program of NoV surveillance in Alberta were included in this study [33,44]. The patient's description and NoV genotype associated to each sample are listed in Table 1. Samples OU1, OU2, OU3 and OU4 were collected from outbreak patients with acute NoV infection. The near full-length norovirus genome for sample OU1 was determined in a previous study using Sanger sequencing [9] and was included for comparison purposes. Samples SP1 and SP2 were collected four months apart from a pediatric bone marrow transplant patient who first tested positive for NoV 6 months before sample SP1 was collected. RNA extraction Samples OU1, OU2, OU3, SP1 and SP2 were processed as follows: 50 to 75 mg of stool were mixed with 20 μL of proteinase K and 200 μL of lysis buffer from Maga-zorb®RNA mini-prep kit (Promega, Madison, WI). Nucleic acids were extracted with 1 mL of Trizol® (Life Technologies, Carlsbad, CA) and according to the manufacturer's instructions. The yield of RNA per extraction was estimated using NanoDrop 1000 and the process was repeated to obtain at least 50ug of RNA per sample. The extracts were then treated with DNase (Promega, Madison, WI) and purified by phenol chloroform extraction and ethanol precipitation. RNA extracts were eluted through OneStep™ PCR Inhibitor Removal columns (Zymo Research, Irvine, CA). Bacterial rRNA was depleted using Ribo-Zero® bacterial kit (Epicentre, Madison, WI) according to the manufacturer's instructions using 5 μg of RNA per sample and purifying by ethanol precipitation as the final step. Depletion of bacterial rRNA was performed once per sample. For comparison purposes, sample OU4 was processed using our routine enteric virus nucleic acid extraction method previously described [33,44] using Magazorb®RNA mini-prep kit. The nucleic extract was treated with DNase and purified by phenol chloroform extraction followed by ethanol precipitation. Sample OU4 was not depleted of bacterial rRNA. The presence of norovirus RNA was confirmed in all extracts by RT-qPCR as previously described [45]. Illumina library preparation and sequencing Sample libraries were prepared from 1 μg of RNA using the TruSeq RNA sample preparation kit v2 (Illumina, San Diego, CA) following the manufacturer's instructions with a fragmentation time of 1 min during the "elute-fragmentprime" step and unique indexed adapters for each sample. 
cDNA libraries were quantified with Qubit and the average fragment size was estimated using the Agilent 2100 Bioanalyzer. A control library of phage X714 was also included in each sample. All sample libraries (n = 6) were sequenced once on a single Illumina MiSeq run to produce paired end reads of 250 bp each, resulting in reads of 121 bp each after removing adapters. Identification, characterization and removal of rRNA reads Raw sequence reads were quality-trimmed and filtered with Prinseq-lite, version 0.20.4 [46] using the following criteria: the first nucleotide at each end (5' and 3') was trimmed and the following nucleotides were also trimmed stepwise if their base quality was below 20. Sequences with an average base quality below 20 or with more than 90 % of Ns were also removed. A description of the reads that were filtered is provided in Additional file 2. Ribosomal RNA reads were identified and filtered out with SortmeRNA [47] using the 23S/28S large subunit (LSU) and 16S/18S small subunit (SSU) rRNA SILVA 119 databases and the 5S and 5.8S rRNA Rfam databases for all three domains of life (Eukarya, Bacteria and Archaea). All reads failing to pass filters (i.e. non-rRNA reads) were maintained as paired-ends reads and used in downstream analyses the command used for analysis is described in Additional file 2). A subset of 150,000 single-end reads with length ≥ 80 nt and identified as bacterial 23S rRNA, the most predominant type of rRNA found in all samples, was uploaded in the SILVAngs data analysis service (https://www.arb-silva.de/ngs/) for identification of operational taxonomic units (OTUs). Since no full-length or near full-length genome NoV GI.7 sequences were available in GenBank, additional steps were followed for OU4. OU4 reads were assembled de novo with Velvet version 1.2.10 using several hash lengths with read category set to 'short paired' and including the consensus sequence (partial ORF1, complete ORF2 and ORF3) as a long read (the command used for this analysis is described in Additional file 2). Norovirus contigs were identified among all Velvet assemblies with BLAST [49] using JN899243.1, a genotype GI.9 strain, as query sequence. All NoV contigs and the partial consensus sequence were aligned using MEGA 6.0 [50] to obtain the final genome assembly. The ends of each NoV genome were extended beyond the reference sequences by using the first and last 15 nt as query for matches among fastq sequences using the Unix "grep" command. Matching reads were aligned to the consensus sequence and any extra nucleotides (5'or 3' overhangs) were incorporated into the consensus sequence. The process was repeated until no more extra nucleotides were found at either end of the consensus sequence. Sample reads were mapped with Bowtie 2 to their respective NoV extended consensus assembly. The final alignments (SAM files) were used to calculate the coverage per genome position using BEDTools, version 2.14.3-1 [51]. NoV sequencing using Sanger's method The NoV strain from sample OU3 was sequenced using Sanger's method to compare results against those obtained with Illumina MiSeq. Nine pairs of primers (described in Additional file 2) were designed to retrieve overlapping PCR amplicons between~600 to 1100 bp long, spanning altogether all NoV ORFs. The RT and PCR reactions were performed as previously described [9]. PCR products were obtained for six out of nine pair of primers and were sequenced in both directions. 
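Returning to the iterative extension of the consensus ends described above (seeding with the terminal 15 nt, harvesting overhangs from matching reads, and repeating until no new bases are added), a toy Python sketch of the 3' side is given below. It searches plain read sequences only, ignores reverse complements and base qualities, and keeps the single longest overhang per round; it is an illustration of the idea, not the grep-based workflow actually used.

def extend_3prime(consensus: str, reads: list, seed_len: int = 15) -> str:
    """One round of 3' extension: find reads containing the terminal seed and
    append the longest overhang found (the 5' end would be handled symmetrically)."""
    seed = consensus[-seed_len:]
    best = ""
    for read in reads:
        idx = read.find(seed)
        if idx != -1:
            overhang = read[idx + seed_len:]
            if len(overhang) > len(best):
                best = overhang
    return consensus + best

# Repeat until no further bases are added, as described in the Methods.
consensus = "ACGT" * 10
reads = ["ACGTACGTACGTACGTTTTGGA"]  # toy read overlapping the 3' end
while True:
    extended = extend_3prime(consensus, reads)
    if len(extended) == len(consensus):
        break
    consensus = extended
print(consensus)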
The assembly of the sequences produced two non-overlapping contigs: a 3,448 bp sequence containing a partial ORF1 (incomplete at the 5' and 3' ends) and a 1,906 bp sequence spanning ORF2 and ORF3 (incomplete at the 5' and 3' ends, respectively).
Characterization of non-rRNA, non-NoV sequences
The sequences failing to align with Bowtie 2 to the final NoV consensus sequence were analyzed with BLAST to further characterize the major components of stool RNA. All reads from a single end were queried against the non-redundant nt database using megablast (standalone version with databases downloaded on July 24, 2015; see parameters of the BLAST analysis in Additional file 2). Results were analyzed with SPSS after removing duplicates, i.e. if a read had more than one BLAST hit, then only the hit with the lowest e-value was included in the analysis. Bacterial hits belonging to the normal human gut were identified using as reference the microorganisms reported in previous studies [52,53].
Analysis of single nucleotide variants
Single nucleotide variants (SNVs) were called with FreeBayes [54] and visually inspected in Tablet version 1.14.04.10 [55]. The following criteria were used for SNV calling with FreeBayes: -K (report all alleles passing filters), --haplotype-length = 1 (call haplotypes as 1 nt long), -m or mapping quality = 10 (chance that the read truly originated elsewhere of 1 in 10), -q or base quality = 20 (chance of a wrong base call of 1 in 100), -F or alternate fraction = 0.02 (call SNVs with frequencies ≥ 2 %), --min-coverage = 10 (call SNVs for positions with coverage ≥ 10X) and -C or min-alternate-count = 5 (call SNVs supported by at least 5 reads) (the command used for the analysis is described in Additional file 2). The choice to set up the analysis to detect variants with frequencies ≥ 2 % was made based on a study reporting that sequencing errors with the MiSeq platform can produce false variants that are indistinguishable from true low-frequency variants at ≤ 1 % with a 1000X average coverage [56]. We also set up the analysis to call SNVs supported by at least 5 reads based on: 1) a study that eliminated virtually all false positives by calling variants if counted ≥ 10 times independently [57] and 2) the lower coverage per genome position achieved with our samples compared to Van den Hoecke et al. [57].
Additional files
Additional file 1: Major components of stool RNA after bacterial rRNA depletion. This file provides a description of the rRNA and non-NoV, non-rRNA reads from each sample. (DOCX 1108 kb)
Additional file 2: Various supporting data. This file contains: 1) the data used to estimate the correlation between percentage of NoV reads and Ct values, 2) the commands used for NGS analysis, 3) a summary of the sequences filtered out by Prinseq-lite due to quality control and 4) a description of the primers designed for sequencing sample OU3 via Sanger's method. (DOCX 86 kb)
Additional file 3: NoV SNV calling results. This file lists all synonymous and non-synonymous NoV SNVs detected by FreeBayes in samples SP1, SP2, and a subset of sequences from SP1 that produced an average NoV coverage similar to that observed in SP2. (XLSX 38 kb)
Making Homes More Dementia-Friendly through the Use of Aids and Adaptations The majority of people with dementia live in their own homes, often supported by a family member. While this is the preferred option for most, they often face multiple challenges due to a deterioration in their physical and cognitive abilities. This paper reports on a pilot study that aimed to explore the impacts of aids and adaptations on the wellbeing of people with dementia and their families living at home. Quantitative data were collected using established measures of wellbeing at baseline, 3 months and 9 months. In-depth case studies were carried out with a sample of participants. Findings from the pilot suggest that relatively inexpensive aids can contribute towards the maintenance of wellbeing for people with dementia in domestic settings. The project also increased the skills and confidence of professionals involved in the project and strengthened partnerships between the collaborating organisations across health, housing and social care. Providing aids that can help people with dementia to remain living at home with a good quality of life, often with the support of a family member, is an important element in the development of age-friendly communities. Introduction The profile of ageing is changing. In 2017, the global population over the age of 60 numbered 962 million, rising from 382 million in 1980. The number of adults over the age of 80 has tripled and older adults are set to outnumber young people under 10 years old by 2030 [1]. In the UK, health and social care services are supporting increasing numbers of people over the age of 65. This trend is set to continue in coming years, with over half of local authorities expecting to see 25% of their population to be over the age of 65 by 2036 [2]. Ninety-six percent of older people live in mainstream, un-adapted housing as owner occupiers [3]. However, this is a population which is paving the way for change. The growing number of older people represent an influential body who voice higher expectations for living in communities which are more responsive to their needs and 'age-friendly', yet 'the places in which older people experience ageing have often proved to be hostile and challenging environments' [4]. One response to population ageing at international and national levels is the development of age-friendly communities, based on the premise that 'physical and social environments are key determinants of whether people can remain healthy, independent and autonomous long into their old age' [5]. Ageing is often accompanied by challenges to physical and cognitive wellbeing. In recent years the UK government has prioritised an agenda to support people to live well with dementia [6], including an aspiration for communities to become dementia-friendly [7]. There are currently an estimated 850,000 people living with dementia in the UK, a figure which is projected to increase to over 1 million by 2025 [8]. This picture is replicated globally where the number of people living with dementia is estimated to be in the region of 36 million, doubling by 2030 and projected to be more than tripled by 2050 [9]. Dementia is a complex and multi-faceted condition which impacts each individual differently, resulting in a range of symptoms which can limit a person's ability to function independently. 
Memory loss is a common symptom of dementia, but the condition brings other challenges, such as compromised visual and spatial awareness, difficulty with object recognition, challenges in seeing colour and colour contrast, greater need for increased light levels and challenges with orientation to space and time. For people with long-term degenerative conditions such as dementia, living well in their own homes can be a challenge and moving to long-term care is often seen as the only option. However, the projected increase in this population places substantial financial burdens on society, so that the traditional expectation of supporting people in long-term residential settings is no longer viable. Additionally, with greater diversity within the housing and care markets, residential care is now just one option alongside a range of models including sheltered housing, extra care housing and remaining in one's own home with additional support. There is growing evidence to suggest the importance of the physical environment in enabling older people to attain their full potential [10], sometimes known as 'ageing in place'. This is also recognised within established theoretical approaches such as the environmental press model, which focuses on the fit between the environment and an individual's physical and cognitive capacities [11]. Eighty-five percent of people in the UK say they would prefer to remain living in their own homes if they received a diagnosis of dementia [12]. An estimated two thirds of people with dementia live in their own homes, and of this population one third live alone and one third live in housing with care [8]. This brings with it additional difficulties including a greater risk of social isolation and loneliness. Research conducted in the UK by the Alzheimer's Society found that 62% of people with dementia who live alone felt lonely, compared to 38% of all people with dementia [13]. For people living with dementia, the symptoms they experience can have a significant impact on their confidence and ability to continue to lead an independent and full life, yet remaining in a familiar environment with the right assistance can often be beneficial. Based on data from the English Housing Survey [14], there are at least 475,000 households in England lived in by adults aged over 65 with a disability or long-term limiting illness, many of whom report that they lack the home adaptations they need [15]. There is good evidence that minor aids and adaptations can improve a range of outcomes for older people and help them remain at home for longer. In addition to increased levels of confidence and autonomy, aids and adaptations can reduce hospital admissions for avoidable conditions such as falls and urinary tract infections, which remain some of the most common reasons for hospital admissions among the elderly [16]. However, there is little evidence in relation to the value of aids for people living with dementia in their own homes [15]. This paper adds to the body of knowledge in understanding the importance of aids and adaptations in the home from a UK perspective. It demonstrates that for people with dementia at the early stages of their journey, minor aids and adaptations can have significant benefit for helping to improve quality of life and supporting living well at home. 'People with a dementia have the right to live life . . . as they did before their diagnosis . . . 
to live in their home, in the neighbourhood they know and perhaps surrounded by friends and caring neighbours' [17]. Materials and Methods A study providing aids and adaptations to people living with dementia in their own homes was piloted in Worcestershire, a county in the West Midlands region of the UK, for a 12 month period during 2017-2018. Worcestershire has approximately 588,000 residents with 3.9% having a diagnosis of dementia, a figure which is slightly lower than the national average. Known as the Dementia Dwelling Grant (DDG), the pilot study built on an existing service through which people with a dementia diagnosis were allocated a dementia advisor (DA). Assessment for the DDG was carried out by the DAs, an approach that it was hoped would minimise disruption and anxiety for the people living with dementia and their families. While dementia is associated with older age, it was felt that the potential benefits of the DDG should be made available to anyone referred to the DA service, regardless of their age. The DDG was not means-tested and was available to people with a clinical diagnosis of dementia who were living at home. The DDG pilot did not provide a monetary grant but instead offered a range of small-scale aids and home adaptations that were believed to benefit people living with dementia, and that were not available through other programs. Where necessary, these were delivered and installed by the established handyperson service. The list of aids and adaptations was informed by research and best practice in dementia-friendly design. It included items for use around the home including key locators and clocks, and those for specific areas, such as touch bedside lights and bath mats. A research team at the University of Worcester was commissioned to carry out an evaluation of the pilot, with the broad aim of exploring the impacts of the aids and adaptations that were provided on the wellbeing of recipients. Two paper-based forms were developed by the research team in consultation with the local authority administering the project, to capture information from people living with dementia who consented to participate in the study. The first, an assessment form, captured basic demographic data as well as information on which aids and adaptations were to be provided with the grant. The second form comprised a series of validated measures to assess aspects of the grant recipients' health and wellbeing. This form was completed as part of the baseline assessment and repeated after three and nine months to capture the impact of the DDG intervention over time. The measures were taken from the UK Office for National Statistics 'People, Population and Community' (UKPPC) survey [18] and the Short Warwick Edinburgh Mental Wellbeing Scale (SWEMWBS) [19]. General wellbeing was measured using four questions that assess quality of life on a scale from 0 (not at all) to 10 (completely). The SWEMWBS tool asks respondents to describe their experience over the past two weeks in relation to seven statements on a five-point scale from 'none of the time' to 'all of the time'. In addition to the individual statements, composite SWEMWBS scores can be generated on a scale from 7 to 35, with higher scores indicating greater mental wellbeing. The information captured by the assessment form was analysed to provide descriptive statistics about the evaluation participants, while the validated measures in the evaluation form were analysed according to the relevant process for each individual measure. 
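For illustration, a raw composite score on the 7-35 scale described above can be computed as in the minimal Python sketch below. This is a sketch under stated assumptions only: it prorates missing items by the mean of the answered items (an assumption, since the evaluation only states that at least five of the seven items had to be answered for a valid score) and does not apply the published SWEMWBS metric conversion.

def composite_swemwbs(responses, min_answered=5):
    """Raw composite on the 7-35 scale: seven items scored 1-5.

    `responses` is a list of 7 item scores (1-5), with None for missing answers.
    The >=5-answered validity rule mirrors the evaluation's description; the
    prorating of missing items is an assumption made for this illustration.
    """
    answered = [r for r in responses if r is not None]
    if len(answered) < min_answered:
        return None  # too few items answered for a valid score
    mean_item = sum(answered) / len(answered)
    return round(mean_item * 7, 1)

print(composite_swemwbs([4, 3, 4, 5, 3, None, 4]))  # example respondent: 26.8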
Where possible, findings were compared between baseline, 3 month follow up and 9 month follow up to investigate the longer-term experiences and impacts of the aids and adaptations for intervention participants. The results were analysed to see if any significant changes had taken place between the different time points, and any significance will be highlighted in the results. In the absence of a control group, comparator data were obtained from the UK Office of National Statistics to enable the DDG information to be viewed within a wider context. In addition, a purposeful sample of 15-20% of grant recipients who had completed a three-month evaluation were chosen as case studies. The sample aimed to mirror the wider group of DDG recipients by including participants with a variety of dementia diagnoses, ages, living situations, and types of aids required. The case studies used semi-structured interviews conducted in a person's home to explore which aids and adaptations had been of most benefit, and if any additional aids or adaptations would be useful and might be made available and included in future grants. Finally, towards the end of the pilot, research interviews were carried out with key project stakeholders to discuss how the project was developed and implemented and to explore the main benefits, facilitators and barriers. The interviews with grant recipients and project stakeholders were transcribed and analysed for key themes. Participants and Interventions In this pilot project, 510 people were assessed for the DDG by the dementia advisors. Of these, 382 (75%) received a DDG, with 101 (26%) of these consenting to be part of the full evaluation. The majority of referrals (60%) came from the Early Intervention Dementia Service, with 14% unknown, and 13% from the Community Mental Health Team. The remainder were from families, self-referral and family doctors. The age range of those receiving a DDG was 36 to 98 years with an average (mean) of 80 years old. Fifty-five percent were female and 97% were White British. This profile closely reflects the local population. Sixty-two percent of DDG recipients were married, with the majority of the remainder being widowed. Although those consenting to be evaluation participants were slightly younger than those who did not give consent (mean age 78 compared to 81), their overall demographics were very similar to the whole group of DDG recipients. Among the evaluation participants, Alzheimer's disease was the most common dementia diagnosis (40%) followed by vascular dementia (22%) and mixed dementia (21%). Fifty-four percent had at least one other medical condition, with arthritis, diabetes, mobility issues, frailty and heart conditions being the most common. Ninety-five percent had at least one carer, with 80% living with their carer. This person was most commonly a partner or spouse, followed by a son or daughter. Eighty-six percent of the evaluation cohort were owner occupiers, with 64% living in a house and 23% in a bungalow. Ages of the 13 case study participants ranged from 55 to 92 years, with an average of 80. Nine were female and four were male. Five had Alzheimer's disease, four had mixed dementia, two had vascular dementia, one had Lewy-bodies and one had fronto-temporal dementia. Ten case study participants lived with their spouse with three recipients living alone supported by carers or family. All individuals in the evaluation cohort requested at least one item; 12 items were the maximum requested by an individual. 
The five most popular items requested were a dementia clock (two types were offered: a day/night clock and a digital 12/24 h clock), noticeboard/white board, touch-activated beside light, key locator and memo minder. The average number of items required by customers was five (four different types of item) at a cost of £138. This cost does not include additional costs, such as the time of a dementia advisor to undertake the assessment or the time of the handyperson to deliver and install items. The Wellbeing of Participants General wellbeing was measured at baseline, at 3 months and at 9 months as shown in Table 1. Comparator data from the UKPPC survey [19] show slightly lower levels of general wellbeing for DDG participants at baseline than for the wider population in relation to items 1 to 3. Scores for item 4 indicate levels of anxiety that are considerably higher than those for the wider population. Table 1. General wellbeing scores for intervention participants and the UK population. Percentages for items 1, 2 and 3 refer to respondents who scored 9 or 10 on a scale from 0 (not at all) to 10 (completely). Percentages for item 4 refer to respondents who scored 1 or 2 on the same scale. DDG: Dementia Dwelling Grant. Two further items taken from the UKPPC survey were used to measure satisfaction with health and satisfaction with accommodation, using a seven-point scale from 'completely dissatisfied' to 'completely satisfied'. The findings shown in Table 2 indicate higher levels of satisfaction with their health and accommodation for those receiving the intervention than for the wider UK population. Wellbeing The final measure of general wellbeing asked participants to answer the question 'How often do you feel lonely' on a five-point scale from 'often/always' to 'never'. A high proportion (14.8%) responded 'often/always' compared with 4.1% of the wider UK population. Mental wellbeing was measured using SWEMWBS [19]. Responses were largely positive for each item as shown in Figure 1, with the majority of respondents selecting at least 'some of the time'. Composite SWEMWBS scores were generated for the 77 participants who responded to at least five of the seven items and so would have a valid score. This gave a mean score of 23.6 for the DDG group compared with 24.6 for the wider population. Mental wellbeing was measured using SWEMWBS [19]. Responses were largely positive for each item as shown in Figure 1, with the majority of respondents selecting at least 'some of the time'. Composite SWEMWBS scores were generated for the 77 participants who responded to at least five of the seven items and so would have a valid score. This gave a mean score of 23.6 for the DDG group compared with 24.6 for the wider population. Due to the timings of the baseline assessments and the ongoing nature of referrals to the DA service, it was only possible to carry out 80 of the 3 month follow up assessments during the evaluation, with 73 participants still living at home and being able to complete the assessment process. Mean scores for satisfaction with life, feeling worthwhile and happiness had improved slightly for the 73 participants, while remaining lower than the national average. Similarly, anxiety levels decreased for the participants but were still substantially higher than the wider population. At three months there was little or no change in 'satisfaction with health' and 'satisfaction with accommodation' compared to baseline. 
There was also no significant change in the composite SWEMWBS scores, although they had declined slightly. However, there was a reduction in levels of loneliness, with 10.6% of respondents reporting that they felt lonely 'often' or 'always' compared with 14.9% at baseline. This improvement was not statistically significant. Nine-month assessments were completed with 36 participants, with the reduction in numbers again being closely linked to the timing of the baseline assessment in relation to the lifetime of the study. In terms of general wellbeing, there was a slight decline in the mean response for 'satisfaction with life' and 'feeling worthwhile' between baseline and 9 months, and a slight improvement for 'anxiety', while 'happiness' was unchanged, as shown in Table 3. The reduction in loneliness that was seen at 3 months continued at 9 months, with fewer participants reporting that they were lonely 'often', 'always' or 'some of the time'. Due to the timings of the baseline assessments and the ongoing nature of referrals to the DA service, it was only possible to carry out 80 of the 3 month follow up assessments during the evaluation, with 73 participants still living at home and being able to complete the assessment process. Mean scores for satisfaction with life, feeling worthwhile and happiness had improved slightly for the 73 participants, while remaining lower than the national average. Similarly, anxiety levels decreased for the participants but were still substantially higher than the wider population. At three months there was little or no change in 'satisfaction with health' and 'satisfaction with accommodation' compared to baseline. There was also no significant change in the composite SWEMWBS scores, although they had declined slightly. However, there was a reduction in levels of loneliness, with 10.6% of respondents reporting that they felt lonely 'often' or 'always' compared with 14.9% at baseline. This improvement was not statistically significant. Nine-month assessments were completed with 36 participants, with the reduction in numbers again being closely linked to the timing of the baseline assessment in relation to the lifetime of the study. In terms of general wellbeing, there was a slight decline in the mean response for 'satisfaction with life' and 'feeling worthwhile' between baseline and 9 months, and a slight improvement for 'anxiety', while 'happiness' was unchanged, as shown in Table 3. The reduction in loneliness that was seen at 3 months continued at 9 months, with fewer participants reporting that they were lonely 'often', 'always' or 'some of the time'. Overall there was a slight decline in terms of composite wellbeing scores from baseline to 9 months. Participants also reported greater satisfaction with their accommodation, with 94% being 'completely satisfied' at nine months compared with 71% at baseline. Levels of satisfaction with health and accommodation remained higher than the UK average at 9 months. The data only allowed calculation of a composite SWEMWBS score for ten participants at the 9 month follow up. For these, the average score increased marginally from the 3 month figure, while remaining slightly below the UK average. As for the 3 month assessments, no statistically significant changes were seen at the 9 month follow up. Case Study Themes While participants were on the whole very pleased with the aids they had received, they appeared to have had little involvement in choosing them. 
Most had products chosen for them either by the dementia advisor or by their spouse. The items reported as being of most use were whiteboards, lights/lamps and clocks. Whiteboards were most commonly fitted in the kitchen area and used to remind participants about appointments and events, although some were kept in the lounge to remind them of immediate tasks. One participant described how she used the whiteboard to plan her week and maximise her independence: "I write everything on there. I put everything that we are going to do through the week. I write it all down so that I don't have to keep saying 'what are we doing' all the time. When we have done something, I immediately rub it off because I know that's done. And it makes me think as well, I like that." (Marjorie). Her husband added that initially she was writing everything haphazardly on the board and it became confusing for her. He divided the board into days of the week and found that this provided an excellent way to enable Marjorie to note, and anticipate, events for the forthcoming week. Several participants found lights and touch lamps to be the most beneficial aids. Some had chosen battery operated as opposed to plug in lights; some had chosen motion sensitive lights whereas others could be switched on and off manually. The lights appear to have helped with orientation, preventing injury and maintaining continence: "The best thing for me is the light, we've got it on top of the landing and it comes on by movement so in the middle of the night when either of us goes to the loo, it comes on. We sleep with our bedroom door open and I've only got to move my blanket and it comes on." (Peggy). "Before we had them, I meant to switch on the switch by the door, but I missed it and I cut my finger all down there because there was no light." (Joan). Several participants were provided with multiple aids through the grant program. For example, Nancy and her daughter who was her main carer had chosen a GPS tracker, a large button telephone, a memo-minder, a touch lamp, a red toilet seat, a white board, a key locator and new signage. She particularly liked the big button telephone, which allows speed dialling by using large buttons at the top of the display: "We haven't put pictures on it . . . we have just put (son's name) press to call and (daughter's name) press to call. I think it's good to put 'press to call' rather than just a photograph because if it's just a face you don't know that's going to call." However, Nancy viewed the new signage as intrusive and unnecessary: "No, I don't like that . . . because I don't need a blooming thing like that . . . I just go out of there and into there." Other participants also described the limitations of specific aids that were provided. For example, Florence's husband talked about the memo-minder that was fitted adjacent to the front door and played a message to remind his wife to close the door properly or to take her keys if she left the house. He felt that the device was 'too sensitive' and had become a nuisance: "I've recorded various messages. The one at the moment says 'Florence, don't forget to close the door properly' because sometimes she doesn't latch it properly and lock it, 'and if you go out, don't forget your key'. 
Now that's been on but it did get on our nerves a bit so what we've started to do is for me to only switch it on when I go out and I don't go out that often, just one night a week when I play squash, and I like to switch it on then but sometimes I forget and that's the disadvantage of that method . . . it's easy to go out and forget to switch it on. It could be useful but if you open the door to anyone it goes off." Other problems that were reported included someone who found it difficult to understand the digital clock when it was set to 24 h time mode. They had been unable to find out how to change the function and settings of the clock. Stakeholder Perspectives Stakeholders identified a wide range of benefits arising from the DDG pilot. For example, the aids provided were thought to offer crucial support after a diagnosis of dementia, as well as a way to promote continued independence: "You've got to keep them using it, you've got to keep them stimulated. And some of this equipment does just that, they can tell their own time, they can tell what time of the day and night it is you know? They can see where they're going, they can look in a drawer, and know that it's the right one, because it's got a label on. Okay it's got a label on, but so what? At least it means that they're not going into the wrong drawer, becoming frustrated, and then giving up." The benefits for family carers of someone living at home with dementia were perceived to be equally important: "I think if we can benefit the carer and make life better, easier for the carer as well, to be able to care for that person, and stay well themselves, then yeah, absolutely, I don't think we should distinguish between the two, as such." Additional benefits were thought to arise from the highly collaborative nature of the pilot, putting the partners in a good position to deliver future initiatives: "Partnership working as well, has been really beneficial between obviously, the University, but also with Worcester City Council, and with Care and Repair (the local home improvement service), and our knowledge, as well, has increased in terms of what people need and want, to be able to manage their dementia, to be able to live at home as well." Finally, there were seen to be substantial benefits for some of the professionals involved in implementing the grant, in terms of their skills and confidence levels: "The more they (handyperson staff) went into people, they'd always visited people with dementia, 'cause they had mobility issues as well, but they actually hadn't thought about it from the dementia person's point of view, whereas actually fitting equipment and showing people how to work it, they got more of a feel for it, and their experience, and they became obviously more sensitive to the issues, and could also raise other issues that they were worried about." The flexibility that was allowed in terms of the list of aids and adaptations on offer was seen as an important feature of the grant: "I think, as a regular list, this one is fine, then we just say to people, if there's something outside the box, you let us know, and we will review, and if it's okay, and comes from a reputable source, we'll probably buy it, to be honest with you." 
Similarly, the lack of means-testing was viewed by all stakeholders interviewed as a key factor in the success of the pilot, largely due to the additional burden that means-testing would place on people with dementia and their families: "And yes, it means that we get stuff to people quicker, and they benefit from it quicker as well. It doesn't matter whether you've got the money or not, if you haven't got the capacity, and you've got a carer who's stretched to the limit, they really aren't going to go out and source these things, and bother with them. So, they will go without them. And, at that point in time, that person then will deteriorate and lose their independence, and I think, for the small cost that it is, because it's not a massive amount of cost, means-testing would be too much trouble, in reality." Discussion Findings from the pilot study reported in this paper suggest that relatively inexpensive aids were associated with increased overall wellbeing for people living with dementia in their own homes three months after receiving them. This should be considered in the context of an intervention group who were living with dementia and whose quality of life might be expected to deteriorate over a period of nine months. Levels of wellbeing for pilot study participants were lower than those for the wider population, particularly in relation to loneliness, which again is not unexpected given the widely reported challenges of living with dementia. However, it was more difficult to account for the fact that levels of satisfaction with both health and accommodation were higher at baseline for participants than for the wider population. Of particular note is the reduction in levels of loneliness amongst the people using the aids, which has been recognised as an important issue for older people generally [4] and those living with dementia in particular [13]. The picture was more mixed at nine months, with a slight deterioration in satisfaction with life and feeling worthwhile but an improvement in terms of anxiety and overall mental wellbeing. This may reflect the complexities of health and wellbeing for participants. For example, levels of co-morbidity were over 50%, which indicates the high levels of frailty experienced by people living with dementia. It also raises the possibility that the benefits reported from having the aids related not just to their dementia, but also to other conditions such as arthritis, diabetes and heart conditions. In addition, it is important to note that most participants received several aids, with one person having 12, which raises the possibility that different aids may be having different impacts for specific individuals. While this pilot study identified specific aids as being of most value to participants (dementia clock, notice board or white board, touch-activated bedside light, key safe), more research is required to explore the impacts of such items individually and in combination. The findings also demonstrate the key role played by family carers, usually a spouse, in supporting people with dementia in their own homes. This highlights the importance of providing aids, and other services, that can protect their wellbeing and enable them to continue in their role. The case study findings draw on the experiences of those receiving the aids to highlight the impact they had on quality of life for people with dementia and their families.
For example, the use of a whiteboard for planning weekly activities and tasks brought major benefits for one person with dementia and her husband. Similarly, touch-activated bedside lights made it easier for participants to get up at night and make their way to the bathroom. However, several participants experienced challenges when using the aids provided. One family carer described having to turn off the memo minder because it had an over-sensitive activation mechanism, while one person with dementia found the 24 h clock to be confusing. One unanticipated theme that emerged from this pilot study was the benefits experienced by the professionals involved, particularly increases in knowledge and confidence for working with people with dementia. Learning from the pilot study has informed the following key recommendations:
• It is important to maximise involvement of the person with dementia and their family in selecting the aids and equipment. This may involve walking around the house and identifying difficulties and potential solutions, e.g., dark areas in the house which may be improved with LED motion-sensitive lights. For the person with dementia, having ownership of these decisions will make it more likely that they will engage in the use of the items and understand their purpose.
• The value of future proofing should not be underestimated. There are many advantages to identifying items that could be useful in the future and which will help people retain their independence. This might include providing specific items that are not on the standard list but which grant recipients have identified as being useful.
• The scheme works most effectively with a relatively small list of 'stock' aids and adaptations. However, this can only be developed in response to feedback regarding what items are useful and popular.
• It is important to provide support beyond the provision of the aids and adaptations, for example, ensuring that recipients and their families are conversant with setting up devices such as changing 24 h digital display clocks to a 12 h setting. Additionally, it could include explaining that some items may be useful for supporting the grant recipient rather than for them to use themselves, e.g., a key safe for use by family or friends.
Conclusions In conclusion, the findings from the pilot study reported in this paper indicate that relatively small and inexpensive aids and equipment can make a positive difference to the lives of people living with dementia in their own homes. The benefits spanned three main areas: promoting independence and quality of life for people with dementia and their family carers; increasing the skills and confidence of professionals involved in the project; and strengthening partnerships between the collaborating organisations across health, housing and social care. During the pilot study, five aids were reported to be the most beneficial: dementia clock, noticeboard/white board, touch-activated bedside light, key locator and memo minder. While people earlier in their dementia 'journey' have the opportunity to become more familiar with the equipment, this should not prevent people with more advanced dementia from benefitting, particularly when a carer or family member can also become familiar with the items and their potential use.
Providing aids that can help people with dementia to remain living at home with a good quality of life, often with the support of a family member, should be considered as an important element in the development of age-friendly communities [5]. Following the positive findings from this evaluation, the grant scheme is continuing to be offered to people with a diagnosis of dementia living at home across Worcestershire. Author Contributions: S.E., S.W., J.B. and T.A. contributed towards conceptualisation, methodology, investigation, original draft preparation, review and editing. S.E. and S.W. contributed towards project administration and funding acquisition. Funding: This research was funded by the six district councils in Worcestershire: Bromsgrove, Malvern Hills, Redditch, Worcester, Wychavon and Wyre Forest.
v3-fos-license
2018-12-28T04:36:32.530Z
2016-10-12T00:00:00.000
59405133
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.intechopen.com/citation-pdf-url/51670", "pdf_hash": "c4864d44393c8598b20585a00dc0812261731a81", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:246", "s2fieldsofstudy": [ "Medicine" ], "sha1": "cd3af95837cd622706f01cb6ce62b73395748dc3", "year": 2016 }
pes2o/s2orc
Assessment and Management of Older People in the General Hospital Setting Worldwide, populations are ageing. Older people, particularly centenarians, represent the fastest growing sector and are counted as a success of society. But not everyone ages successfully and enjoys good health. Many older people have multiple long-term medical, physical, mental, psychological and social problems. This can result in reduced quality of life, higher costs and poorer health outcomes, including increased mortality. Chronic diseases are associated with disability and low self-reported general health. In addition, physiological changes of ageing and the consequent loss of functional reserve of the organ systems lead to increased physical disability and dependency. Therefore, geriatric medicine could warrant a more holistic approach than general adult medicine. Nearly two-thirds of people admitted to hospital are over 65 years old and an increasing number are frail or have a diagnosis of dementia [1]. Our current training not only generates a relatively low number of geriatricians but there also remains a huge need for better staff training and support to provide safe, holistic and dignified care. The cornerstone of modern geriatric medicine is the comprehensive geriatric assessment (CGA). This is defined as a multidimensional, interdisciplinary diagnostic process that aims to determine a frail older person's medical conditions, mental health, functional capability and social circumstances in order to develop a coordinated and integrated plan for treatment, rehabilitation and long-term follow-up [2]. All older people admitted to hospital with an acute medical illness, geriatric syndromes including falls, incontinence, delirium or immobility, unexplained functional dependency or need for rehabilitation warrant CGA. CGA could screen for treatable illnesses, establish the key diagnosis leading to hospital admission and formulate a rational therapeutic plan, thus resulting in improved outcomes. This chapter starts with an introduction to the ageing nation and the impact of ageing on hospitals. This will be followed by discussing physiological changes of ageing and the various components of multidisciplinary assessment for older people admitted to hospital with an acute illness that could lead to high-level holistic care. It also covers a wide range of issues and challenges which medical teams/multidisciplinary teams often come across during routine care of acutely unwell older people. The chapter concludes with a literature review on current evidence on the effectiveness of CGA and recommendations to enhance clinical care. Introduction Older people attending the emergency department (ED) or acute medical units (AMU) often have more complex needs due to multiple co-morbidities, physical limitations, increased functional dependence and complex psychosocial issues.
Thus, they are more vulnerable and could easily decompensate with minor stressors, resulting in increased frailty. There are established detrimental effects of hospitalisation on older adults and about 17% of older medical patients who were independently mobile 2 weeks prior to hospital admission required assistance to walk at hospital discharge [3,4]. Therefore, to improve outcomes for frail older people with multiple co-morbidities and an acute illness, admission should be to an Emergency Frailty Unit (EFU), a separate unit within an AMU but led by a geriatrician and the multidisciplinary team (MDT) to provide comprehensive person-centred care. The clinical assessment of frail older people is challenging, as they often have multiple comorbidities and diminished functional and physiological reserves. In addition, the physical illness or adverse effects of drugs are more pronounced resulting in atypical presentation, cognitive decline, delirium or inability to manage routine activities of daily living (ADLs) [5]. Among the potential adverse outcomes for frail older inpatients, are the risks of continued deterioration as a consequence of medical complications such as pressure sores, hospitalacquired infections or functional decline. This can also lead to long-term increased dependency, institutionalisation and death. Impact of ageing on hospitals Hospitals face a rising demand from an increasing number of acute emergency admissions of people aged 65 years and above with multiple co-morbidities and psychosocial problems. The admission rates for people over 65 years are three times higher than for people aged 16-64 years. Older patients cannot always be transferred quickly from the hospital after acute illness and on average hospital length of stay (LoS) is significantly higher than for under 65 years [6]. The older people occupy around two-thirds of acute hospital beds and emergency admissions have been rising for several years [7]. The healthcare cost and the proportion of hospital bed days used by older people are likely to increase further due to ageing population [8]. Physiological changes of ageing The normal physiological changes occur with ageing in all organ systems ( Table 1) and this has implications for the clinical assessment of older people [9][10][11]. Therefore, it is essential to be aware of these changes as these have an impact on drug metabolism and pharmacodynam-ics. In addition to comprehensive geriatric assessment (CGA), these changes can be delayed or reversed with appropriate diet, exercise and medical intervention. Assessments of older people in hospital The holistic assessment of older people is best achieved by the MDT. The MDT members include doctors, nurses, physiotherapist (PT), occupational therapist (OT), dietician, clinical pharmacist, social worker (SW), specialist nurses (e.g. tissue viability nurse and Parkinson's disease nurse specialist), hospital discharge liaison team and carers. Input from a clinical psychologist or old age psychiatrist may be needed depending on individual patients' needs. All members engage with patients and carers to complete their assessments and intervention, followed by multidisciplinary meeting (MDM) to formulate ongoing care plan and follow-up. Medical assessment The medical assessment begins at the time of admission to an AMU or an EFU with the appropriate investigations and thus establishing the relevant diagnosis. 
In addition to treating acute illness, there must be an attempt to optimise the symptoms and treatment of chronic diseases [12]. The common medical diseases among older people are listed in Table 2. A carer or a relative usually accompanies an older patient to the hospital, and a short conversation with them can rapidly reveal the diagnosis and direct ongoing management. Acute medical illness Older people admitted to the hospital with an acute illness often have a non-specific presentation, which can obscure the serious underlying pathology or medical diagnosis. For example, acute bowel infarction in older people may not present with typical abdominal pain or tenderness, and bacterial meningitis may lack typical signs of meningism. The atypical presentation in older people could be one or a combination of 'feeling unwell', 'inability to cope', 'off-legs', 'fall', 'confusion', 'dizziness', 'incontinence', 'weight loss', etc. The atypical presentation, together with possible background sensory impairment, lack of collateral history, polypharmacy and a high prevalence of cognitive deficits, limits good clinical assessment. 'Feeling unwell' or 'inability to cope' could be a presentation of an acute infection, exacerbation of underlying chronic disease (e.g. chronic heart failure), a drug side effect (e.g. constipation) or dehydration. However, this could also be due to underlying malignancy; therefore, such a presentation warrants good clinical examination and appropriate investigations. Worldwide, falls are the second most common cause of unintentional injury and death. A non-accidental fall is a complex system failure in the human organ system, where a person comes to rest on the ground from a standing or a sitting height, unintentionally and with no associated loss of consciousness [13]. The prevalence of falls increases with age, and the oldest old are at highest risk. One-third of older adults over 65 years and half of older people above 80 years could experience one fall in a year [14,15]. Falls are most common in institutionalised older people [16] and half of the fallers will fall again within a year [17]. Older people at high risk of falls are sometimes admitted to the hospital to avoid future falls but, in reality, hospitals are associated with a higher risk of falling due to several new risk factors such as an unfamiliar environment, increased risk of delirium, high beds, single rooms and so on [18,19]. Falls are associated with a threefold increased risk of future falls, fear of falling, prolonged hospital stay, functional decline, increased dependency, institutionalisation, increased expenditure, morbidity and mortality [20,21]. Falls result in injury (4%) and fragility hip fracture (1%), following which up to 10% of people will die within a month and a third will die during the following year [22]. The evaluation of falls begins by distinguishing them from brief sudden loss of consciousness (syncope). This can be challenging in certain cases, but every effort should be made. Falls cannot simply be related to an underlying medical or neurological disorder, as falls are usually multifactorial, involving a wide range of intrinsic and extrinsic factors. The most common factors leading to falls in neurological patients are disorders of gait and balance (55%), epileptic seizures (12%), syncope (10%), stroke (7%) and dementia.
Falls have particularly being linked to Parkinson's disease (62%), polyneuropathy (48%), epilepsy (41%), spinal disorders (41%), motor neuron disease (33%), multiple sclerosis (31%), psychogenic disorders (29%), stroke (22%) and patients with a pain syndrome (21%) [16]. Dementia is associated with impaired mobility and is an independent risk factor for falling [23]. People who present with a fall or report recurrent falls in the past year or demonstrate abnormalities of gait and/or balance should have multifactorial, multidisciplinary assessment for falls, risk factors, perceived functional abilities and fear of falling. In addition, bone health and history of previous fragility fractures should be explored [24]. 'Delirium' is a common syndrome affecting older people admitted to AMU or EFU. It is a serious acute problem which has been best understood as an 'acute brain dysfunction' or an 'acute confusional state' characterised by a rapid onset of symptoms, fluctuating course and an altered level of consciousness, global disturbance of cognition or perceptual abnormalities. The Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) defines delirium as 'a disturbance of consciousness that is accompanied by a change in cognition that cannot be better accounted for by a pre-existing or evolving dementia' [25]. The diagnosis of delirium is based on clinical observations, cognitive assessment, physical and neurological examination. Despite the common problem, delirium remains a major challenge and often under-diagnosed and poorly managed. Clinically, delirium can be divided into hyperactive, hypoactive or mixed forms, based on psychomotor behaviour. The Confusion Assessment Method (CAM) supports a diagnosis of delirium if there is a history of acute onset of confusion with a fluctuating course and inattention in the presence of either disorganised thinking and/or altered level of consciousness [26]. Collateral history from the family member or carers is helpful to detect a recent change in cognition. Delirium usually occurs as a result of complex interactions among multiple risk factors such as cognitive impairment, Parkinson's disease, stroke, poor mobility, history of previous delirium, hearing or visual impairment, malnutrition or depression. It is often precipitated in the hospital setting due to acute medical illnesses including infection, acute coronary syndrome, bowel ischaemia, surgical disorder, polypharmacy, pain, dehydration, electrolyte imbalance, new environment, sleep deprivation, constipation, hypoxia, use of restraints or indwelling catheters. Delirium, if not recognised early and managed appropriately, can result in poor outcomes, including prolonged hospital stay, increased functional dependence, institutionalisation, a risk of developing dementia, increased inpatient and post-discharge mortality [27][28][29]. Therefore, an older person admitted to hospital with confusion should be promptly assessed for delirium to improve clinical outcomes. The optimal assessment should be completed to identify underlying modifiable risk factors and treating precipitating factors, followed by reorientation and restoration of cognitive functions using non-pharmacological strategies including carer support and education, good communication among MDT and appropriate follow-up. The pharmacological drugs including haloperidol or risperidone should be used to manage severe agitation or behavioural disturbance. 
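A minimal sketch of the CAM decision rule described above is given below (Python). The feature names are illustrative assumptions; in practice each feature is rated from structured bedside observation and collateral history rather than passed in as booleans, and a positive CAM supports rather than replaces clinical diagnosis.

```python
from dataclasses import dataclass

@dataclass
class CamFeatures:
    """The four CAM features, each rated at the bedside as present or absent."""
    acute_onset_and_fluctuating_course: bool  # feature 1
    inattention: bool                         # feature 2
    disorganised_thinking: bool               # feature 3
    altered_level_of_consciousness: bool      # feature 4

def cam_supports_delirium(f: CamFeatures) -> bool:
    """CAM supports a diagnosis of delirium when features 1 and 2 are present
    together with either feature 3 or feature 4."""
    return (f.acute_onset_and_fluctuating_course
            and f.inattention
            and (f.disorganised_thinking or f.altered_level_of_consciousness))
```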
'Dementia' is often recognised for the first time as an incidental condition when people are admitted to an acute hospital for another reason. More than one-third of acute medical admissions (42.4%) for over 70s have been reported to have dementia and only half of which were diagnosed prior to admission [30]. However, dementia can be misdiagnosed as an acute illness and can be accompanied by reversible cognitive decline. In addition, older people with known dementia who present with an altered mental state can be mislabelled as having progressed to another stage of dementia missing undiagnosed delirium. Older people with cognitive impairment are at increased risk of falls [31] and are also more likely to die during hospitalisation, and increased severity of cognitive impairment is associated with higher odds of mortality (from 2.7 in those with moderate impairment to 4.2 in those with severe impairment) [32]. Therefore, older people in hospital settings should be carefully assessed for underlying cognition. Dementia is a chronic progressive brain disorder marked by a disturbance of more than two domains of brain functions for more than 6 months. The various cognitive deficits may include short-term memory loss, language-or word-finding difficulties, mood and personality changes, impaired reasoning, learning new skills, inability to concentrate, plan or solve problems, difficulty in taking decisions or completing a task, disorientation, visuospatial difficulties or problems with calculations. Dementia is the most appropriate diagnosis when two or more cognitive deficits have an impact on ADLs or social interaction, often associated with behavioural and psychological symptoms of dementia (BPSD) [33]. 'Frailty' is defined variably and there is no single generally accepted definition. Fried et al. [34] reported a clinical definition of frailty based on the presence of three or more frailty indicators: unintentional weight loss, slow walking speed, subjective exhaustion, low grip strength and low levels of physical activity. Frailty, based on these criteria, was predictive of poor outcome including institutionalisation and death [34]. Whereas Rockwood and Mitnitski [35] had advocated an alternative approach to frailty by considering frailty in relation to the accumulation of deficits with age, including medical, physical, functional, cognitive and nutritional problems. The frailty index expresses the number of deficits identified in an individual as a proportion of the total number of deficits considered. Higher values indicated a greater number of problems and hence greater frailty. For example, if 40 potential deficits were considered, and 10 were present in a given person, their frailty index would be 10/40 = 0.25 [36]. A valid index can be derived from the routine information collected on CGA [37][38][39]. Therefore, the presence of frailty on clinical judgement should prompt consideration to holistic assessment by MDT. Chronic co-morbidities Older people usually have more than five medical conditions and one pathological disorder in an organ, which can weaken another system. This results in increased disability, physical dependence, functional deterioration, isolation or even death. Long-term conditions in older people require very careful assessment and monitoring particularly whilst undergoing acute medical treatment in the hospital. 
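Returning to the deficit-accumulation frailty index described above, the calculation itself is a simple proportion; the short sketch below (Python) reproduces the worked example from the text (10 of 40 deficits giving 0.25). How the candidate deficits are chosen and rated is left unspecified here, as it depends on the CGA data available.

```python
def frailty_index(deficits_present: int, deficits_considered: int) -> float:
    """Frailty index = deficits identified in the individual / total deficits considered."""
    if deficits_considered <= 0:
        raise ValueError("at least one deficit must be considered")
    if not 0 <= deficits_present <= deficits_considered:
        raise ValueError("deficits present must lie between 0 and the number considered")
    return deficits_present / deficits_considered

assert frailty_index(10, 40) == 0.25  # the worked example quoted in the text
```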
Every older person admitted to MAU or EFU should have assessment of underlying chronic medical conditions, including ischaemic heart disease, heart failure, chronic respiratory diseases, chronic inflammatory and autoimmune problems. Modifiable cardiovascular risk factors such as diabetes mellitus, hypercholesterolemia, hypertension, obesity, excessive smoking or alcohol consumption should be reviewed and optimally addressed. Mental health assessment Many people with long-term physical health conditions also have mental health problems [40]. Mental health problems are common in older people, and 8-32% of patients admitted to acute hospitals were found to be depressed [41,42]. Depression is not a natural part of ageing but can be easily missed in older patients, thus resulting in adverse outcome including delayed recovery and suicide. It is often reversible with early recognition and prompt intervention. Delirium has been reported in 27% of older patients above 70 years [41]. The prevalence of dementia in acute hospitals was reported as 48% in men and 75% in women older than 90 years [30]. The current service models for the provision of mental health input in general medical care wards are variable. The prevailing view in the United Kingdom is that old age psychiatrists have the main responsibility for the diagnosis and management of dementia and other mental health problems. In many hospitals, both psychiatric and medical notes are not easily accessible and are mostly kept separately [43]. The National Service Framework (UK) for older people was published in 2001-standard seven aims to promote good mental health in older people and to treat and support those older people with dementia [44]. The liaison mental health services have not only shown improved clinical outcomes as measured by the length of hospital stay or discharge to original residence but also suggested cost effective models. However, concerns have been raised about the reliability and validity of the various studies included in this systematic review [45]. The hospital liaison multidisciplinary mental health team is the model advised in the United Kingdom to offer a general hospital a complete service. The Rapid Assessment Interface and Discharge (RAID) service model is an example in the United Kingdom where a psychiatry liaison service provides MDT input to acutely unwell older people with existing mental health admitted to hospital [46]. The RAID service has shown to be an effective, enhanced service model for older people who are at risk for dementia or other mental health problems and has shown good outcomes with quality improvements in the care of older people [46]. Collateral history from the family or carers remains the key feature for initial assessment. If dementia is suspected in a person, a full medical assessment must be completed, an example being the British Geriatrics Society's guidance on CGA of the frail older people [12]. Older people in the hospitals should be assessed for mood, anxiety and depression. The hospital anxiety and depression scale (HADS) is a simple, valid and reliable tool for use in hospital practice [47]. It is a self-assessment screening tool, which warrants further assessment based on abnormal scores. The score for the entire scale for emotional distress can range from 0 to 42, with higher scores indicating more distress. Score for each subscale (anxiety and depression) can range from 0 to 21 (normal 0-7, mild 8-10, moderate 11-14, severe 15-21) [48]. 
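The HADS cut-offs quoted above lend themselves to a simple banding rule, sketched below (Python). This is illustrative only and is not a substitute for the published scoring guidance or clinical judgement.

```python
def hads_subscale_band(score: int) -> str:
    """Band a single HADS subscale score (anxiety or depression, 0-21):
    normal 0-7, mild 8-10, moderate 11-14, severe 15-21."""
    if not 0 <= score <= 21:
        raise ValueError("each HADS subscale is scored from 0 to 21")
    if score <= 7:
        return "normal"
    if score <= 10:
        return "mild"
    if score <= 14:
        return "moderate"
    return "severe"

def hads_total_distress(anxiety: int, depression: int) -> int:
    """Total emotional distress is the sum of the two subscales (range 0-42)."""
    return anxiety + depression
```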
A short-form Geriatric Depression Scale (GDS) consisting of 15 questions can be used for depression [49]. Any positive score above 5 on the GDS short form should prompt a detailed assessment and evaluation. Generalised anxiety disorder (GAD) is the most common mental disorder encountered in older patients and is often accompanied by depression. It could be helpful to assess older person's emotional state and sense of well-being as they may report psychological burden of the disease, for example, fear of falling or fear of being in the hospital which is associated with loss of independence by older people. History of delusions and hallucinations or previous use of psychotropic drugs may suggest a mental health problem. Patient's permission should be sought before interviewing their relatives or carers for collateral history. Following initial suspicion or diagnosis of a mental health problem in older people, a more collaborative work between physicians and old age psychiatrists for the prompt diagnosis and management of mental health problems will improve outcome [46]. Drugs Drug prescribing increases with both age and incidence of co-morbidities [50,51]. Polypharmacy is defined as use of either five or more concurrent medications or, at least, one potentially inappropriate drug. Half of older people aged between 65 and 74 years and two-thirds of those aged 75 years and over are affected by polypharmacy including conventional and complementary medicines [52]. Polypharmacy is associated with adverse outcomes including hospital admissions, falls, delirium, cognitive impairment and mortality [53,54]. Although drugs have an important role in managing co-morbidities, it is not without harm and adverse outcomes [55]. There is conflicting evidence that psychotropic medications are associated with higher falls in people with dementia [56,57] though there is clear evidence that there is associated increased fall risk in cognitively intact people [58]. Other classes of drugs including Parkinson's disease drugs, anticonvulsants, steroids and fluoroquinolone can result in acute confusion [59]. Drug interactions could impair electrolytes, cause postural hypotension, hypothermia, gait disorder or gastrointestinal disturbance, resulting in prolonged hospital admission [55]. Therefore, all older inpatients should have drug review and withdrawal of any possible offending agent if practical would be logical. This can be based on screening tool of older persons' prescriptions (STOPP), and a tool to alert doctors to commence appropriate treatment (START) criteria should be used [60]. Patients should also be assessed for their ability to manage their drugs, understanding of drug, dexterity and vision. At the same time, appropriate new medications if deemed necessary and evidence-based should be commenced. Older people with cognitive impairment should be prescribed with greater care, adhering to the principle of‚ 'starting low and going slow' [61]. Physical performance Gait and balance are regulated by both central and peripheral nervous system; thus, various neurological disorders can result in postural instability and poor mobility. Balance system can be affected by the impact of neurological disease on postural responses, postural tone, sensory feedback, visuospatial disorder, executive dysfunction or delayed latencies. Gait disorders have been classified into lower (e.g. peripheral), middle (e.g. spinal, basal ganglia) and higher level gait disorders (e.g. frontal or psychogenic) [62]. 
The more pragmatic approach could be used to describe gait disorders including hypokinetic gait disorders, dystonic, hemi-or paraparetic gait, ataxia, vestibular, neuromuscular and psychogenic gait [62]. All components of gait including initiation of walking, step length, coordination, walking speed, symmetry, stride width, rhythm and posture should be assessed. Various tools/scales can be used for further assessment of gait and balance ( Table 3). Most physicians work closely with PT and rely on their assessment of patient's needs in relation to mobility, balance and posture. Multidimensional assessment and multiagency management of mobility in older people lead to better outcomes. Physical activity interventions for people with an intact cognition are well documented and shown to be effective in improving balance and reducing falls. People with dementia are two to three times more likely to fall [16] and risk is further increased in people with Lewy body dementia (LBD) and Parkinson's disease dementia (PDD) [23,68]. There is limited evidence showing significant gait and balance improvement following the targeted exercise programme in the community-dwelling older people with dementia [69]. More recently, it has been shown that supervise exercise training in people with dementia living in community could improve muscle strength and physical activity [70]. There is dearth of similar studies in the hospital setting and further research is required. A simple flexible home-based muscle strengthening and balance-training exercise programme along with medication could improve the physical performance in the older people. Functional status It is not uncommon for older people to be admitted to the hospital with functional deterioration or increased dependence, thus unable to cope. Older people admitted to the hospital with an acute medical problem, 'geriatric giants' [71,72], incontinence, immobility, postural instability (falls) and intellectual impairment (dementia) or who are frail with one or more disability should get an appropriate functional assessment. A typical geriatric assessment for such people should begin with a review of their functional status. This is usually captured in two commonly used functional status measurement-basic ADL and instrumental ADL (IADL). The ADL that is initially affected includes complex or IADLs such as shopping, handling finances, driving, cooking or using the telephone followed by basic ADL including bathing, dressing, toileting, transferring, continence or feeding. Whether patients can function independently or need some help is usually determined by OT, as part of the comprehensive geriatric assessment. OTs work closely with the physiotherapists to assess patient's own environmental and home status with the identification of appropriate equipment and its delivery before discharge. In addition to optimising functional independence, OT intervention also enhances home comfort, safe use of available facilities, safe access to transport or potential use of telehealth technology and local resources. The assessment of functional limitations is best completed by an interview with the person and caregiver with open-ended questions about their ability to perform activities. They can further be assessed by direct observation either in their usual place of residence or whilst performing a routine activity, for example, toilet use. The functional status can also be assessed using a standardised assessment instrument with questions about specific ADLs and IADLs. 
There are more than 15 validated scales to complete functional assessments including Katz index of independence in activities of daily living [73], the modified Blessed dementia scale [74], the instrumental activities of daily living scale [75], the Functional Assessment Questionnaire [76], Functional Assessment Staging Test [77], Barthel Activities of Daily Living Index Scale [78], Alzheimer's Disease Co-operative Study-Activities of Daily Living Inventory [79], Disability Assessment for Dementia [80] and Bristol activities of daily living [81]. The functional scales can detect early functional impairment and often help discriminate mild dementia in comparison to those with no cognitive impairment. The scales that assess complex social functional activities are better in detecting dementia compared to those scales that involve basic ADLs [82]. A good timely recognition of functional difficulties may arrest further decline, postponing the need for care-home placement. The functional assessment scales can only provide a guidance and these scales are commonly used to assess the treatment efficacy in scientific research studies. Continence assessment Urinary incontinence (UI) is defined by the International Continence Society as 'the complaint of any involuntary leakage of urine'. Older people may assume that UI is a normal consequence of ageing and often may not be reported. UI is a common problem and older people may feel embarrassed to discuss the problem and avoid evaluation. Incontinence is associated with social isolation, institutionalisation and medical complication including skin irritation, pressure sores, recurrent infections and falls. The prevalence of urinary incontinence depends on the age and gender; for older women, the estimated prevalence of urinary incontinence ranges from 17 to 55% (median = 35%, mean = 34%). In comparison, incontinence prevalence for older men ranges from 11 to 34% (median = 17%, mean = 22%) [83]. There is a strong association of faecal incontinence (FI) with age; FI increases from 2.6% in 20-29-year-old up to 15.3% in 70 years or above [84]. In hospital settings, UI can be an atypical presentation and is a risk factor for adverse outcomes. The aetiology of incontinence in older people is often multifactorial. People with cognitive impairment usually encounter UI and later FI. Older people often find it difficult and challenging to express the need of regular toilet use, and as dementia progresses, it could be difficult to identify toilet or use it appropriately. Incontinence and inability to use toilet independently can be frustrating and distressing, which may lead to psychological burden, isolation, immobility or institutionalisation. Therefore, a good continence assessment should be an essential component for any older people admitted to hospital to ensure good-quality person-centred care, promoting independent living. Assessment of precipitating factors and identification of treatable, potentially reversible conditions are essential steps. Continence problems can be secondary to drug side effects, constipation, impaired mobility, arthritic pain, inappropriate clothing or dexterity. A good clinical history could categorise UI as stress UI (involuntary urine leakage on exertion), urgency UI (a sudden compelling desire to urinate) or mixed UI (involuntary urine leakage associated with both urgency and exertion). Overactive bladder (OAB) is defined as urgency that occurs with or without incontinence and usually with frequency and nocturia. 
Bladder diary (72-h urine frequency volume chart) and pre-and post-void bladder scan support clinical diagnosis. Vaginal inspection is helpful to exclude vaginal atrophy, prolapse or infections. Older people with FI should have an anorectal examination to exclude faecal loading, lower gastrointestinal cancer, rectal prolapse, anal sphincter problems or haemorrhoids. Neurological causes of cauda equina syndrome, frontal lobe tumours, neurodegenerative disorders or stroke could also result in UI or FI. The continence problems can be minimised by promoting regular toilet use, appropriate toilet adaptations and providing walking aids to improve accessibility to toilet. Nocturnal incontinence remains a challenging situation but can be managed using various containment methods or limiting fluid intake in the evening. Drug treatment after specialist continence assessment is usually the next step if non-drug measures failed to provide symptomatic benefits. The aim should be to treat the underlying cause but people who continue to have episodes of UI or FI after initial management should be considered for specialised management. Nutritional assessment Older people admitted with an acute illness are at increased risk of weight loss and this remains a challenge for the teams in the hospital setting. Acute illness can result in loss of appetite, and management of an acute illness may take priority, therefore making older people more vulnerable in the hospitals, particularly those with cognitive impairment or those who cannot communicate their needs. The National UK Dementia Audit Report in 2013 showed that nutritional assessments were undertaken in less than 10% of patients in some hospitals [85]. A detailed nutritional assessment should be undertaken on admission to hospital and should include any recent weight loss, dietary intake and habits. The risk factors including dry mouth, poor oral hygiene, problems with dexterity, reduced vision, acute or chronic confusion, constipation or pain should be explored and actively managed to avoid poor nutrition. Regular nutritional assessments using Malnutrition Universal Screening Tool (MUST) can be helpful and this has been validated to be used by any health professional in the hospital. It is a fivestep screening tool, which can identify those who are at risk of weight loss or are malnourished [86]. A collective and simple approach with involvement of family and carers can prevent malnutrition during hospitalisation. Patients should be offered small frequent meals and regular snacks or preferred food is often helpful. Protected meal times and regular prompting or assistance for those with cognitive impairment can lead to improved food intake [87]. Oral and dental hygiene Higher levels of poor oral health can be commonly observed and it is challenging to provide good and regular oral hygiene care to older people in hospitals. The oral hygiene in older people can be compromised secondary to impaired sensory functions, reduced physical dexterity and functional dependence. Older people are often on polypharmacy including anticholinergics, diuretics, antidepressants and antipsychotics. The common side effects of drugs are reduced salivary flow, which could affect the efficiency of chewing, leading to dental problems. Older people with cognitive deficits are at higher risk of developing oral diseases and conditions including dental caries, dental plaques and missing teeth [84]. 
Poor oral hygiene can also be related to uncontrolled diabetes, inappropriately fitted dentures, lack of teeth, poor mobility or salivary gland dysfunction [88]. Oral Health Assessment Tool (OHAT) screening has been proposed for the timely assessment of oral and dental hygiene. This tool has been validated for use by nursing staff in care-home residents [89] also those with dementia [90]. There could be reluctance and resistivity to maintain basic good oral hygiene by choice or lack of knowledge/information. Enhanced engagement of carers with oral hygiene strategies, a good education on oral hygiene in older people and timely identification of oral health problems by regular dental consultations could be effective in preventing oral diseases. Skin Older people, in general, are at higher risk of skin problems including pruritus, eczematous dermatitis, purpura, venous insufficiency and pressure ulcers. Other risk factors include loss of protective fat, malnutrition, frailty, sarcopenia, urinary or bowel incontinence and cognitive impairment. The risk of pressure ulcers further increases with hospitalisation secondary to poor oral intake and reduced physical activities. Prompt assessment and appropriate skin-care plan including good personal hygiene, healthy balanced diet, avoiding excessive heat and friction, promoting continence and early mobilisation are the key factors to minimise the risk of skin breakdown. Vision Visual impairment is common in older people and this risk increases with advancing age. The visual impairment increases from 6.2% at ages 75-79 to 36.9% at age 90 or over [91]. Blindness also increases from 0.6 (75-79) to 6.9% in 90 years or over. Visual impairment in older people is often under-diagnosed and can complicate the accurate assessment of ADLs. Older people who experience visual problems may avoid activities that require good vision and become isolated or even need to be institutionalised. People with cognitive impairment may further experience visuoperceptual difficulties such as visual hallucinations, colour perception, background contrast and depth perception. Simple measures such as the use of blinds or shades to reduce glare, wearing the correct glasses, minimising visual and physical obstacles, using colours and contrasts to mark different areas, assistive technologies such as automatic lights, audio labels or audio books can minimise the risks. Requesting eyesight testing by involving optometrists or ophthalmologists to examine eyes for the causes of sight loss is a first step in defining appropriate interventions. Hearing Hearing impairment is one of the three most common chronic diseases along with arthritis and hypertension [92]. People with hearing loss are less likely to participate in social activities and are less satisfied with their life as a whole. Hearing loss does not only affect individual's emotional well-being but also their ability to manage IADLs. Older people with hearing loss are prone to develop dementia [93] and hearing loss is commonly reported in people with dementia. Hearing loss can be conductive and sensorineural. The causal factors that may contribute to hearing impairment could include ear wax build-up, ear infections, degenerative ageing process, excess occupational noise, stroke, head injuries, drug side effects or neoplasms like an acoustic neuroma. All patients with hearing impairment require thorough examination and presence of dementia should not preclude assessment for a hearing aid. 
Simple measures such as speaking in a normal tone, giving attention and making eye contact are helpful. Appropriate seating, eliminating background noise and repeating the key phrases or summary points improve communication. Hearing aids are often useful, though they do not improve cognitive function or reduce BPSD but has shown that patients improved on global measures of change [94]. Pain Pain should be treated as a fifth vital sign. Pain assessment involves holistic evaluation of the person on the first presentation of pain and then following up with regular pain assessment. Pain assessment should include the site of pain, type, precipitating factors and impact of pain on the individual. Physical assessment should be performed for any skin bruise or infection, constipation, reduced range of joint movement, vertebral tenderness, recent injury or fracture. There are several pain scales available, visual analogue scale or the numerical rating scales are most useful. Older people with cognitive impairment and those who cannot verbally communicate their symptoms particularly pain, observation or collateral information from relative or carer or suggestion of change in person's behaviour could help to assess the severity of pain [95]. The numeric pain-rating scale (0-10, where 10 being most severe pain) is often used in routine clinical practice. The specific pain-screening tools such as 'Assessment of Discomfort in Dementia (ADD)' are available to be used in patients with cognitive impairment. The tool involves assessing pain history, physical examination and administration of analgesics and giving analgesics as needed [96]. Sociocultural assessment It is important to assess person's language, ethnic background, cultural beliefs, personality, education, family experience, socio-economic status and life experience to complete assessment holistically and provide person-centred care. A detailed assessment of social network, daytime activities and informal support available from family or friends should be done on the first day of admission to the hospital. A prompt, patient-centred identification of the requirement of social services input helps with safe timely discharge to the most suitable and friendly environment. Social Worker (SW) should ideally be allocated if a need for social services is anticipated at the time of hospital admission. Once all the needs of the patient are identified, SW should be contacted to organise formal carers or care-home placement if the patient is not suitable for home discharge. Quality of life The quality of life (QoL) assessment was almost unknown 20 years ago but it is now an established fact that the psychological burden of an illness cannot be described fully by measures of disease status. It has been acknowledged that various psychosocial factors such as apprehension, anxiety, restricted mobility, difficulty in fulfilling ADLs and the financial burden must also be addressed to complete holistic assessment of older people. The most important constituents of the quality of life in older age from older people's perspective are having good social relationships with family, friends and neighbours; participating in social and voluntary activities and individual interests and having good health and functional ability [93,94]. Other measures of good QoL include living in a good home and neighbourhood, having a positive outlook and psychological well-being, having an adequate income and maintaining independence and control over one's life [97,98]. 
The assessment of a patient's experience of disease and its effect on their quality and outcome framework (QoF) should be one of the central components of healthcare assessment to acknowledge safe and early hospital discharge. The family members should be involved on occasions when it is difficult to measure the patient's QoL due to underlying cognitive impairments and communication deficits [99]. Sexuality Sexual desires and the physical capacity to engage in sex continue throughout life. Though many older people enjoy an active sex life, there has been a little mention of sexuality or the problems that older people may face related to sexual issues in government policies [96]. There are several causes for loss of interest and frequency of sexual activity in later life including physical health problems, emotional distress, drug use, male or female sexual dysfunction, practical problems, willingness or lack of partner and not necessarily only ageing [100]. Healthcare professionals routinely avoid discussing sexual problems with older people; however, sharing physical relations and closeness are very important in maintaining long-term emotional and physical intimacy. Examination Thorough physical examination from head to toe in a systematic fashion is essential, especially if the cause of acute illness or deterioration is not clear from the history. The clinical signs may not be very obvious as often older people have an atypical presentation, for example, hypothermia instead of hyperthermia, lack of typical signs of heart failure or meningism. Older people sometimes get fatigued after history taking; in such occasions, physical examination may have to be done at a different time. Investigations The investigations should be requested only as indicated by clinical examination. For example, urine analysis should only be done if symptomatic, unexplained systemic sepsis or delirium. As over diagnosis of urinary tract infection may point towards inadequate assessment of frail older people. The common investigations usually include blood oxygen saturation, complete blood count, kidney, liver, bone profile, urinalysis and a chest radiograph. An electrocardiogram should be obtained because there is a higher risk of silent myocardial infarction in older people. Other investigations including CT brain or lumbar puncture are helpful in those with unexplained altered mental status. Management The drug and non-drug treatment should be evidence based with aim to treat underlying medical illness. The management of older people needs close liaison work with geriatricians, acute physicians, ED and MDT. The model of care should be established in hospitals so that supportive care for older people can be provided within the first few hours of an admission [101]. For older people with frailty, multiple co-morbidities and an acute illness, admission should be to an Emergency Frailty Unit (EFU), a separate unit within an AMU. EFU or a similar unit led by a geriatrician and the multidisciplinary team (MDT) could not only provide comprehensive person-centred care but also enhance clinical outcomes irrespective of age [102]. In addition, a close working with liaison old-age psychiatry can improve outcome [43]. There should be minimal intra-and inter-hospital transfer to reduce the risk of delirium. Interventions should be planned very carefully and keeping the associated risks in mind, for example, older people should not be routinely catheterised unless there is evidence of urinary retention. 
Patient education Hospital admission could be a good opportunity to educate older people and their carers on chronic disease and its management, healthy lifestyles, physical activity, sufficient fluid intake and healthy nutritious foods. Alcohol consumption is under-recognised in older people and an informal discussion by a health professional could be beneficial. A brief discussion with a clinical pharmacist can improve adherence to medication in older people. Staff training Training in hospitals is usually directed towards patient safety, managing acute medical conditions, good handover, and rapid response to a sick patient; however, it is equally essential to augment knowledge and skills of hospital staff in assessing and managing older frail patients. The majority of older patients are admitted to hospital through AMU or directly to EFU, which justifies the need for an EFU geriatrician taking a lead in staff training at the front end [101]. Nursing staff need regular training and education on geriatric giants and frailty [103]. Systematic nurse training has shown to reduce work-related stress [104] and improved outcomes as measured by reduction of inpatient falls [105]. Dementia awareness training should be mandatory and should also be included in induction programmes. Staff members should be encouraged to collect personal information about people with dementia to help improve care, for example, use of 'This is Me' document. Information sharing and communication among staff, carers and patients should be improved to ensure that all staff coming into contact with older frail people are aware of their problems and associated needs. Caregiver problems Occasionally, problems of older patients are related to neglect or abuse by their caregiver. Hospital staff should consider the possibility of 'elder abuse' if there are suggestions on clinical assessment. Certain injury patterns are particularly suggestive, including frequent bruising (middle of the back, upper arms or groin area), fearfulness of a caregiver or unexplained burns. Service outcome review The regular involvements in audits and analysis of hospital readmission rates, delayed discharge and mortality could identify the needs for service improvement and provision of safe enhanced good quality care for older people. Discharge planning Older people admitted to hospital are entitled to receive a smooth transition from one stage of hospital care to the next stage of care in the community. A lack of coordinated and personcentred discharge planning can lead to poor outcomes for the patients, thus affecting their health and safety. Poor discharge planning can also lead to inappropriate prolonged LoS or premature discharge and thus result in possible readmission to the hospital. Independence Maintenance of independence and participation in social and voluntary activities are the key benefits of home discharge. This has been quoted as one of the major elements of good QoL. Older people usually have a fear of losing independence as a result of ageing. Older people have reported that being independent, free to please oneself and freedom from time constraints are the best things about growing old [106]. Independence is usually associated with good health, living in own home and ability to walk independently. However, independence is felt to be lost if older people are unable to manage their ADLs. The perceived physical environmental barriers and mobility or ADLs have significant positive correlation [107]. 
Safe, effective and timely discharge

The principal aim of a safe and effective discharge process is to ensure that patients do not stay in hospital any longer than necessary. Discharge should be based on a 'pull' system rather than a 'push' system, in order to maximise patients' social interaction and independence by providing timely and comprehensive carer support according to their needs. Discharge planning should be a systematic, coordinated process that begins at the first contact with a health professional, is based on the specific needs of the patient and includes documentation of the expected date of discharge (EDD). An older person must be assumed to have capacity unless suggested otherwise, and all patients should be encouraged to make their own informed decisions with the aim of maintaining their maximum independence and social interaction in the community. Where the discharge process is complex, a safe discharge meeting (SDM) should be arranged and attended by members of the MDT, the SW and preferably the patient's relative or carer. There should be a clear purpose for the meeting, and the needs of the patient should be discussed. Information should be gathered from the SW regarding existing care support services. If there is no need for further specialist referral, the discharge date should be set and the appropriate support requested by involving social services or voluntary organisations. Confirmation of fitness for discharge must be agreed at least 24 h in advance of the EDD, with appropriate arrangements for transport.

Ethical issues related to discharge

The patient's autonomy should be respected both ethically and legally, recognising that a patient who can understand the proposed place of discharge, the alternatives, and the risks and benefits may consent to or refuse it. Respecting autonomy also requires consulting patients and obtaining their informed consent before planning a discharge. Healthcare professionals should practise the principles of beneficence and non-maleficence together, aiming to produce net medical benefit with minimal or no harm.

Individual's interests and family wishes

The patient's interests and wishes should be taken into account when considering discharge planning and future care. Hospitalised patients can wax and wane in their level of alertness, so they should be assessed when they are fully awake and have not received any medications that can impair their cognitive function. If there are any doubts about the patient's expressed wishes, they should be re-evaluated at a later stage. There should be an attempt to involve the family and carers in organising a patient-centred hospital discharge process, particularly for those patients who have underlying cognitive or uncorrected sensory impairment.

Decision-making capacity

According to English law, an adult has the right to make decisions affecting his or her own life, whether the reasons for that choice are rational, irrational, unknown or even non-existent. People aged 16 and over who lack the capacity to make their own decisions about medical care and treatment are protected by the Mental Capacity Act (MCA) (UK). The MCA provides a statutory framework that aims to support an individual's rights and to protect them from any harm caused by a lack of capacity to make autonomous decisions for themselves [108]. Therefore, every effort should be made to support people who lack capacity to make their own decisions; however, if a person clearly lacks capacity, this should be formally assessed.
The decision should be discussed among MDT members, and 'best interest meetings' should be organised in liaison with family or carers to make important decisions.

Follow-up

Older people who are discharged should have appropriate access to outpatient follow-up clinics and to intermediate and social-care services. There should be effective electronic information-sharing with primary care and community services.

End-of-life care

Some older frail people discharged from hospital can have a poor outcome. Mortality rates for frail older people in the year following discharge from AMUs are high (26% in one series) [109]. Most very old individuals with severe dementia in the community die away from their usual place of residence, and hospitals remain the most common place of death [110]. Dementia care during the end of life is not similar to that for other life-limiting illnesses [111]. The symptoms experienced by people with dementia are similar to those of cancer patients, but dementia is often not considered a life-threatening illness. People with dementia not only experience symptoms over a longer period but also need more support from social services and palliative care teams [112]. Therefore, healthcare and social care professionals should discuss and record advance care planning statements, advance decisions to refuse particular treatments, and the preferred place of future care. The decisions made should be shared with the community team and families/carers.

Current evidence on CGA

The concept of CGA evolved as a result of the multiple complex problems seen in older patients. The first comprehensive meta-analysis of the benefits of CGA was conducted in 1993; it demonstrated that CGA could improve functional status and survival, reduce hospital LoS and subsequent health service contacts, and reduce care-home admissions. This meta-analysis also showed that the improvement in physical function from geriatric evaluation and management unit (GEMU) interventions was maintained at 12 months (odds ratio (OR): 1.72; 95% confidence interval (CI): 1.06-2.80) [113]. Although there is a proven role for intensive geriatric rehabilitation in improving functional outcome and independence in patients with hip fracture [114,115], other randomised controlled trials (RCTs) comparing CGA with routine care in later years showed no significant difference in physical functioning or hospital LoS [116][117][118]. A systematic review of the literature, including 20 randomised controlled trials (10,427 participants) of inpatient CGA for a mixed elderly inpatient population, was conducted in 2005. This review confirmed the benefits of inpatient CGA, with an increased chance of living at home at 1 year and improved physical and cognitive function, but no long-term mortality benefit [119]. More recently, a systematic review and meta-analysis involving 17 trials with 4780 people compared the effects of general or orthopaedic geriatric rehabilitation programmes with usual care. Inpatient rehabilitation specifically designed for geriatric patients showed beneficial effects over usual care for functional improvement, preventing admissions to nursing homes and reducing mortality [120]. Setting up a CGA unit appears to carry increased staffing costs, and the available cost-effectiveness data are insufficient [120]; however, in American studies of medical and surgical patients, the financial costs of managing care for older people in a specialised hospital unit were not greater than those of caring for patients on a usual-care ward [121,122].
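To make the odds ratio and confidence interval quoted for the 1993 meta-analysis above easier to interpret, the short Python sketch below shows how an OR and its 95% CI are commonly derived from a 2x2 table using the standard logit (Woolf) method. The counts used here are invented for illustration only and are not taken from the cited trials.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf's logit method) for a 2x2 table:
    a = intervention with good outcome, b = intervention without,
    c = control with good outcome,      d = control without."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(a=120, b=80, c=90, d=110)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR = 1.83, 95% CI 1.23-2.73
```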
A meta-analysis of RCTs in 2011 confirmed not only the benefits of CGA but also a potential cost reduction compared with general medical care [123]. However, the nature of CGA varies, and many, but not all, older people have complex care needs. It is therefore difficult to identify which patients will benefit the most and which are at risk of adverse outcomes. Frailty status measured by an index of accumulated deficits generated from routine CGA has shown a strong association with adverse outcomes; the frailty index may therefore have clinical utility, augmenting clinical judgement in the management of older inpatients [39]. In summary, older frail patients should have early access to inpatient CGA and interdisciplinary involvement in a specialist ward for optimal care, to reduce LoS and to regain function and physical stability [120].

9. Limitations to a good assessment

1. Lack of training for doctors, nurses and multidisciplinary members, and unfamiliarity with key principles and practices of geriatric medicine [103,124].
2. Awareness of, and support for, MDT members is relatively poor.
3. Lack of interest and associated negative societal attitudes towards older people.
4. Limited access to dementia care training to meet the complex care needs of older people [125].

Conclusion

Comprehensive geriatric assessment has proven benefit, and it should be considered the evidence-based standard of care for frail older inpatients. There is a need to configure emergency, acute medical and geriatric services to deliver high-quality CGA for frail older people at the earliest possible time following contact with the acute sector. The aim should be better integration among multidisciplinary members to achieve well-coordinated, high-standard care and improve outcomes. Older people are the major users of acute care, and the AMU is the key area for initial decision-making; therefore, staff training to meet the needs of frail older people in the Acute Medical Unit or Emergency Frailty Unit is mandatory.

Conflict of interest

The author has no financial or any other kind of personal conflicts with this article.
v3-fos-license
2023-01-29T16:08:35.127Z
2023-01-26T00:00:00.000
256353537
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2304-8158/12/3/553/pdf?version=1674744508", "pdf_hash": "4665a6aaedeb99f6d5934fccd2123e34308c5508", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:247", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "8db46a1ba0905da6b2c2aeb8c2fd939f323a4d97", "year": 2023 }
pes2o/s2orc
Alginate Coating Charged by Hydroxyapatite Complexes with Lactoferrin and Quercetin Enhances the Pork Meat Shelf Life In this work, the effect of an alginate-based coating loaded with hydroxyapatite/lactoferrin/quercetin (HA/LACTO-QUE) complexes during the storage of pork meat was evaluated. FT-IR spectra of HA/LACTO-QUE complexes confirmed the adsorption of QUE and LACTO into HA crystals showing the characteristic peaks of both active compounds. The kinetic releases of QUE and LACTO from coatings in an aqueous medium pointed out a faster release of LACTO than QUE. The activated alginate-based coating showed a high capability to slow down the growth of total viable bacterial count, psychotropic bacteria count, Pseudomonas spp. and Enterobacteriaceae during 15 days at 4 °C, as well as the production of the total volatile basic nitrogen. Positive effects were found for maintaining the hardness and water-holding capacity of pork meat samples coated with the activated edible coatings. Sensory evaluation results demonstrated that the active alginate-based coating was effective to preserve the colour and odour of fresh pork meat with overall acceptability up to the end of storage time. Introduction Meat and meat products are important sources of food nutrients such as proteins and B-complex vitamins [1]. However, their composition makes them highly perishable products with a short shelf life. The main phenomenon related to the spoilage of meat is the microbial growth that occurs during storage, causing off-odours and flavours that make the product unsuitable for human consumption [2]. Recently, several approaches have been proposed to preserve the safety and quality of meat and meat products, such as edible films and coatings from biodegradable biopolymers [3]. Among several polysaccharide-based biopolymers commonly used for edible films and coating for food preservation, chitosan and sodium alginate are the most exploited for meat and meat products [4,5]. However, unlike chitosan, which has shown to be an effective natural antimicrobial agent against Gram-positive and Gram-negative bacteria [6], alginatebased layers act only on the water and gas exchanges [5] with no effect on Pseudomonas spp. and Enterobacteriaceae growth during the cold storage of chicken fillets [7]. Similar results were reported by other authors [8,9], highlighting the same total viable count increment rate during the storage of uncoated and alginate-based coated fresh chicken breast meat. However, active edible films and coatings loaded with antimicrobial and antioxidant compounds seem to have great potential for preserving the quality and prolonging the shelf life of meat products [10]. Recently, different studies evaluated the ability of active edible coatings enriched with essential oils [11][12][13], phenolic compounds [14], organic acids [15] and natural extracts [16,17] to preserve the quality of fresh meat and meat products during storage. Positive effects of coatings enriched with essential oils [11][12][13], phenolic compounds [14], organic acids [15] and natural extracts [16,17] were obtained in extending the shelf life of meat, inhibiting microbial growth as well as lipid oxidation and weight loss. As regards the application of active alginate-based coatings in fresh meat and different types of meat products, we report in Table 1 some of the published reports in the last 5 years. Table 1. Recent applications of active alginate-based coatings in fresh meat and different types of meat products. 
Food Active Substance Results Reference Chicken meat Quercetin glycoside compounds Coating significantly inhibited the growth of spoilage bacteria, as well as the total volatile basic nitrogen and slowed down the changes in hardness during the storage time of 11 days. [7] Chicken meat Citrus and Lemon Coating resulted in less growth of microorganisms in the samples. [18] Chicken meat Black cumin High antimicrobial activity versus Escherichia coli, less variation in pH and lower colour changes, over 5 days of storage at 4 • C. [19] Chicken meat Lactoperoxidase Lactoperoxidase addition into the alginate-based coating system led to higher bacterial and sensorial quality values of chicken meat. The effect was even increased with the increasing concentration of lactoperoxidase. [20] Beef Ginger essential oils Active coating increased the shelf life of chilled beef slices by 9 days, delaying lipid oxidation and microbial spoilage. [21] Beef Basil The active coating increased antioxidant activity and reduced meat lipid oxidation and weight loss. [22] Pork Meat epigallocatechin gallate The results showed that fresh pork coated with the active coating had a significant inhibitory effect on its microbial growth. [14] Lamb meat Essential oils of thyme and garlic The active coating had an effect on lamb meat quality that helps maintain its characteristics during its shelf life after thawing. Thyme led to lower lipid oxidation and better colour maintenance. [13] Among active compounds, the flavonoids' quercetin and quercetin glycosides with antioxidant [23] and antimicrobial activity [24] are present in various fruit and vegetables besides being used in numerous consumer applications such as dietary supplements. Lactoferrin is a glycoprotein with well-known antimicrobial activity [25,26]. The use of lactoferrin as a nutritional supplement is GRAS by the US Food and Drug Administration and a novel food ingredient after the positive scientific opinion provided in 2012 by European Food Safety Authority. Due to the high sensitivity of these compounds to pH, temperature and light, a carrier for their prompt release is necessary to overcome the problem of their use as active compounds in food packaging. Moreover, the efficacy of an antimicrobial packaging system also depends on the kinetic release of the compound from the coating to the food surface. The latter depends on both the solubility of the compounds in an aqueous medium and the type and strength of the polymer network used to produce the coating. Moreover, to protect the bioactive compounds from degradation that could occur in the edible coating during the storage period, carrier systems such as polymeric nanoparticles [27][28][29], nanoemulsion [30][31][32] and nanocomposites [33] have been proposed in the literature of alginate-based edible film and coatings. Within the delivery systems of active compounds, hydroxyapatite (HA), due to its chemical physical properties including biocompatibility, stability, and degradability, could be an interesting candidate for carriers in food packaging applications. It represents the major component of cartilaginous tissues, such as bone and tooth; due to the biomimetic crystal structure and properties, HA crystals are widely employed in different medical practices [34]. Furthermore, the use of HA in food is allowed in Europe by Regulation (EC) No 1333/2008 on food additives classifying the HA by the code E341. 
To the best of our knowledge, no previous studies exist in the literature on the application of hydroxyapatite as a component of edible coatings for food shelf-life extension, except those published by our research group [7,35,36]. Our research group recently developed alginate-based coatings loaded with hydroxyapatite/quercetin complexes for the shelf-life extension of fresh chicken fillets [7]. To increase the antimicrobial activity versus Pseudomonas spp. we also evaluated the synergistic effect of lactoferrin and quercetin loaded in hydroxyapatite crystals at different active compound concentrations [25]. Based on these considerations, this work aimed to evaluate the effectiveness of HA lactoferrin and quercetin complexes loaded in an alginate-based coating during the cold storage of fresh pork fillets. For this purpose, the effects of alginate coating charged with free lactoferrin and quercetin and HA lactoferrin and quercetin complexes were carried out by the evaluation of microbiological, physical and sensory properties of pork fillets stored at 4 • C for 15 days. Moreover, the physical characterisation of Hydroxyapatite complexes was performed besides studying the kinetic release of lactoferrin and quercetin from activated alginate-based coatings. Materials Fresh pork meat was purchased at a food retailer meat counter (Salerno Italy) and cold (4 ± 1 • C) was transported to the laboratories of the University of Salerno. Sodium alginate, calcium chloride and glycerol were all obtained from Sigma-Aldrich (Milano, Italy). Quercetin glycoside compounds (QUE, 98.6% food grade) were purchased from Oxford ® Vitality Company (Bicester, UK) and Lactoferrin (LACTO, 95% food grade) from Fargon (Newcastle, UK). Biomimetic Hydroxyapatite (HA) was obtained by Chemical Center Srl (Research and Development Department, Italy) and synthesised according to the procedure of Palazzo et al. [34]. Production and Characterisation of HA Loaded with Quercetin Glycoside Compounds and Lactoferrin Based on previous antimicrobial activity results of the LACTO and QUE complexes against Pseudomonas fluorescent [20], HA/LACTO-QUE complexes were prepared with 100 mg/L in LACTO and QUE. LACTO and QUE were adsorbed in HA crystals according to the procedure reported by Montone et al. [25]. The morphology of HA/LACTO-QUE complexes was evaluated by a Scanning Electron 33 Microscope (Leo, model EVO 50). Before the analysis, samples were placed on a conductive graphite surface and then coated with a thin layer of silver in a sputter coater (Edwards, S150B) for 3 min through a flux of argon ions. After this time, the argon flow was stopped, and the samples were left under vacuum for 24 h. Fourier Transformed Infrared analysis of HA, QUE, LACTO and HA/LACTO-QUE complexes was performed according to Nocerino et al. [37], using a Thermo Nicolet 380 FT-IR spectrometer. Preparation of Alginate-Based Solution Alginate solution was prepared by dissolving sodium alginate in distilled water according to the procedure described by Malvano et al. [7]. Successively, two types of sodium alginate solutions were prepared: the first one contained 100 ppm of LACTO and QUE, and the second one HA/LACTO/QUE complexes with 100 ppm of LACTO and QUE. Quercetin and Lactoferrin Release from Coatings The release study of quercetin and lactoferrin from alginate-based coatings charged with HA/LACTO, HA/QUE and HA/LACTO-QUE complexes was performed following the procedure developed in our previous studies [24,25]. 
Pork Meat Fillets Coatings and Storage Study

Fresh pork meat fillets (60-80 g) were obtained from a whole piece of pork meat using an automatic slicer. The edible coating process was carried out using the layer-by-layer method. Pork fillet samples were dipped first into the sodium alginate solutions for 1 min and then into calcium chloride solution (1.5% w/v) for 1 min, according to Malvano et al. [7]. Afterwards, the samples were dried at room temperature for 15 min and then placed into PET boxes provided with a lid. The storage study was performed by comparing three different pork meat samples: uncoated control samples (C), samples coated with the alginate solution containing free lactoferrin and quercetin (LACTO-QUE) and samples coated with the alginate solution containing HA/LACTO-QUE complexes. Fifteen boxes for each treatment were maintained at 4 °C for 15 days. At 0, 2, 4, 7, 11 and 15 days, physicochemical, microbiological, texture and sensory analyses were carried out. For each storage time, three replicate pork fillet boxes were employed.

Microbiological Analysis

The microbiological parameters investigated during the storage of uncoated and coated pork fillet samples were total viable bacterial count (TVC), psychrotrophic bacteria count (PBC), Pseudomonas spp. and Enterobacteriaceae. The analysis of these microbiological parameters was performed according to Malvano et al. [7].

Total Volatile Basic Nitrogen Evaluation

Total volatile basic nitrogen (TVB-N) was evaluated with a Kjeldahl distillation unit (UDK139, Velp Scientifica) according to Albanese et al. [38].

Water-Holding Capacity Evaluation

Water-holding capacity (WHC) was expressed as the percentage weight loss with respect to the initial sample weight, according to Equation (1):

Weight loss (%) = [(Wi - Wt)/Wi] x 100 (1)

where Wi is the initial weight of the sample and Wt is the weight measured at each storage time. Coatings were manually removed from the pork samples before the weight of the samples was recorded.

pH and Colour Evaluation

The pH values, as well as the total colour differences (∆E), were determined according to Malvano et al. [7].

Texture Profile Analysis (TPA)

A texture analyser (LRX Plus, Lloyd Instruments, Chicago) equipped with a 100 N load cell was employed to evaluate hardness (N), gumminess (N), cohesiveness (dimensionless), chewiness (N*mm) and springiness (mm). Two consecutive compressions with a cylindrical probe of 1 cm diameter at 1 mm/min were performed on the pork fillet samples. All measurements were performed in triplicate for each fillet sample.

Sensory Evaluation

Coated (LACTO-QUE; HA/LACTO-QUE) and control (C) pork samples were assessed for colour and odour before cooking and, after cooking by broiling, for taste, odour and overall acceptability. The active alginate-based coatings were not removed from the samples, in order to evaluate their impact on the colour, odour and taste of the pork fillets before and after cooking. Ten members (4 male and 6 female) of the Department of Industrial Engineering at the University of Salerno (Italy) were engaged, based on their frequency of consumption of pork meat. Before the sensory trials, all panel participants provided signed informed consent according to the American Meat Science Association guidelines [39]. The sensory attributes were rated on a 5-point scale from "none" (1) to "high" (5). Scores equal to or lower than 3 were considered not suitable for marketing.

Statistical Analysis

All analyses were performed in triplicate. Experimental data are reported as mean and standard deviation and were subjected to analysis of variance (ANOVA). The significance of differences (p < 0.05) among samples was determined by Student's t-test with SPSS software version 13.0 for Windows (SPSS, Inc., Chicago, IL, USA).
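As a worked illustration of the weight-loss calculation in Equation (1) and of a total colour difference of the CIE76 form (∆E = sqrt(∆L*^2 + ∆a*^2 + ∆b*^2), assumed here as the formula behind the ∆E values; the study itself refers to Malvano et al. [7] for the exact procedure), a minimal Python sketch with invented numbers is shown below.

```python
import math

def weight_loss_percent(w_initial, w_time):
    """Equation (1): weight loss as a percentage of the initial sample weight."""
    return (w_initial - w_time) / w_initial * 100.0

def delta_e_cie76(lab_ref, lab_sample):
    """Total colour difference between two CIELAB readings (CIE76 form)."""
    return math.sqrt(sum((r - s) ** 2 for r, s in zip(lab_ref, lab_sample)))

# Illustrative values only, not measured data from the study
print(weight_loss_percent(w_initial=75.0, w_time=72.9))            # 2.8 %
print(delta_e_cie76(lab_ref=(52.0, 6.5, 8.0),
                    lab_sample=(49.5, 7.8, 10.1)))                 # ~3.5
```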
HA Complex Characterisation FT-IR spectra of HA, LACTO, QUE and HA/LACTO-QUE complexes are reported in Figure 1. According to previous studies [7,40] the FT-IR spectra of HA show the characteristic adsorption bands at 1000-1100 cm −1 related to the asymmetric stretching mode of vibration for PO 4 group and other bands at, 880 cm −1 , 1466 cm −1 , 1545 cm −1 and 3497 cm −1 due to the carbonate type A (hydroxyl site)-substituted and type B (phosphate site)-substitute HA crystals. Moreover, the FT-IR analysis of HA/LACTO-QUE complexes revealed both active compounds. In particular, QUE showed its characteristics peaks at 1500 cm −1 (corresponding to C=C stretching), 1670 cm −1 (corresponding to C=O stretching), 3248 cm −1 (corresponding to O-H stretching) and other peaks in the range of 650 and 1000 cm −1 pointed out the presence of the aromatic compounds [7]. LACTO showed its characteristic peaks at 1655 cm −1 (C=O), 1532 cm −1 (C-H) and 1392 cm −1 (C-N) [41]. SEM images of HA, LACTO, QUE, HA/LACTO and HA/LACTO-QUE complexes were reported in Figure 2. According to our previous study [7], SEM images of HA showed µm particle size ( Figure 2a). HA nanoparticles tend to agglomerate in micrometric clusters probably due to Ostwald ripening [42] and their Z-potential values near 0 mV [7]. The characteristic strip-like structures of quercetin glycoside compounds [43] and the spherical structures of lactoferrin [44] are clearly shown in Figures 2b and 2c respectively. SEM images of HA complexes highlighted the first doping with LACTO ( Figure 2d) and after with QUE (Figure 2e) showing the characteristic structures of QUE and LACTO, respectively, in the crystal structure. Release Study The release of a bioactive compound from a biopolymer matrix into an aqueous medium happens thanks to physical phenomena occurring in sequence and is ruled by physicochemical interactions between polymer, solvent and the bioactive compound. At first, the water penetrates and diffuses into the structure causing the polymeric network widens which allows the diffusion of the active compound until the equilibrium is reached [45]. The kinetic releases of QUE and LACTO from alginate-based coating loaded with HA/QUE, HA/LACTO and HA/LACTO-QUE complexes are shown in Figure 3. As can be observed for the coating loaded with HA/LACTO-QUE complexes, the release of LACTO, as well as the achievement of equilibrium, occurs in a shorter time than QUE. In particular, the release of LACTO started after 9 h compared with the QUE release that occurred after 31 h. Moreover, QUE reached the equilibrium 42 h later than LACTO. As regards the influence of simultaneous loading of the active compounds into HA crystals, the presence of QUE seems to influence the release of LACTO, which is faster in the case of HA/LACTO-QUE complexes than in HA/LACTO ones. In contrast, the coating loaded with HA/QUE complexes starts releasing quercetin in a shorter time before than the coating loaded with HA/LACTO-QUE ones, even if no kinetic release differences were observed. The observed different release behaviour in active alginate-based coatings could be due to the difference in the solubility of the active compounds in the aqueous medium (10 g/L for LACTO and 1.61 g/L for QUE) besides the different interactions of the active compounds could show with HA structure. The initial TVC of pork meat samples (Figure 4a) was 3.50 log CFU/g, according to previous studies on the shelf-life evaluation of pork meat [14,17]. 
TVC values increased with the increase of storage days for coated and uncoated samples. On day 7, uncoated samples reached TVC values close to 7 log CFU/g, considered the maximum acceptable limit for fresh meat [46]. LACTO-QUE coated samples reached the TVC threshold at 11 days while this value was never reached by HA/LACTO-QUE samples until the end of the storage period. This indicates that, from a microbiological point of view, a shelf-life extension, of at least 7 and 4 days was obtained for HA/LACTO-QUE samples compared with the control and HA/LACTO, respectively. Coating loaded with HA/LACTO-QUE complexes inhibited the growth of the Enterobacteriaceae (Figure 4b) and the final charge (4.6 log CFU/g), at 15 storage days, was lower (1 log cycle) significantly (p < 0.05) compared to the control sample. Meat spoilage is mainly caused by the metabolic activity of psychrotrophic bacteria, in particular Pseudomonas spp., the prevailing spoilage flora of food from the animal origin [21] causing off-odours and off-flavours during storage in cold conditions. Figure 3c,d show that PBC and Pseudomonas spp. for uncoated and coated pork meat samples showed a similar trend to the TVC, increasing with the increase of storage time. In particular, during the storage period, LACTO-QUE and HA/LACTO-QUE samples showed significantly (p < 0.05) lower Pseudomonas spp. and PTC, as compared to the control. The results of LACTO-QUE and HA/LACTO-QUE coated samples highlighted the higher capability of the coating to slow down bacterial growth when the active compounds were adsorbed into HA crystals. The antimicrobial activity of both quercetin glycoside compounds and lactoferrin against meat spoilage microorganisms such as Pseudomonas spp. has been tested by our group and other authors [24,37,[47][48][49]. In addition, our group also verified the enhanced antibacterial activity of lactoferrin and quercetin when adsorbed into HA crystals [25]. The capability of alginate-based coating charged with HA/LACTO-QUE to slow down the microbial charge coated pork fillets during the storage agrees with our previous study focused on synergistic inhibition effect versus Pseudomonas fluorescens of LACTO and QUE loaded in the structure of hydroxyapatite [25]. When HA was incubated firstly with LACTO and then with QUE, the highest inhibition was reached. The complex HA/LACTO-QUE (100 pm) exhibited a synergistic effect on the growth of Pseudomonas fluorescens, showing a fractional inhibitory concentration (FIC) index equal to 0.4, according to the interpretation that a FIC index of ≤0.5 suggests the synergistic interaction [50]. pH, Total Volatile Basic Nitrogen and Colour Evaluation Meat pH strongly influenced meat quality and freshness characteristics, such as colour, tenderness and microbial growth [51]. The effects of alginate-based coatings on pork meat pH during the storage period are shown in Figure 5a. The pH value of the fresh pork meat sample was 5.59, close to the values reported by other authors [14,16,17], and it increased with the increase of the storage days in all samples. In particular, until day 4 no pH change was observed in HA/LACTO-QUE samples. On day 11, a significant increase in pH was observed for all meat samples: however, the pH values were always lower in the coated samples than in the control ones. At the end of the storage, the coating containing HA/LACTO-QUE complexes granted the lowest pH value, with an increase of 21.82% concerning day 0. 
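The shelf-life extensions quoted above (at least 7 and 4 days) follow from the day on which each TVC curve crosses the 7 log CFU/g acceptability limit. The Python sketch below shows one simple way to estimate that crossing day by linear interpolation between sampling points; the counts used are illustrative and are not the measured values of this study.

```python
def first_crossing_day(days, log_cfu, limit=7.0):
    """Estimate (by linear interpolation between sampling days) when a
    microbial count first reaches the acceptability limit in log CFU/g.
    Returns None if the limit is never reached within the storage period."""
    for (d0, y0), (d1, y1) in zip(zip(days, log_cfu), zip(days[1:], log_cfu[1:])):
        if y0 < limit <= y1:
            return d0 + (limit - y0) * (d1 - d0) / (y1 - y0)
    return None

# Illustrative TVC curves (log CFU/g) at the sampling days used in the study
days = [0, 2, 4, 7, 11, 15]
print(first_crossing_day(days, [3.5, 4.4, 5.6, 7.1, 8.0, 8.6]))  # control: ~6.8 d
print(first_crossing_day(days, [3.5, 4.0, 4.8, 5.9, 6.6, 6.9]))  # coated: None
```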
An increase of 30.59% and 27.19% was reached for the control and LACTO-QUE samples, respectively, at the end of the storage period. The rise of pH in pork samples may be explained as related to the degradation of proteins that occur at the last stage of storage that allows the production of volatile alkaline nitrogen molecules, including ammonia and amines, through microbial activity and meat endogenous proteases [14,38]. pH results agreed with the amount of total volatile basic nitrogen (TVB-N), which represents the main product of protein decomposition by spoilage bacteria in pork meat [52] affecting the sensory acceptability of the meat besides being toxic for human health [53]. Changes in TVB-N values for all fillet pork samples are reported in Figure 4b. The initial value of fresh meat was 1.15 mg/100 g, and this value increased with the storage time. In fact, according to microbiological results, microbial growth causes protein degradation and damage to the muscle cell structure. As result, endogenous enzymes were released that accelerated protein degradation and the release of amino acids [54]. On each day of storage, the TVB-N values of uncoated samples were significantly (p < 0.05) higher than coated ones. The minimumTVB-N value was guaranteed, at any time of storage, by the coating containing hydroxyapatite-lactoferrin-quercetin complexes. Considering that the TVB-N acceptable threshold for pork meat is ≤15 mg/100 g [55], control and LACTO-QUE samples exceeded the limit on 11th cold storage, in contrast to the HA/LACTO-QUE samples, which did not reach this limit for the whole storage period. Changes in L*, a*, b* and total colour differences ∆E of pork meat samples during the storage time are shown in Table 2. At the beginning of storage (day 0), coated samples showed similar lightness, lower than the control sample. This difference could be due to the colour of the film-forming solution, which is the result of the pale-yellow colour of sodium alginate adding to the yellow and pink colour of quercetin and lactoferrin, respectively. L* values of coated and uncoated pork meat samples remained constant until day 7, and after that a rapid decrease in lightness was registered for uncoated samples. The pork meat coated samples showed a slight decrease in lightness from day 7 until the end of storage. This means that the alginate-based coating was able to protect meat oxygen responsible for browning phenomena, while active compounds as antioxidants reduced oxidation. Similar results were obtained also by Ruan et al. [14]. The initial a* value, which indicates the freshness of pork meat, ranged from 4.40 to 7.20 and these values did not change until day 2. After that, the coated samples showed a slight increase during the entire storage period, while a marked increase was registered for uncoated ones. b* values increased during the storage period (Table 2), probably due to the presence of hydrogen sulfide (H 2 S) produced by microorganisms and enzymes that degraded proteins and bind to haemoglobin to form yellow complexes [55]. In contrast to uncoated samples that increase quickly from day 2 to the end of the storage period, a slight b* increase was observed for coated samples, thanks to the beneficial effect of coating against microbial spoilage. Moreover, the addition of hydroxyapatite in alginate-based edible coating had a better colour protection effect on fresh pork meat. 
Finally, a constant increase in total colour was observed during the cold storage period for coated pork samples, while a significantly (p < 0.05) higher ∆E value was obtained for uncoated ones. WHC and TPA Water-holding capacity (WHC) is an important attribute related to fresh meat quality, influencing the freshness, cooking yield and sensory palatability, as well as nutritional profile [56]. As reported in Table 2, water losses gradually increased for all pork samples during storage time, ranging from 1.41% to 12.03% for uncoated samples and from 0.93% to 9.43% and from 0.41% to 8.06% for LACTO-QUE and HA/LACTO-QUE samples, respectively. This behaviour could be due to the water-barrier properties of alginate-based coatings that prevent exudation and meat dehydration [5]. The comparison between coated pork samples pointed out a significantly (p < 0.05) higher WHC in HA/LACTO-QUE than coatings loaded with free quercetin and lactoferrin. These results agree with our previous studies [7,35], highlighting the capability of HA structure to lead to a strong water-holding capacity. Texture parameters measured of coated and uncoated pork samples are reported in Table 3. As can be observed, the alginate layer on the surface of pork fillets influences the texture parameters of pork fillets showing significant differences in comparison with uncoated samples. The hardness increased significantly at 15 storage days only in control and LACTO-QUE samples. As previously stated, the increased hardness in meat products may be correlated to water leakage because of the evaporation of water from the meat surface and the reduced capability of meat proteins to hold water [13]. The decrease in springiness values was registered at the end of the storage period for all pork fillet samples. The lowering in springiness could be explained by a lower-elasticity meat structure arising from water leaks. Values of cohesiveness, gumminess and chewiness did not show significant (p < 0.05) differences among samples during the cold storage. C 0.11 ± 0.00 aA 0.13 ± 0.00 aB 0.09 ± 0.00 aC 0.13 ± 0.00 aD 0.12 ± 0.00 aE 0.08 ± 0.00 aF LACTO-QUE 0.14 ± 0.00 bA 0.13 ± 0.00 aB 0.15 ± 0.10 aB 0.06 ± 0.02 bB 0.12 ± 0.10 aB 0.09 ± 0.00 bC HA/LACTO-QUE 0.14 ± 0.00 bA 0.14 ± 0.00 bA 0.09 ± 0.00 aB 0.08 ± 0.00 bC 0.07 ± 0.10 aC 0.12 ± 0.00 cD Different letters (a, b, c, . . . ) reveal significant differences (p < 0.05) among the samples for each storage time, and different letters (A, B, C, . . . ) reveal significant differences (p < 0.05) for each treatment during the storage time. Sensory Evaluation Changes in sensory properties (colour, odour, taste and overall acceptability) of coated (LACTO-QUE; HA/LACTO-QUE) and control (C) pork meat samples before and after cooking are shown in Table 4. The scores of samples showed a similar downward trend that, as expected, decreased with the increase in storage time. On the fourth day, the colour of uncoated samples was significantly (p < 0.05) lower than coated pork samples, reaching unacceptable scores on the seventh day. Between coated pork samples, LACTO-QUE reached a score of 3 on the 15th day, while HA/LACTO-QUE samples remained acceptable during the whole storage period. Until day 4 there were no significant (p < 0.05) differences between coated and control pork fillet samples, for both raw and cooked pork fillets. However, from the seventh day, the odour score of coated samples was significantly (p < 0.05) higher than those of uncoated ones. 
At the end of the storage period, HA/LACTO-QUE samples showed values higher than the acceptability threshold. The overall acceptability of uncoated cooked pork samples was insufficient on the seventh day of storage. The coating with alginate coating charged with LACTO and QUE allowed pork fillets to reach overall acceptability scores of up to 11 storage days, while the addition of HA/LACTO-QUE made pork samples acceptable for the whole investigated storage period. These results were consistent with the values of TVB-N and TVC reported in the sections discussed above. As regards the taste evaluated at time 0 for the cooked pork fillets no significant differences (p < 0.05) were observed among the control and coated samples. In particular, the cooking, due to the high temperature, caused the thermal breakdown of the alginate coating, though characterised by a neutral taste [7]. Conclusions An alginate-based coating activated with hydroxyapatite/lactoferrin/quercetin complexes was developed and applied to fresh pork meat to extend its shelf life. The morphological analysis confirmed the adsorption of both bioactive compounds into the hydroxyapatite network, while in-vitro studies showed a homogeneous release of quercetin glycoside compounds and lactoferrin through the coating, reaching equilibrium in 70 h and 30 h, respectively. The activated alginate-based coating showed a high capability to slow down the growth of the main microorganism responsible for the spoilage of fresh meat products, during storage for 15 days at 4 • C, reducing the production of the volatile basic nitrogen compounds. Moreover, the comparison among samples pointed out a positive effect of the coating charged with hydroxyapatite/lactoferrin/quercetin complexes to slow down the changes in hardness during the storage time as well as the sensory attributes for both uncooked and cooked pork fillets. Finally, the results of the sensory evaluation showed that the presence of edible coating did not affect the visual and taste attributes in raw and cooked fillets, making the proposed active edible coating a potential application for the shelf-life extension of fresh meat products.
v3-fos-license
2021-10-27T13:23:09.119Z
2021-10-27T00:00:00.000
239890787
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2021.730519/pdf", "pdf_hash": "a669a44756b31a7800547d307134e779c7d8c266", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:249", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "a669a44756b31a7800547d307134e779c7d8c266", "year": 2021 }
pes2o/s2orc
Evaluation of the MGISEQ-2000 Sequencing Platform for Illumina Target Capture Sequencing Libraries Illumina is the leading sequencing platform in the next-generation sequencing (NGS) market globally. In recent years, MGI Tech has presented a series of new sequencers, including DNBSEQ-T7, MGISEQ-2000 and MGISEQ-200. As a complex application of NGS, cancer-detecting panels pose increasing demands for the high accuracy and sensitivity of sequencing and data analysis. In this study, we used the same capture DNA libraries constructed based on the Illumina protocol to evaluate the performance of the Illumina Nextseq500 and MGISEQ-2000 sequencing platforms. We found that the two platforms had high consistency in the results of hotspot mutation analysis; more importantly, we found that there was a significant loss of fragments in the 101–133 bp size range on the MGISEQ-2000 sequencing platform for Illumina libraries, but not for the capture DNA libraries prepared based on the MGISEQ protocol. This phenomenon may indicate fragment selection or low fragment ligation efficiency during the DNA circularization step, which is a unique step of the MGISEQ-2000 sequence platform. In conclusion, these different sequencing libraries and corresponding sequencing platforms are compatible with each other, but protocol and platform selection need to be carefully evaluated in combination with research purpose. Illumina is the leading sequencing platform in the next-generation sequencing (NGS) market globally. In recent years, MGI Tech has presented a series of new sequencers, including DNBSEQ-T7, MGISEQ-2000 and MGISEQ-200. As a complex application of NGS, cancer-detecting panels pose increasing demands for the high accuracy and sensitivity of sequencing and data analysis. In this study, we used the same capture DNA libraries constructed based on the Illumina protocol to evaluate the performance of the Illumina Nextseq500 and MGISEQ-2000 sequencing platforms. We found that the two platforms had high consistency in the results of hotspot mutation analysis; more importantly, we found that there was a significant loss of fragments in the 101-133 bp size range on the MGISEQ-2000 sequencing platform for Illumina libraries, but not for the capture DNA libraries prepared based on the MGISEQ protocol. This phenomenon may indicate fragment selection or low fragment ligation efficiency during the DNA circularization step, which is a unique step of the MGISEQ-2000 sequence platform. In conclusion, these different sequencing libraries and corresponding sequencing platforms are compatible with each other, but protocol and platform selection need to be carefully evaluated in combination with research purpose. INTRODUCTION With the launch of the Human Genome Project, next-generation sequencing (NGS) technology has had a huge impact on the biological field in the past 20 years (Consortium, 2015;Yang et al., 2015;Goodwin et al., 2016). Different companies and research institutions have developed various sequencing approaches and platforms, such as Roche's 454 sequencing platform, Illumina's sequencing by synthesis (SBS) technology, and PacBio's single-molecule nanopore sequencing technology (Rivas et al., 2015;Goodwin et al., 2016). Among them, the sequencers or sequencing platforms developed by the Illumina Company have a dominant position in the sequencing market due to their high throughput and high sequencing accuracy. 
Over time, the development of machine hardware and the diversification of bioinformatics analysis software tools have led to drastic reductions in sequencing costs and increases in convenience and usability, even for new developed techniques like single cell sequencing (Yang et al., 2020a;Xu et al., 2020). For example, NGS technology plays a vital role in analyzing somatic mutations that occur in multiple tumor types. The Cancer Genome Atlas (TCGA) (Weinstein et al., 2013) and International Cancer Genome Consortium (ICGC) (Hudson et al., 2010) have sequenced thousands of tumors from more than 50 cancer types and summarized the significant genetic somatic mutations that occur during the process of tumorigenesis (Alexandrov et al., 2013). These data have played an extremely important role in promoting cancer genome research and development (He et al., 2020a;He et al., 2020b;Liu et al., 2021). When MGI launched their sequencers, they indicated that they were compatible with the sequencing libraries constructed based on Illumina protocols, that is, that the MGISEQ platform could sequence the Illumina libraries. In our study, we used the same capture DNA libraries constructed based on the Illumina protocol for sequencing with the Illumina NextSeq 500 and MGISEQ-2000 sequencing platforms. We found that the two platforms had high consistency in the hotspot mutation analysis and that there was a significant loss of the 101-133 bp fragments on the MGISEQ-2000 sequencing platform but not in the capture DNA libraries based on the MGISEQ protocol. We hypothesized that this might be related to fragment selection or low ligation efficiency during the DNA circularization step, a step that is unique to the MGISEQ-2000 sequence platform. Hence, although the selection of sequencers and platforms is becoming increasingly diversified and all theoretically compatible and applicable to each other, the choice of platform for practical applications may need to be further evaluated according to the research purpose and library characteristics. Table 1. Sample Collection and Experimental Groups We randomly selected 204 (75%: 204/272) samples to construct capture libraries based on the Illumina protocol and performed data analysis. The remaining samples were divided into two groups of 34 samples (12.5%: 34/272) using different capture panels and constructing capture libraries based on the MGISEQ protocol for sequencing and data analysis, respectively. Data Normalization and Statistics As the volume of sequencing data and read length of the Illumina and MGISEQ-2000 platforms were different (Supplementary Table S1), we "normalized" all 272 sample sequencing datasets, that is, each sample had the same read length and read number. We used seqtk (version: 1.0-r73-dirty) (https:// github.com/lh3/seqtk) to "normalize" the raw sequencing data. We used a in-house perl program to caculate the number of reads, Q20 ratio and GC content (Supplementary Table S2). Data Preprocessing and Analysis The normalized data were cleaned by Trimmomatic (version: 0.39) (Bolger et al., 2014), which filtered out the adapter contamination reads and low-quality reads and the parameter's setting was ILLUMINACLIP: , and the BAM format file was obtained. We used FreeBayes (version: 1.0.2) (Garrison and Marth, 2012) to detect SNP/InDel mutations (parameters: -j -m 10 -q 20 -F 0.001 -C 1). The mutations were annotated from the ANNOVAR database (Wang et al., 2010). 
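The read-count, Q20 and GC statistics mentioned above were generated with an in-house Perl program that is not reproduced in the paper. As a rough illustration of what such a script computes, here is a minimal Python sketch that derives the same three metrics from a FASTQ file; it is not the authors' code, and the file name in the usage comment is a placeholder.

```python
import gzip

def fastq_qc_stats(path, q_threshold=20, phred_offset=33):
    """Read count, Q20 ratio and GC content of a (optionally gzipped) FASTQ file.
    Illustrative stand-in for the in-house Perl script described above."""
    reads = bases = q20 = gc = 0
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as fh:
        for i, line in enumerate(fh):
            line = line.rstrip("\n")
            if i % 4 == 1:                      # sequence line of the 4-line record
                reads += 1
                bases += len(line)
                gc += sum(line.upper().count(b) for b in "GC")
            elif i % 4 == 3:                    # quality line of the 4-line record
                q20 += sum((ord(c) - phred_offset) >= q_threshold for c in line)
    return {"reads": reads,
            "q20_ratio": q20 / bases if bases else 0.0,
            "gc_content": gc / bases if bases else 0.0}

# Example usage: print(fastq_qc_stats("sample_R1.fastq.gz"))
```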
Fragment size distribution was summarized from the paired-end alignment information (the ninth column, TLEN) in the BAM format file. Statistical analysis used the statistical functions in Microsoft Excel 2019 and R software (version 3.2.5).

Data Quality Control Parameters Were Significantly Different Between the Illumina and MGISEQ-2000 Sequencing Platforms

We compared the Q20 rate, GC content, mean depth and capture efficiency of the 204 samples generated based on the Illumina library protocol, which were captured by the IDT 38-hotspot gene panel and sequenced on the Illumina and MGISEQ-2000 sequencing platforms (Supplementary Table S3), respectively. We found that all of the quality control parameters showed significant differences, with p-values of 4.87e-85, 1.15e-4, 0.0326 and 0.0035, respectively, in the two-tailed heteroscedastic t-test analysis. These differences could be due to the sequencing principles, the algorithm used for base recognition or the sequencing platform characteristics. For example, the NextSeq500 platform treats all unrecognized bases as G, while the HiSeq-2000, MGISEQ-2000 and other earlier four-color imaging sequencers treat these bases as N. Therefore, the GC content tended to be higher in the Illumina NextSeq500 results than in the others.

Hotspot Mutations Showed High Consistency Between the Two Platforms

The hotspot mutations detected on the two platforms were highly consistent (Figure 2A). Furthermore, no significant difference (R² = 0.8422, p-value = 0.9652) in mutation frequency was observed between the Illumina and MGISEQ-2000 platform data (Figure 2B).

MGISEQ-2000 Sequencing Platform Data Based on Illumina Libraries Showed a Significant Loss of the 101-133 bp Fragment

Insert fragment size and distribution were evaluated and analyzed for all 204 samples. As we used the same sample library for sequencing, the theoretical difference existed only in Illumina's bridge PCR amplification and MGISEQ-2000's DNB circularization (Figure 3A) (Goodwin et al., 2016; Chen et al., 2019; Korostin et al., 2020). Combining all 204 sample datasets for fragment size analysis, our results revealed a significant loss of 101-133 bp fragments in the MGISEQ-2000 platform data, with a t-test p-value of 3.3072e-17 (Figure 3B), while other fragment sizes, such as 134-500 bp (t-test p-value 0.7264), did not show a difference. Although significant differences were found in the Q20 rate, GC content and other quality control statistics, these should be attributable to the sequencer system characteristics and should not have a great impact on the fragment size distribution. Therefore, the loss of the 101-133 bp fragment size may be related to the DNA circularization step; that is, there may be fragment size selection in the circularization step, or an enrichment bias towards longer DNA molecules and low ligation efficiency for shorter DNA molecules. We then extracted 101-133 bp and 134-500 bp fragment size information from the BAM files for each sample and analyzed the sequencing depth distribution of three common cancer genes, ALK receptor tyrosine kinase (ALK), epidermal growth factor receptor (EGFR) and erb-b2 receptor tyrosine kinase 2 (ERBB2). The results showed that 69.12% (141/204) of samples had 101-133 bp fragment size loss, while the sequencing depth distribution of 134-500 bp fragments was consistent with the overall total sequencing depth, indicating that the phenomenon was not due to stochasticity in specific genes (Figure 3C). The sequencing depth distribution is shown for each sample in the Supplementary Figures.
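The per-sample fragment-size analysis described above can be reproduced in outline with the short pysam sketch below, which builds an insert-size histogram from the TLEN (ninth) field of properly paired reads and reports the fraction of fragments in the 101-133 bp range. This is an illustrative reimplementation rather than the pipeline actually used, and the BAM file name in the usage comment is a placeholder.

```python
import pysam
from collections import Counter

def insert_size_fractions(bam_path, lo=101, hi=133, max_size=500):
    """Histogram of insert sizes from the TLEN field of properly paired reads,
    plus the fraction of fragments falling in [lo, hi] bp."""
    hist = Counter()
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam:
            # count each fragment once: first-in-pair, primary alignments only
            if (read.is_proper_pair and read.is_read1
                    and not read.is_secondary and not read.is_supplementary):
                size = abs(read.template_length)
                if 0 < size <= max_size:
                    hist[size] += 1
    total = sum(hist.values())
    in_range = sum(n for s, n in hist.items() if lo <= s <= hi)
    return hist, (in_range / total if total else 0.0)

# Example usage: hist, frac_101_133 = insert_size_fractions("sample.sorted.bam")
```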
As is known, the use of FFPE or hemolyzed samples may have a great influence on the distribution of DNA fragment sizes. Therefore, we performed a statistical analysis of the quality of the 204 samples with and without 101-133 bp loss. First, we defined the sample quality levels by DNA agarose gel electrophoresis as A, B, C, D or E (Figure 4A). Then, all samples in each grade were subgrouped according to whether the 101-133 bp fragment size was lost. We found that the sample proportions of the A, D and E levels were consistent between the two groups, while the B and C levels were quite different. The proportions of B [C] level samples in the 101-133 bp loss group and the 101-133 bp non-loss group were 25.53% (36/141) [26.24% (37/141)] and 41.27% (26/63) [9.52% (6/63)], respectively (Figure 4B). Therefore, our results showed that the circularization step of MGISEQ-2000 not only biased the selection of DNA fragment sizes but also may have a greater impact on samples with quality grade B or C.

Fragment Size Loss Had No Probe Preference and Was Not Obvious in the Data of MGISEQ-2000 Libraries

To verify whether the phenomenon was related to capture-probe preference, we analyzed the fragment size distribution of the sequencing data from 34 samples that were captured with an Agilent 519 gene panel and sequenced separately by Illumina NextSeq500 and MGISEQ-2000. As shown in Figure 5A, the same 101-133 bp fragment size loss was found. In addition, we constructed 34 other libraries according to the experimental protocols of MGISEQ and Illumina and generated data on their respective sequencing platforms. We also analyzed the fragment size distribution and found that the fragment size distribution on the Illumina platform (peak 183 bp) showed a "left offset" compared with that on the MGISEQ-2000 platform (peak 214 bp). The fragment size distribution curve of the MGISEQ data was smooth, and there was no obvious 101-133 bp fragment size loss (Figure 5B).

DISCUSSION

In recent decades, next-generation sequencing technology has undergone rapid development. With greatly reduced sequencing costs, an increasing amount of scientific research and technical product development is being applied to NGS. In particular, to meet the needs of precision medicine and big data mining, the number and scale of cancer omics research and clinical projects are constantly increasing (Yang et al., 2020b; Zeng et al., 2020). For large numbers of samples, the expenses incurred can be unaffordable; thus, sequencing costs are still the bottleneck for large-scale NGS applications. At present, Illumina sequencers dominate the high-throughput sequencing market, but MGI sequencers based on DNB technology have gradually become more popular worldwide. Recently, several studies have compared the performance of the BGI-500 and the Illumina HiSeq machines and showed that both could produce high-quality data in various applications. However, a comparison of their quality for capture panel sequencing (other than WES), which is widely used in tumor research, has not been published. In this study, we compared the data produced from the same library by different sequencing platforms. For the library preparation step, Illumina used bridge PCR technology, while MGI achieved single-molecule template amplification by DNB circularization amplification. We applied both the Illumina (NextSeq500 and MiSeqDx) platform and the MGISEQ (MGISEQ-2000) platform to the same library constructed by the Illumina protocol.
Theoretically, any difference in sequencing data should have been caused by the differences between bridge PCR and circularization amplification or the consequent sequencing system differences. Comparison of the data analysis results revealed the disadvantage of fragment size selection and short fragment size ligation efficiency in the circularization step. These results suggest that the sequencing data based on Illumina library preparations and in which sample types with shorter fragment sizes (such as hemolyzed plasma samples) or a more complex distribution of DNA fragment sizes (such as FFPE samples with longer storage times) are used may encounter short DNA fragment size loss on the MGISEQ sequencing platform. Therefore, we should evaluate the compatibility of sequencing libraries and sequencing platforms for scientific research that focuses on the distribution of fragment size, especially for small RNA (Fehlmann et al., 2016), cell-free DNA (cfDNA) and circulating tumor DNA (ctDNA) research (Underhill et al., 2016;. Although the sequencing library is basically compatible with different sequencing platforms, appropriate experimental systems and sequencing platforms should be selected based on the research purpose and sample type. Otherwise, there may be an unexpected impact on the sequencing results. Our data showed the results of only target capture panel sequencing; the assessment of other sequencing applications requires further investigation. Considering that the alignment algorithm may also have an impact on the fragment size distribution analysis, we replaced the BWA "aln" algorithm mentioned in the article with the BWA "mem" algorithm. The "mem" algorithm is much looser than the "aln" algorithm, and it can perform local alignment and splicing. The "mem" algorithm allows multiple different parts of the sequencing reads to have their own optimal matches, resulting in multiple optimal alignment positions for the reads and greatly improving the alignment rate. After comparing and analyzing the combined data with 204 samples of the IDT 38-hotspot gene panel and 34 samples of the Agilent 519 gene panel by using the "mem" algorithm, we found that the number of reads in the 101-133 bp fragment size from the MGISEQ-2000 platform data was significantly improved (Supplementary Figure S1), but there were still significant differences, with t-test p-values of 0.0277 and 0.0252, respectively. The conclusion was consistent with that based on the "aln" algorithm. We also found that the data without the 101-133 bp fragment size loss were derived from different sequencing read lengths of the Illumina Nextseq500 and MGISEQ-2000 platforms, while the data with the same sequencing read length showed the 101-133 bp fragment size loss. To investigate whether the data with or without the phenomenon were related to the sequencing read length, we reanalyzed and compared data with the same number of sequencing reads but not read length, and found that the results were consistent with the previous conclusion. Since the 101-133 bp fragment size loss was concentrated in the data with long read length (150 bp) but not in the data with short read length (100 bp), we hypothesized that the phenomenon may also be related to the sequencing read length. We will conduct more in-depth research on this point in our future work. 
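For readers who want to reproduce the alignment-algorithm check described above, the sketch below re-aligns the same read pairs with the BWA "aln/sampe" and "mem" workflows so the resulting insert-size distributions can be compared. The reference and FASTQ paths are placeholders; the command lines follow standard BWA usage, and sorting and indexing of the output SAM files are left out for brevity.

```python
# Sketch: re-align the same read pairs with BWA "aln/sampe" and BWA "mem" so the
# resulting insert-size distributions can be compared. Paths are placeholders.
import subprocess

REF, R1, R2 = "ref.fa", "reads_1.fq.gz", "reads_2.fq.gz"

def run(cmd, out_path):
    with open(out_path, "w") as out:
        subprocess.run(cmd, stdout=out, check=True)

# legacy short-read workflow: aln per mate, then sampe to produce SAM
run(["bwa", "aln", REF, R1], "r1.sai")
run(["bwa", "aln", REF, R2], "r2.sai")
run(["bwa", "sampe", REF, "r1.sai", "r2.sai", R1, R2], "aln.sam")

# mem workflow: local alignment in a single step
run(["bwa", "mem", REF, R1, R2], "mem.sam")
# downstream: sort/index each SAM and feed the BAMs to the insert-size tally shown earlier
```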
In summary, the MGISEQ-2000 platform has good compatibility with Illumina sequencing libraries, but the DNB circularization step may cause fragment size selection or have low ligation efficiency for short DNA fragments. For the accuracy of downstream data analysis, we recommend that each sequencing platform be used with its official experimental system and kits. If an experiment needs to switch between platforms, for cost or other reasons, the selected platform should be evaluated carefully with respect to the purpose of the research and the actual needs, as the choice may have a significant impact on the outcomes. In the future, it would be interesting to compare the performance of the two platforms in specific applications such as cancer diagnosis (He et al., 2020b; Peng L.-H. et al., 2020), prognosis (Peng et al., 2020c; Song et al., 2020; Zhou et al., 2020), evolution inference (Yang et al., 2013; Yang et al., 2014), drug repositioning (Peng et al., 2015; Zhou et al., 2019), and so on; however, that is beyond the scope of this study.

DATA AVAILABILITY STATEMENT

The data have been uploaded to NCBI BioProject 744584.

AUTHOR CONTRIBUTIONS

GT, JL and BH designed the study, collected, analyzed and interpreted the data, and wrote the article. XuS and ZY performed the experiments. RZ, SZ, TL, XiS, YS, WW and PB reviewed and modified the article. All authors approved the final version of the article.
v3-fos-license
2021-09-01T15:07:45.892Z
2021-06-28T00:00:00.000
237381522
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIII-B2-2021/471/2021/isprs-archives-XLIII-B2-2021-471-2021.pdf", "pdf_hash": "32c7365778242a30a95d7b9c38ecdcc858953990", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:250", "s2fieldsofstudy": [ "Computer Science", "Environmental Science", "Engineering" ], "sha1": "310bb888284dbdf7311b293d686b1b84e0d2324f", "year": 2021 }
pes2o/s2orc
UNSUPERVISED OBJECT-BASED CLUSTERING IN SUPPORT OF SUPERVISED POINT-BASED 3D POINT CLOUD CLASSIFICATION : The number of approaches available for semantic segmentation of point clouds has grown exponentially in recent years. The availability of numerous annotated datasets has resulted in the emergence of deep learning approaches with increasingly promising outcomes. Even if successful, the implementation of such algorithms requires operators with a high level of expertise, large quantities of annotated data and high-performance computers. On the contrary, the purpose of this study is to develop a fast, light and user-friendly classification approach valid from urban to indoor or heritage scenarios. To this aim, an unsupervised object-based clustering approach is used to assist and improve a feature-based classification approach based on a standard machine learning predictive model. Results achieved over four different large scenarios demonstrate the possibility to develop a reliable, accurate and flexible approach based on a limited number of features and very few annotated data. INTRODUCTION The current 3D research activities -which extends to various domains and applications -are dominated by classification endeavours. The semantic segmentation (or more commonly classification) task is a significant challenge for 3D unstructured datasets acquired with active or passive sensors (Xie et al., 2020). Recognising elements composing a scene is a crucial step, especially for Digital Twins (Stojanovic et al., 2018), Smart Cities (Nys et al., 2020) and Building Information Modelling (Bassier et al., 2017). In this context, there is a great demand for automated processes that can speed-up and improve the reliability of existing classification frameworks. However, we can safely say that there are no reliable and generalised methods for all the different scales and scenarios one can encounter. A classification method can hardly fulfil all domains since the semantic definitions attached to objects can differ depending on the domain. Most semantic segmentation methods aim to advance performances in one specific context, such as indoor structural recognition (ceiling, wall, floors, chair, etc.) (Dai et al., 2017;Stojanovic et al., 2019) or outdoor classification (street, building, car, vegetation, etc.) (Özdemir et al., 2019;Hu et al., 2020). This is also because modern approaches mainly rely on supervised methods based on neural networks (Guo et al., 2020), which necessitate annotated context-specific datasets such as the ones provided by Armeni et al. (2017) and Tan et al. (2020). These approaches are often fully supervised, rarely unsupervised and require a high level of expertise on top of high computing resource demands. Aim and structure of the paper This paper aims at an easy-to-implement and user-friendly supervised method generalisable to several contexts and domains. To achieve this, we explore how unsupervised objectbased features (Poux and Billen, 2019;Poux and Ponciano, 2020) can help a supervised point-based classification (Grilli et al., 2019;. The goal is to combine two different classification approaches to maximise results' accuracy, minimise human efforts and deliver a 3D classification method that is case-and context-independent while usable by non-experts. 
Experimental results are conducted on heterogeneous datasets (Section 1.2), including multi-scale urban areas (aerial LiDAR and photogrammetry), indoor buildings (RGB-D sensor) and architectural scenarios (terrestrial photogrammetry). The achieved results demonstrate the method's reliability and replicability. In the age of deep learning, the suggested method relies on a standard machine learning algorithm (Random Forests) to achieve fast and accurate point cloud classification, using reduced annotated samples and a minimal number of automatically computed features. Thus, primary purpose of this study is not the direct comparison with state-of-the-art methods but rather a study to evaluate how to improve 3D classification results by merging features and methods used in and Grilli et al. (2019). Following a summary of related works in Section 2, we define our methodology in Section 3. Section 4 presents the results over four heterogeneous scenarios to demonstrate the efficiency of the presented approach. Finally, in Section 5, we give some closing remarks as well as suggestions for future work. Considered scenarios The presented method was evaluated on the following scenarios: • Large scale urban point cloud (700 m x 700 m), derived with a hybrid aerial sensor over the city centre of Bordeaux (Toschi et al., 2021) RELATED WORKS The main methods for 3D classification reported in the most recent literature can be divided into two big macro-categories: machine and deep learning approaches. The feature engineering phase is one of the primary distinctions between standard machine learning methods and advanced deep learning methods. In the first case, the operator studies and selects the features, whereas in the second case, neural networks learn features after being fed large amounts of annotated data. According to the application's purpose of this research, this section is first focused on the point cloud feature selection, then on the existing approaches based on a combination of clustering and point-based classification. Feature selection. Establishing the features to be used in the model is a critical step in the supervised classification analysis. Most of the similar studies depend on geometric features to classify the points of a considered point cloud based on their local neighbourhood. The neighbourhood to be used can be defined using either an established radius, which can be spherical (Lee and Schenk, 2002) or cylindrical (Filin and Pfeifer, 2005) or a K number of nearest neighbours (KNN) (Linsen and Prautzsch, 2001). The sampling rate resulting from data acquisition and the items of interest influence the choice of an acceptable value (radius or K). For this reason, all these neighbourhood types have been and are still broadly explored in literature within single or multi-scale approaches (Weinmann et al., 2015). On the other, multi-scale approaches have proved to the most efficient, whether used for spherical/cylindrical neighbourhoods ( Weinmann et al. (2014Weinmann et al. ( , 2015Weinmann et al. ( , 2017a. Classification methods. The semantic segmentation methodsdenoted classification in this article-can vastly differ depending on the feature set provided as an input to the machine learning classifier. In the literature, we usually distinguish point-based classifiers (that reason from a per-point feature set) from segment-based classifier (per-segment labelling). 
The later usually relies on a segmentation step, where the point cloud is partitioned into subsets of points called 'segments'. In addition to neighbourhood definitions found at the point-level to achieve point-based classification, such as shown in Bremer et al. (2013), other characteristics can describe each segment to guide the process. The result is a set of internally homogeneous segments, i.e. groups of points representing the basic units for classification. In many cases, segmentation procedures aim to produce relatively small segments (over-segmentation), representing only object parts (sub-objects) rather than the final objects of interest directly. In Chehata et al. (2009) first, a supervoxel-based segmentation is used to segment point cloud data, then different machine learning algorithms are tested to label the point cloud. Luo et al. (2018) proposed a supervoxel-based classification; their method used Conditional Random Field matching to classify supervoxels. Sun et al. (2018) used a Random Forest classifier to classify point cloud based on supervoxels. Some authors rely on a region growing algorithm for segmentation of point cloud flowed by an object-based classifier such as SVM (Yang et al., 2017) or a Bagged Tree Classifier (Bassier et al., 2020). use segment-based shape analysis relying on semantic rules. This article investigates the merging of both clustering and pointbased classification to develop a user-friendly approach. A small number of similar attempts have been proposed in the literature, such as presented in Weinmann et al. (2017b) where segmentbased shape analysis relies on semantic rules. This approach motivates the gains of a "higher level" understanding of the scene translated into features that can help achieve better inference. Other works which relies on "segment features" in point-based classification frameworks can be found in Guinard et al. (2017), Landrieu et al. (2017) and Landrieu and Simonovsky (2018). METHODOLOGY This section first describes the different steps of our framework (Section 3.1) and then explains the features used in our combined classification approach (Section 3.2). Framework The combined approach presented in this study follows these main steps: • Apply unsupervised clustering, following the approach presented in to segment the datasets. • Extract of a small set of geometric and covariance-based features which are effective for heterogeneous scenarios: the feature selection and their computation within heuristic neighbours is automatically performed to by-pass the otherwise laborious feature design process (Section 3.2). • Manually annotate a reduced portion of the point cloud, facilitated and supported by the clustering results ( Figure 1): although the datasets used in this paper contains fully annotated point clouds, the data were divided into training (30%) and test (70%) sets ( Figure 2). Our idea is that, when training is not available, the training's size should be as limited as possible and the annotation step rapid and user friendly. • Assess the achieved point-wise classification outcomes through quality metrics extracted for the entire test set: among the several metrics existing in the literature (Goutte and Gaussier, 2005), the Overall Accuracy (OA) is used to evaluate the classifier's ability to predict labels based on all observations. In addition, it is considered the F1-score, as it's a good measure of how well the classifier performs, being an average of Precision and Recall. 
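A minimal sketch of the supervised step of this framework is given below: a Random Forest is trained on roughly 30% of the labelled points and evaluated with Overall Accuracy and F1-score on the remaining 70%, using scikit-learn. The feature matrix, labels and hyper-parameters are placeholders, not the exact configuration used in the experiments.

```python
# Sketch of the supervised step: train a Random Forest on ~30% of the labelled
# points and report Overall Accuracy and per-class / macro F1 on the remaining 70%.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

X = np.random.rand(5000, 17)               # e.g. RGB + clustering + geometric features
y = np.random.randint(0, 5, size=5000)     # placeholder class labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.30, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("OA  :", accuracy_score(y_test, pred))
print("F1  :", f1_score(y_test, pred, average=None))      # per class
print("mF1 :", f1_score(y_test, pred, average="macro"))   # averaged
```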
Feature engineering We aim to design a reduced number of meaningful features that can be used and adapted in different scenarios for point cloud classification. These features can then be fed into standard classifiers to train machine learning predictive models. Three main categories of features are combined: a) radiometric (RGB values), b) clustering and c) geometric features. Clustering features (Figure 3). Clustering methods have two major advantages: (i) they don't use prior knowledge on discriminating variables and (ii) they find answers directly in the data. This allows exploring fed variables and highlight unsuspected (or suspected) relationships. The clustering features are computed following an unsupervised scheme, where the point cloud is partitioned into subsets of neighbouring points called segments Poux and Ponciano, 2020;Bassier et al., 2020). We aim at a set of internally homogeneous segments that will host the cluster features at four different aggregation levels. The procedure aims to yield relatively small segments, representing only object parts (sub-objects) rather than the final objects of interest directly, which, sorted, constitute the Cluster feature Level 1. Then adjacent segments with similar properties are merged to spatially contiguous objects (Cluster feature level 2) by considering the results of the first clustering pass instead of the initial point cloud as input. Such a step-wise procedure, based on an initial over-segmentation, permits reducing the risk of combining multiple real-world objects in one segment (undersegmentation). The principle of the approach is to limit any domain knowledge and parameter tweaking to provide a fully unsupervised clustering featuring. Finally, level 3 is based on a k-means clustering using the Hartigan's rule (Chiang and Mirkin, 2010), whereas level 4 is a graph-based centrality-measure of the clusters weighted over the first three levels. The initial parameters involved in the definition of these multilevel clustering features are automatically extracted through an automatic heuristic determination of three RANSAC-inspired clustering parameters: • a distance threshold for neighbourhood definition (ε); • the threshold for the minimum number of points needed to form a valid planar region (τ); • the decisive criterion for adding points to a region (α). (Figure 4). Covariance features (Blomey et al., 2014), or eigen-based features, are commonly used in segmentation and classification procedures due to their ability to provide in-depth information on the geometrical layout of the reconstructed scene. The most common covariance features include (Table 1) (2) Geometric features Omnivariance In the same way, in this paper, we aim to identify a reduced number of geometric features that can be used in any possible environment, notwithstanding a fast computation. The number of covariance features used in the presented approach is reduced to the only two Omnivariance and Surface Variation, chosen because of their ability to distinguish macroelements and entities (Teruggi et al., 2020). In order to demonstrate the effectiveness of the reduced selection, the classification experiments were also carried out using the entire set of features (Section 4). In addition, a height-based feature (Distance from Ground) and a normal-based one (Verticality) are considered. In particular, we have noticed that the feature Verticality is typically needed, independently from the scenario, to differentiate precisely horizontal and vertical artefacts. 
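The selected covariance-based features can be computed directly from the eigenvalues of the local 3x3 covariance matrix. The sketch below follows the usual eigenvalue-based definitions of omnivariance and surface variation, plus a normal-based verticality; the neighbourhood array and normal vector are placeholders, and the multi-scale repetition over radii up to 4ε is omitted. The height-based feature discussed next would be appended to the same per-point feature vector.

```python
# Sketch: reduced geometric features for one point, computed from the covariance
# of its spherical neighbourhood (standard eigenvalue-based formulations).
import numpy as np

def covariance_features(neighbours):
    """Omnivariance and surface variation from the 3x3 covariance eigenvalues."""
    cov = np.cov(neighbours.T)                        # neighbours: (n, 3) XYZ array
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3 >= 0
    l1, l2, l3 = np.clip(eigvals, 1e-12, None)
    omnivariance = (l1 * l2 * l3) ** (1.0 / 3.0)
    surface_variation = l3 / (l1 + l2 + l3)           # a.k.a. change of curvature
    return omnivariance, surface_variation

def verticality(normal):
    """Close to 1 for vertical surfaces, close to 0 for horizontal ones."""
    return 1.0 - abs(normal[2] / np.linalg.norm(normal))

neighbours = np.random.rand(50, 3)                    # placeholder neighbourhood
print(covariance_features(neighbours), verticality(np.array([0.1, 0.2, 0.97])))
```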
Directly related, the use of a height-based feature like the Distance from ground (Δz component) becomes essential to distinguish the different horizontal elements (i.e., street and roof). It has to be underlined that the selected features are extracted within a spherical radius ε (offered by the Clustering features) in a multi-scale approach. Experimentally we observed that a maximum number of 4ε was optimal for all scenarios. EXPERIMENTS AND RESULTS For each case study, the Random Forest classifier was trained with different combinations of features to have an internal comparison between the proposed method and the ones we combined. . Besides, all datasets were treated with and without their radiometric attributes to further test the approach's reliability in different conditions. Tables 2 shows results coming from the above-mentioned feature combinations in the four considered scenarios (Section 1.2). It can be seen that the proposed approach, which combines clustering features and a few selected geometric features, leads to an improvement in results. It is also noticeable that the combined approach performed better when radiometric features were included. Besides, we can see that the better improvements were achieved for the urban scenarios. In fact, quite similar accuracy values were reached between the standard multi-scale (Approach B) and the combined approach for the indoor and architectural datasets. However, from a qualitative point of view, classification results look much "cleaner" when cluster and geometric features are combined ( Figure 5 and 6). In addition, it has to be considered that only 17 features were used for the proposed approach, against the 73 of the standard one. For more details about the quality of the results, please check Figures 7-10, comparing hand-annotated and predicted point clouds. Finally, in Tables 3-6, all the per-class F1 scores are reported. Table 2. Classification metrics achieved in the four scenarios using different features. CONCLUSIONS The paper presented a combined approach, based on clustering and covariance features, for point cloud classification based on a traditional machine learning predictor. Four heterogeneous datasets were considered, featuring different type of classes and scenarios. Experiments proved that unsupervised object-based features help supervised point-based classification. Therefore, the combined method offers reduced labelling efforts, speeds up classification processing, improves accuracy, requires low computational power and is generalisable to various scenarios, making it suitable for daily work in various fields. As future work, we plan to compare the presented approach with other state-of-the-art methods for benchmarking purposes, including deep learning methods.
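As a rough illustration of the internal comparison between the reduced 17-feature set and the full 73-feature set reported above, the sketch below evaluates both with cross-validated overall accuracy. The feature matrices are random placeholders, so the printed numbers are not meaningful; only the procedure is.

```python
# Sketch: compare a reduced feature set against a larger one with cross-validated
# overall accuracy, mirroring the internal comparison in Table 2 (placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
y = rng.integers(0, 5, size=3000)
X_reduced = rng.random((3000, 17))     # clustering + selected geometric (+ RGB) features
X_full = rng.random((3000, 73))        # full multi-scale covariance feature set

for name, X in [("reduced (17)", X_reduced), ("full (73)", X_full)]:
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    scores = cross_val_score(clf, X, y, cv=3, scoring="accuracy")
    print(f"{name}: OA = {scores.mean():.3f} +/- {scores.std():.3f}")
```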
v3-fos-license
2021-09-01T15:10:27.560Z
2021-06-24T00:00:00.000
237882722
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://myukk.org/SM2017/sm_pdf/SM2613.pdf", "pdf_hash": "4b95b5b053adc14780f332b7e0691ca64edf758c", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:251", "s2fieldsofstudy": [], "sha1": "c47bdd3eb115d4636c4ed86df273e6ac33695a99", "year": 2021 }
pes2o/s2orc
X-ray-induced Luminescence Properties of Nd-doped GdVO 4 Nd-doped GdVO 4 single crystals were synthesized and their photoluminescence and scintillation properties were evaluated. The Nd-doped samples showed scintillation due to the 4f–4f transitions of Nd 3+ with peaks at around 900, 1060, and 1320 nm. In addition, we evaluated the relationship between the scintillation signal intensity and the X-ray exposure dose rate for dosimetric application. All the Nd-doped samples demonstrated high sensitivity with a dynamic range from 0.006 to 60 Gy/h. Introduction Scintillators are functional materials that convert absorbed radiation energy into thousands of low-energy photons and emit UV, visible, and near-infrared (NIR) light immediately. (1) The range of applications of scintillators is very wide, for example, security, (2) medical imaging, (3) cosmic ray detection, (4) and environmental monitoring. (5) Various performances are required for scintillating materials such as high light yield, fast decay, high effective atomic number, low afterglow level, and chemical stability. However, there are no perfect materials satisfying all these requirements. Thus, suitable scintillators are selected according to their purpose, and many scintillator materials have been developed with different chemical compositions such as crystals, (6,7) ceramics, (8) glasses, (9,10) plastics, (11,12) and organic-inorganic compounds. (13) Because scintillators are usually used in combination with photomultiplier tubes (PMTs), which have sensitivity in the UV-visible range, scintillators emitting UV-visible light have been developed. On the other hand, in recent years, scintillators emitting NIR photons have attracted much attention since NIR photons have unique characteristics. NIR photons (700-1500 nm) have a high penetration power into the human body without causing damage, (14)(15)(16) and scintillators emitting NIR photons are promising for use in radiation-based bioimaging applications. (17) Moreover, scintillators emitting NIR photons are considered to work effectively in a high-dose environment. In general, the combination of a scintillator and optical fiber has been used for monitoring high radiation doses. However, optical fibers are damaged by high radiation doses, resulting in strong absorption of UV and visible photons. (17) Moreover, Cherenkov radiation appears in the UV-blue range in a high-dose environment such as a nuclear reactor. (18,19) If scintillator materials emitting UV-blue light are used in the above situation, the Cherenkov radiation will overlap with scintillation signals, leading to incorrect radiation measurements. However, scintillators emitting NIR photons would be advantageous in such a measurement configuration because the NIR scintillation signal can be easily separated from the noise generated by Cherenkov radiation and less radiation damage will occur in the NIR range. Motivated by the above ideas, we have studied scintillating materials emitting NIR photons (700-1650 nm) including fluorides, sesquioxides, oxide garnets, and oxide perovskites. (20)(21)(22)(23) To expand the investigation on NIR-emitting scintillators, we synthesized Nd-doped GdVO 4 single crystals by the floating zone (FZ) method and evaluated their photoluminescence (PL) and scintillation properties in this study. Nd-doped GdVO 4 single crystals were studied because the vanadates are well-known phosphor and laser materials, (24)(25)(26)(27) and the scintillation properties of Nd-doped YVO 4 have already been studied. 
(28) However, to our knowledge, there have been no studies on the NIR scintillation properties of Nd-doped GdVO 4 . Materials and Methods , and Nd 2 O 3 (4N) were prepared as raw material powders to synthesize a series of Nd-doped GdVO 4 samples. The nominal concentrations of Nd were 0.1, 0.3, 1.0, 3.0, and 10 mol%, and the Gd 3+ site was substituted with Nd 3+ . A non-doped sample was also prepared for comparison. After mixing the powders, a cylindrical rubber balloon was filled with the powder mixture, and then the balloon was formed into a rod shape by hydrostatic pressure. To obtain a ceramic rod, the shaped rod was sintered at 1100 °C for 8 h in air. The obtained ceramic rod was grown into a single crystal in an FZ furnace (Canon Machinery FZD0192). During the growth, the rotation rate was 20 rpm and the pull-down rate was 2.5-5.0 mm/h. To obtain the crystal structure, powder X-ray crystallography was performed using an X-ray diffractometer (Miniflex 600, Rigaku). A microfocus X-ray tube was used, where the target material was Cu. During the measurement, the X-ray tube voltage was set to 40 kV and the current was 15 mA. A PL excitation and emission contour map (PL map) was obtained and the PL quantum yield (QY) was measured using a Quantaurus-QY device (C11347, Hamamatsu). The contour map ranges for the PL excitation and emission were 250-800 and 300-950 nm, respectively, and the measurement interval for the excitation wavelength was 10 nm. The PL decay time profile was obtained using a Quantaurus-τ device (C11367, Hamamatsu). Here, we selected 575-625 and 900 nm as the excitation and monitoring wavelengths, respectively. The scintillation spectra of the single-crystal samples were obtained using an X-ray generator (XRB80N100/CB, Spellman) and a spectrometer (Andor DU492A, which covered the range of 650-1650 nm). The setup of this measurement system was reported previously. (29) The scintillation decay time profile was obtained using an afterglow characterization system. (30) The PMT used in this system covered the spectral range from 400 to 900 nm. The voltage applied to the pulse X-ray source was 40 kV. The obtained decay time curves of both the PL and scintillation were approximated by least-squares fitting with an exponential decay function. To evaluate the suitability of the samples for use as detectors, the relationship between the scintillation signal intensity and the X-ray exposure dose rate from 0.006 to 60 Gy/h in the NIR range was determined using the measurement system shown in Fig. 1. The X-ray tube was supplied with a 40 kV bias voltage and several different tube currents (5.2, 0.52, and 0.052 mA) to change the X-ray exposure dose rate. The InGaAs PIN photodiode (Hamamatsu Photonics, G12180-250A) used in this system covered the spectral range from 950 to 1700 nm. The NIR photons emitted from the sample were guided to the InGaAs PIN photodiode through an optical fiber (Thorlabs, FP600ERT) with 5 m length and 600 μm core diameter. The PIN photodiode was mounted on a heat dissipator (Hamamatsu Photonics, A3179) and was cooled to 253 K using a temperature controller (Hamamatsu Photonics, C1103) to reduce thermal noise. The electric signals from the PIN photodiode were measured using an ammeter (Keysight, B2985A). Results and Discussion After the crystal growth, crystal rods of typically 4 mm diameter and 20 mm length were obtained, which were broken into pieces for characterization. 
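A minimal sketch of the least-squares decay fitting mentioned in the Methods is given below, using SciPy's curve_fit on synthetic data; the constant background term and the initial guesses are assumptions, not details taken from the measurement software.

```python
# Sketch: least-squares fit of a single-exponential decay, as used for the PL
# decay curves (synthetic placeholder data; time axis in microseconds).
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, amplitude, tau, background):
    return amplitude * np.exp(-t / tau) + background

t = np.linspace(0, 600, 300)                              # microseconds
signal = single_exp(t, 1.0, 100.0, 0.02)
signal += np.random.normal(0, 0.01, t.size)               # measurement noise

popt, pcov = curve_fit(single_exp, t, signal, p0=(1.0, 50.0, 0.0))
tau_err = np.sqrt(np.diag(pcov))[1]
print(f"decay time = {popt[1]:.1f} +/- {tau_err:.1f} us")
```

A second exponential term with a fixed short time constant (~1.8 μs) can be added in the same way to model the instrumental-response tail described for the scintillation decay curves.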
Figure 2 shows all the samples used in this study (from left to right, non-doped, 0.1, 0.3, 1.0, 3.0, and 10 mol%). The samples changed from colorless to purple with increasing Nd concentration. The diffuse transmittance of all the samples was 30-60% for wavelengths longer than 350 nm. and the Nd-doped samples showed some absorption lines due to 4f-4f transitions of Nd 3+ . (31) Figure 4(a) shows powder X-ray diffraction (XRD) patterns of all the samples, with the diffraction pattern of GdVO 4 (Inorganic Crystal Structure Database #015607) also shown for comparison. Since the diffraction patterns of all the samples were in good agreement with the reference pattern, we confirmed that all the samples had the zircon-type structure, which belongs to the I4 1 /amd space group of a tetragonal crystal system. The enlarged diffraction patterns are shown in Fig. 4(b). The diffraction peaks around 25° shift to smaller angles as the Nd/Gd ratio increases, which is due to the increase in the lattice constant. doped sample showed emission around 400-500 nm due to the transition from triplet states of VO 4 3-. (32,33) As shown in Fig. 4(b), strong emission was confirmed around 900 nm owing to the 4f-4f transitions of Nd 3+ . (34,35) The QY values of the Nd-doped samples were calculated by integrating the signal intensity from 800 to 950 nm in order to focus on the NIR emission. The QY values of the 0.1, 0.3 1.0, 3.0, and 10 mol% Nd-doped samples were found to be 27.2, 20.9, 23.3, 3.3, and 0.7% with typical errors of ±2%, respectively. The QY value was much lower for the 3.0 mol% Nd-doped sample than for the 1.0 mol% Nd-doped sample. Thus, concentration quenching may have arisen at a Nd concentration of around 3.0 mol%. PL decay curves of the Nd-doped samples are shown in Fig. 6. Here, the excitation and monitoring wavelengths were 575-625 and 810 nm, respectively. All the decay curves followed single-component simple exponential decay. For the 0.1-1.0 mol% Nd-doped samples, the PL decay times were around 100 μs. These decay times are typical for the 4f-4f transitions of Nd 3+ and agree with previously reported values. (36) In contrast, the decay times of the 3.0 and 10 mol% Nd-doped samples were lower. This is considered to be due to concentration quenching of samples, and this tendency was consistent with the QY values. Figure 7 shows the X-ray-induced scintillation spectra of all the samples in the NIR range with the intensity normalized. All the Nd-doped samples exhibited three emission peaks at around 900, 1060, and 1320 nm. The intensity of the emission peak around 1060 nm was the highest, and this emission is commonly used in laser applications. (37) These emissions were due to the electronic transitions of Nd 3+ 4 F 3/2 → 4 I 9/2 (900 nm), 4 F 3/2 → 4 I 11/2 (1060 nm), and 4 F 3/2 → 4 I 13/2 (1320 nm). (32,38,39) The X-ray-induced scintillation decay time profiles of the Nd-doped samples are illustrated in Fig. 8. To determine the decay time constants, the decay curves of all the Nd-doped samples were approximated by the sum of two exponential decay functions. The first decay component was regarded to be a tail due to the instrumental response (~1.8 μs), and the second decay component was considered to be the signal from the sample. The decay times of the 0.1-1.0 mol% Nd-doped samples were typical values for the 4f-4f transitions of Nd 3+ , (40,41) and the 3.0 and 10 mol% Nd-doped samples had very short decay times. These tendencies were similar to those of the PL. 
For these reasons, the origin of the emission of scintillation was ascribed to the 4f-4f transitions of Nd 3+ , and it was concluded that the 3.0 and 10 mol% Nd-doped samples suffered from concentration quenching. To evaluate the suitability of the samples for use as detectors, the relationship between the average scintillation signal intensity from 960 to 1700 nm and the X-ray exposure dose rate was evaluated in the dose rate range from 0.006 to 60 Gy/h. The obtained results are shown in Fig. 9. All the Nd-doped samples exhibited high sensitivity in the dynamic range from 0.006 to 60 Gy/h. Conclusions A series of GdVO 4 single crystals doped with different concentrations of Nd were synthesized by the FZ method, and their PL and scintillation properties were evaluated. According to its PL map, the non-doped sample exhibited emission around 400-500 nm due to the transition from the triplet state of VO 4 3− , and all Nd-doped samples showed emissions around 900 nm due to the 4f-4f transitions of Nd 3+ . The PL decay curves of the Nd-doped samples could be approximated by a single exponential decay function. The X-ray-induced scintillation spectra of all the Nddoped samples exhibited emission peaks at around 900, 1060, and 1320 nm. To evaluate the potential of the doped single crystals for dosimetric applications, the relationship between the scintillation signal intensity and the X-ray exposure dose rate was evaluated from 0.006 to 60 Gy/h in the NIR range, and the Nd-doped samples exhibited high sensitivity from 0.006 to 60 Gy/h.
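To illustrate how the dose-rate response reported above can be checked for linearity, the sketch below fits a power law (a straight line in log-log space) over the 0.006-60 Gy/h range; the photocurrent readings are synthetic placeholders, and a slope close to 1 would indicate a linear response.

```python
# Sketch: check how close the scintillation signal is to a linear dose-rate response
# by fitting a power law in log-log space over the 0.006-60 Gy/h range (fake data).
import numpy as np

dose_rate = np.array([0.006, 0.06, 0.6, 6.0, 60.0])      # Gy/h
signal = 2.1e-12 * dose_rate ** 1.0                       # photocurrent, A (placeholder)
signal *= 1 + np.random.normal(0, 0.03, dose_rate.size)  # measurement noise

slope, intercept = np.polyfit(np.log10(dose_rate), np.log10(signal), 1)
print(f"log-log slope = {slope:.3f} (1.0 would indicate a linear response)")
```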
v3-fos-license
2024-06-13T15:44:45.862Z
2024-06-01T00:00:00.000
270420425
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1420-3049/29/12/2741/pdf?version=1717838906", "pdf_hash": "3da737c0809f51460bfc940ccca7b264aa219543", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:252", "s2fieldsofstudy": [ "Environmental Science", "Chemistry" ], "sha1": "816b75e9968fe64cc1140fe9450b97d8dd168dab", "year": 2024 }
pes2o/s2orc
Supercritical Carbon Dioxide Extraction of Coumarins from the Aerial Parts of Pterocaulon polystachyum Pterocaulon polystachyum is a species of pharmacological interest for providing volatile and non-volatile extracts with antifungal and amebicidal properties. The biological activities of non-volatile extracts may be related to the presence of coumarins, a promising group of secondary metabolites. In the present study, leaves and inflorescences previously used for the extraction of essential oils instead of being disposed of were subjected to extraction with supercritical CO2 after pretreatment with microwaves. An experimental design was followed to seek the best extraction condition with the objective function being the maximum total extract. Pressure and temperature were statistically significant factors, and the optimal extraction condition was 240 bar, 60 °C, and pretreatment at 30 °C. The applied mathematical models showed good adherence to the experimental data. The extracts obtained by supercritical CO2 were analyzed and the presence of coumarins was confirmed. The extract investigated for cytotoxicity against bladder tumor cells (T24) exhibited significant reduction in cell viability at concentrations between 6 and 12 μg/mL. The introduction of green technology, supercritical extraction, in the exploration of P. polystachyum as a source of coumarins represents a paradigm shift with regard to previous studies carried out with this species, which used organic solvents. Furthermore, the concept of circular bioeconomy was applied, i.e., the raw material used was the residue of a steam-distillation process. Therefore, the approach used here is in line with the sustainable exploitation of native plants to obtain extracts rich in coumarins with cytotoxic potential against cancer cells. Introduction Pterocaulon is a genus consisting of 18 species, some of them used in popular medicine, mainly as infusions and decoctions, for treating skin, liver and respiratory diseases.Coumarins are considered the main active substances responsible for purported benefits in the treatment of certain ailments [1,2]. Previous studies conducted with Pterocaulon polystachyum DC showed that this species is rich in coumarin [1][2][3][4] and also produces essential oils.The essential oil obtained by hydrodistillation presents activity against Acanthamoeba polyphaga.Furthermore, regarding amebicidal activity, the hexane extract of this plant demonstrates activity against A. castellanii [1].The plant has also been investigated for cytotoxic activity.The petroleum ether extract of P. polystachyum tested against human promonocytic U-937 cells reduced proliferation and induced differentiation of proliferation. The current market for drugs is geared toward obtaining end products through organic synthesis involving solvents that are generally harmful to health and undesirable for the environment.In order to circumvent this issue, green techniques such as supercritical fluid extraction (SFE) began to be widely investigated [5][6][7].Such techniques have been successfully employed to obtain essential oils [8], flavonoids [9], coumarins [10], and other natural compounds [11,12]. 
Supercritical extraction is dependent on operating conditions, that is, variables such as pressure, temperature, solvent flow, granulometry of the solid material, and presence of co-solvent, impact the yield and selectivity of the extraction [13].In this sense, experimental designs that help in understanding and optimizing the response of a system or process have often been used for supercritical extraction [14].The dependence of the response on factors such as temperature and pressure [15], co-solvent flow [16,17], extraction time [18], and the effects of raw material pretreatment [19] has been investigated.The response variable, depending on the purpose of the investigation, is the process yield [20] or the selectivity of a target component [21]. In addition to identifying suitable conditions for a process, the mathematical modeling of extraction processes is an important procedure used to improve knowledge of the behavior of a process in order to better predict the performance of extraction units in functional dimensions and different operating conditions [22,23].Different factors have been taken into account while developing mathematical models to depict supercritical extraction.Many processes that employ this technique are based on a bed consisting of the raw material through which the supercritical fluid passes.Therefore, models based on the mass balance in the phases present in the extraction beds have been proposed in the literature [24][25][26] and widely used in the simulation of extractive processes [27][28][29]. Plants of the genus Pterocaulon (Asteraceae) have been studied by our research group, and the presence of coumarins in the P. balansae and P. lorentzii SFE extracts was evinced [30,31]. Considering that both essential oil and coumarins exhibit relevant activities, and following current trends in developing technologies intended to maximize the use of natural resources and reduce waste, the aim of this study was to take full advantage of the aerial parts of P. polystachyum, previously subjected to extraction of essential oil by hydrodistillation. The present study was designed to maximize the total extract yield of non-volatile compounds from the aerial parts of P. polystachyum by investigating the influence of extractive process variables such as pretreatment by microwaves and pressure and temperature of the supercritical fluid.Mathematical modeling was carried out to generate information that could assist in the design of supercritical coumarin extraction units.Finally, the extracts obtained were evaluated for the presence of coumarins and cytotoxic activity against bladder tumor cells (T24). Extraction and Response Surface Method The results of the experiments are showed as Supplementary Material and the response surface were generated by Minitab ® 19.1.The symbols PE, TE, and TP stand for extraction pressure, extraction temperature, and microwave pretreatment temperature, respectively.The response variable was the total extract yield. 
Initially, a quadratic model was generated, consisting of three linear, three quadratic and three interaction components of pressure, temperature, and pretreatment temperature.This model presented an R 2 of 88.13%, indicating that it is a good representation of the obtained experimental data.The adjusted R 2 value was 66.75%, and the factor considered significant (p < 0.05) was the linear component of the extraction pressure.On the other hand, the model itself was not considered significant at a significance level of 95% and was not able to predict the response outside the considered region.For these reasons, we chose to use the stepwise regression method to improve the quality of the model.This method consists in adding or removing terms iteratively, aiming to discard terms that are not statistically significant and whose presence may contribute to an increase in error and to impair the model predictive capacity [32][33][34]. The statistical data (ANOVA) exhibited in Table 1 indicate that the stepwise regression model is statistically significant (p = 0.006) and inform that the components PE and TE present values of p < 0.05.The factors that most contribute to the model are the extraction pressure and temperature variables, responsible for more than 50% of the response.The values for the parameters R 2 and R 2 adj were 85.08 and 73.89.According to Myers et al. [35] a high value of R 2 does not necessarily mean that the regression model is good, since the addition of variables will always increase the value of R 2 , even if the variable is not significant.The same does not occur with the adjusted R 2 value, which decreases as non-significant terms are added to the model [36,37].For the stepwise regression model, the adjusted R 2 was 73.89%, which means that there is a better dependency between the dependent variable (extract yield) and the chosen independent terms (PE, TE, and TP) when compared to the conventional quadratic model.The S-value is another indication of how well the model fits the response obtained experimentally and is measured in the units of the response variable.The smaller it is, the better the response description is.The value of S for the stepwise model was 1.2748, while for the quadratic model it was 1.4384, which corroborates the better quality of the presented stepwise adjustment model.The stepwise regression model generated in non-coded units is expressed by equation 1 for the overall extract yield (%): The surfaces generated from the design of experiments and the model generated by regression are presented below.The graphs were created using Statistica 10.0 software and the values of the variables that are not on the graph were maintained at the value corresponding to the central point (PE = 200 bar; TE = 50 • C; TP = 90 • C). From the observation of the surfaces, it is possible to perceive an increase in the extract yield with the increase in the pressure and temperature variables (Figure 1a-c).In Figure 1a, where the temperature is fixed, it is observed that at the maximum pretreatment temperature, the pressure influence is smaller when compared to the region where the pretreatment temperature is minimum comparing the slopes in the two surface edges.When the extraction pressure is kept constant (Figure 1c) the slope of the response surface is observed in the direction of increasing yield as TP decreases. 
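The stepwise refinement used here can be reproduced in outline with ordinary least squares and backward elimination, as sketched below with statsmodels. The design matrix and yields are synthetic placeholders and the 0.05 p-value threshold is an assumption; the coefficients of Equation (1) are not reproduced by this code.

```python
# Sketch: backward-elimination ("stepwise") refinement of a quadratic response-surface
# model with OLS, dropping terms whose p-value exceeds 0.05 (placeholder data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.uniform(-1, 1, size=(15, 3)), columns=["PE", "TE", "TP"])
y = 8 + 2.5 * df.PE + 1.2 * df.TE - 0.8 * df.TP ** 2 + rng.normal(0, 0.5, 15)

# full quadratic model terms
X = df.copy()
for c in ["PE", "TE", "TP"]:
    X[f"{c}^2"] = df[c] ** 2
X["PE*TE"], X["PE*TP"], X["TE*TP"] = df.PE * df.TE, df.PE * df.TP, df.TE * df.TP
X = sm.add_constant(X)

# drop the least significant term until everything left has p < 0.05
while True:
    model = sm.OLS(y, X).fit()
    pvals = model.pvalues.drop("const")
    if pvals.empty or pvals.max() < 0.05:
        break
    X = X.drop(columns=pvals.idxmax())

print(model.summary().tables[1])
print("R2 =", round(model.rsquared, 3), "R2_adj =", round(model.rsquared_adj, 3))
```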
The optimal conditions were obtained for the three variables considered in the experimental design using the Minitab® optimization tool. The values obtained were 240 bar and 60 °C for the extraction conditions and 30 °C for the optimal pretreatment condition, data that are consistent with what was observed on the response surfaces presented in Figure 1, as already discussed. The average value of the yield between three experiments can then be compared to the value predicted by the polynomial obtained by regression, as in Equation (1). The average experimental yield was 10.46% and the standard deviation was 2.05%, which corresponds to approximately 0.5 g of extract. The yield calculated by the regression model was 12.56%, a value that is located within the region comprising the experimental deviations.

Mathematical Modeling

The curve for the optimal condition is shown in Figure 2 together with the adjustment of the proposed mathematical models. The three models showed satisfactory adherence to the experimental data, model 2 being the one that best represents the experimental data, given the high R² value. Table 2 presents the values for the parameters provided by the mathematical models for the optimal condition of supercritical fluid extraction. The values of the parameters for the diffusion coefficient obtained for Model 1 are similar in magnitude to those reported in the literature for supercritical fluid extraction [38], and also the values found for the surface mass transfer coefficient showed the same order of magnitude found by Almeida et al. [39]. The internal mass transfer coefficient, in Model 2, resulted in an order of magnitude of 10⁻⁸ m/s. This same order was found by other authors investigating different plant species [40].
The parameter values of Model 3 are relatively high when compared to the values obtained by Almeida et al. [39]. However, Falcão et al. [38] obtained higher values for kp, with an order of magnitude of 10⁻², being closer to that obtained in this work. The relatively high order of magnitude of parameters (10⁻³) obtained for Model 3 may be associated with the high amount of extract and the speed with which it is extracted from the plant matrix. A study by Silva et al. [41] found an order of magnitude of 10⁻³ for the partition coefficient in the same way as in this article.

Analysis of the Extracts Obtained in the Experimental Design

The samples obtained in the experimental design were analyzed by thin-layer chromatography (TLC). The chromatogram of the SFE extracts obtained from plant material previously subjected to steam distillation and subsequently exposed to microwaves revealed the presence of several spots with intense blue fluorescence at 365 nm, characteristic of coumarins (Figure 3). Pterocaulon polystachyum is a species known for providing extracts rich in coumarins. At least two dozen different coumarins have already been reported for this plant [1].

Supercritical fluid extraction is an alternative for extraction of thermolabile compounds [42,43]. In this sense, techniques that promote the heating of the vegetal matrix can contribute to the alteration of the chemical composition of the extracts, as is the case with steam distillation and microwaves. For this reason, samples of extracts using the same supercritical fluid extraction condition but which did not pass through the mentioned steps were also analyzed by UFLC.
Retention times and UV absorption spectra were compared with those obtained in previous analyzes of other extracts of species of the genus Pterocaulon under the same conditions.Except for the compound at the retention time of 6.4 min, in sample C, the chromatograms, from a qualitative point of view, are very alike.In Figure 4, the three samples (A, B, and C) showed peaks with a retention time (Tr) of approximately 2.9 (peak 1) and 4.5 min (peak 2), attributed to 7-(2,3-dihydroxy-3-methylbutyloxy)-6methoxycoumarin (obtusinin) and 5-methoxy-6,7-methylenedioxy-coumarin, respectively.Comparing these chromatograms with those obtained by Barata-Vallejo [44], Supercritical fluid extraction is an alternative for extraction of thermolabile compounds [42,43].In this sense, techniques that promote the heating of the vegetal matrix can contribute to the alteration of the chemical composition of the extracts, as is the case with steam distillation and microwaves.For this reason, samples of extracts using the same supercritical fluid extraction condition but which did not pass through the mentioned steps were also analyzed by UFLC. Retention times and UV absorption spectra were compared with those obtained in previous analyzes of other extracts of species of the genus Pterocaulon under the same conditions.Except for the compound at the retention time of 6.4 min, in sample C, the chromatograms, from a qualitative point of view, are very alike.In Figure 4, the three samples (A, B, and C) showed peaks with a retention time (Tr) of approximately 2.9 (peak 1) and 4.5 min (peak 2), attributed to 7-(2,3-dihydroxy-3-methylbutyloxy)-6-methoxycoumarin (obtusinin) and 5-methoxy-6,7-methylenedioxy-coumarin, respectively.Comparing these chromatograms with those obtained by Barata-Vallejo [44], it is possible to infer that the peak with Tr of 5.6 min (peak 3) is related to the presence of 7-(2,3-epoxy-3-methylbutyloxy)-6-methoxycoumarin. The compound indicated by peak 4, present in a small amount in the supercritical extract of the plant material without prior treatment, could not be characterized.The UV profile is somewhat similar to that of 5-methoxy-6,7-methylenedioxy-coumarin (Figure 5).However, it was impossible to infer its possible structure.Also, based on Barata-Vallejo's [44] chromatogram, is possible to suggest the presence of 7-(3-methyl-2butenyloxy)-6-methoxy-coumarin in the three extracts (Rt 7.6 min) (peak 5), since this coumarin was found in the dichloromethane extract of the plant, being one of the major peaks in the chromatogram.To confirm the identity of this compound, the extract obtained from the plant without pretreatment was submitted to column chromatography.The isolated compound was subjected to spectroscopic analysis, which allowed confirmation of the structure of 7-(3-methyl-2-butenyloxy)-6-methoxy-coumarin.This compound, trivially named prenyletin-methyl-ether, has previously been isolated from this plant [1].The structures of the coumarins are exhibited in Figure 6. 
Although the analysis was qualitative, it is possible to verify in the chromatogram that when the plant material was pretreated, there was slightly better selectivity for 5-methoxy-6,7-methylenedioxy-coumarin (peak 2), a compound that has already demonstrated activity against glioma cells [45]. In the extract obtained from the plant without pretreatment, the main product is prenyletin-methyl-ether (peak 5), with 5-methoxy-6,7-methylenedioxy-coumarin present in smaller quantities. Therefore, if 5-methoxy-6,7-methylenedioxy-coumarin is the target compound, pretreatment methods are indicated, since in addition to it being possible to obtain essential oils, the compound appears in greater quantity.

Cell Viability Analysis (MTT)
The effect of the extract obtained through supercritical CO2 on the cytotoxicity in bladder tumor cells (T24) was assessed. T24 cells were exposed to increasing concentrations from 6 to 200 µg/mL for 24 h, and cell viability is illustrated in Figure 7. It was possible to observe a significant reduction in cell viability between concentrations of 6 and 12 µg/mL. No significant differences were observed among the concentrations of 25, 50, 100, and 200 µg/mL for the extract obtained through supercritical fluid. The IC50 values of the extracts were calculated using linear regression from cell viability data and extract concentrations, both on a base 10 logarithmic scale [45]. The IC50 represents the concentration required to reduce cell viability by 50%. For the extract obtained via supercritical fluid, the calculated IC50 value was 6.45 µg/mL.

The microwave pretreatment was carried out in microwave equipment (Lab-Kits, model WM-MD6M), with 1200 W power and an operating frequency of 2450 MHz. The pretreatment temperatures considered in the experimental planning were defined in order to seek a mild treatment (T < 150 °C). The power was set at 50% of the total power, and the time for which the sample was kept at the temperature chosen for each pretreatment was 1 min.

Extraction with Supercritical Fluid

The extractions were carried out in the pilot unit of supercritical extraction, a description of which is found in Scopel et al. [48]. The solvent flow rate used in the extractions was 1000 g/h and the extraction time was set at 2 h, enough time for the plant material to be depleted according to previous experiments. At the end of each extraction, the flasks were weighed to obtain global extract yield data (Figure 1). The experiments were carried out in an extractor vessel with an internal volume of 100 mL, and the plant mass used was 23 g. The solvent:feed ratio used was 86.9 g:g. The presence of coumarins in the extracts was determined by thin-layer chromatography (TLC).

Experimental Design for SFE: Box-Behnken Technique

Experimental design techniques that help to understand and optimize the response of a system or process have often been used for supercritical extraction [49]. In this study, the experiments were planned using the Box-Behnken technique in Minitab® software, where three factors were evaluated: extraction pressure, extraction temperature, and microwave pretreatment temperature.

Before starting the experiments with microwave pretreatment, the plant material resulting from steam distillation (after drying and grinding) was subjected to a sequential extraction with supercritical CO2 following a procedure presented by Torres et al.
[33], where a scan of pressure conditions (80, 120, 160, 200, 240, 280, and 300 bar) at a constant temperature of 40 °C was carried out in order to identify adequate pressure conditions for obtaining extracts containing coumarins. The extracts resulting from this process were analyzed by TLC. In all the fractions obtained at different pressures, the presence of compounds with the strong fluorescence characteristic of coumarins was verified. Therefore, the conditions applied were suitable for obtaining these compounds. Once the appropriate parameters had been established, the extractions following the experimental planning indicated in Table 3 (experimental section) were carried out. The temperature range was chosen based on information from previous work with similar plant species [32,33]. Given that total extract mass values had been obtained in the experiments, Minitab® was used as an optimization tool. Subsequently, for the optimal condition found, the extraction curve was constructed. For this purpose, the flasks were weighed at intervals of 5 min in the first 20 min of extraction and then at intervals of 10 min.

Mathematical Modeling

The mathematical modeling of supercritical extraction is a procedure that has been extensively investigated because it is decisive for monitoring and developing units, whether on a laboratory, pilot, or industrial scale. The aim here was to evaluate the performance of three mathematical models, supported by different hypotheses, in the representation of the extraction process.

Model 1, proposed by Crank [50], assumes that diffusion is the phenomenon that governs mass transfer and is described by Fick's second law in rectangular, unidimensional coordinates, subject to a convective boundary condition [51]. It considers that the solute is uniformly distributed in the solid particles and that the transfer rate to the fluid surrounding the particle is directly proportional to the difference in solute concentration between the surface of the particle and the fluid that flows externally. In this model, the diffusion coefficient of the solute in the solvent, D, and the surface mass transfer coefficient, kc, are unknown and are determined by the parameter adjustment technique. This type of model has been successfully used by some authors to model supercritical extraction [48,52].

Model 2 is based on the proposal made by Sovová [25], which considers the solute uniformly distributed in two types of solid particles, intact and broken. Intact particles are associated with structures with difficult access to the solute, and broken particles with easy access to the solute. Based on these hypotheses, Silva et al. [53] considered extraction with negligible external mass transfer resistance and presented mathematical equations for the mass of extract obtained as a function of time for each of the two stages of the extractive process. The authors [53] added an expression for the time associated with exhaustion of the easily accessible solute and rewrote the model with this consideration. The unknown parameters of this model are the mass transfer coefficient in the solid phase, ks, the solute mass fraction in the fluid phase at the saturation condition, Y*, the mass of easily accessible solute, M*, and the time to start the extraction of the difficult-to-access solute, τ.
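Models 1 and 2 above (and Model 3, described next) are all calibrated in the same way: the unknown coefficients are adjusted until the modeled overall extraction curve matches the measured one. The sketch below illustrates that workflow in Python with a deliberately simplified two-stage curve (a constant-rate stage for the easily accessible solute followed by an exponential, internally controlled stage); the curve equation, the synthetic data points, and the initial guesses are illustrative assumptions and do not reproduce the exact Crank, Sovová/Silva, or Reverchon equations or the study's Matlab®/EMSO implementation.

```python
# Minimal sketch (not the exact model formulations used in the study): a
# simplified two-stage overall extraction curve fitted by least squares,
# mirroring the parameter-adjustment idea. All numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def two_stage_curve(t, r0, tau, k, m_tot):
    """Extract mass vs. time: constant-rate stage up to tau (easily accessible
    solute), then exponential approach to the total extractable mass m_tot."""
    m_star = r0 * tau                       # mass extracted in the fast stage
    slow = m_star + (m_tot - m_star) * (1.0 - np.exp(-k * (t - tau)))
    return np.where(t <= tau, r0 * t, slow)

# Illustrative "experimental" points (time in min, cumulative extract mass in g)
t_exp = np.array([5, 10, 15, 20, 30, 40, 50, 60, 80, 100, 120], dtype=float)
m_exp = np.array([0.10, 0.19, 0.27, 0.33, 0.41, 0.46, 0.49, 0.51, 0.53, 0.54, 0.545])

p0 = [0.02, 20.0, 0.05, 0.55]               # initial guesses: r0, tau, k, m_tot
popt, pcov = curve_fit(two_stage_curve, t_exp, m_exp, p0=p0)
residuals = m_exp - two_stage_curve(t_exp, *popt)
sse = float(np.sum(residuals**2))           # the objective actually minimized

print("r0 = %.4f g/min, tau = %.1f min, k = %.4f 1/min, m_tot = %.3f g" % tuple(popt))
print("sum of squared errors:", sse)
```

In the study itself, the fitted quantities are the model-specific coefficients (D and kc; ks, Y*, M*, and τ; kTM and kp), but the least-squares objective is the same.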
Model 3, proposed by Reverchon [24], is based on the differential mass balances for the solid phase and for the fluid phase in a fixed extraction bed. The model assumes that a linear relationship describes the experimental data at equilibrium, that axial dispersion is negligible, and that the solvent density and the flux along the bed are constant. The equations are solved numerically, and the parameters kTM, the global mass transfer coefficient (s⁻¹), and kp, the volumetric partition coefficient of the extract between the solid and fluid phases at equilibrium, are determined by the parameter adjustment technique. This model has been successfully used in the representation of supercritical extraction data for different plant species and for different raw material structures, considering different geometries [39,41,54,55].

The estimation of the parameters in all chosen models was performed by minimizing the sum of the squared errors between the average values obtained experimentally in triplicate and the values calculated by the mathematical models. The software used to estimate the parameters was Matlab® and EMSO [56].

Analysis of the Extracts

The extracts obtained were solubilized in dichloromethane and applied to a silica gel 60 plate for analysis by TLC. The mobile phase used consisted of chloroform and methanol in a 98:2 ratio. After elution, the plate was visualized under UV light at a wavelength of 365 nm.

The samples were also analyzed by ultrafast liquid chromatography (UFLC), carried out according to the methodology developed by Medeiros-Neves [1]. The dry extracts were treated with acetone, followed by filtration with filter paper to remove the waxes. Acetone was removed by evaporation under reduced pressure. Subsequently, an aliquot of approximately 2.5 mg of extract was dissolved in 1 mL of acetonitrile with the aid of ultrasound and diluted in an aqueous solution of acetonitrile (1:1) to 10 mL. The sample was filtered again through a filter suitable for injection into the chromatograph (2.2 µm). The equipment used was a Shimadzu SPD-M20A equipped with a diode array detector (DAD). Monitoring and processing of the output signal were performed with Shimadzu LC-solution Multi-PDA software (LC-solution Version 1.25 SP4). A Shim-pack XR ODS column with a length of 100 mm, an internal diameter of 2.0 mm, and a particle size of 2.2 µm and a C18 SecurityGuard™ ULTRA pre-column (Phenomenex, Torrance, CA, USA) were used. The composition of the mobile phase was 0.1% formic acid (v/v) and acetonitrile. The injection volume was 5 µL, the flow was set at 0.55 mL/min for 8 min, and the analysis temperature was set at 55 °C. The wavelength was set to 327 nm.
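Stepping back to the Box-Behnken plan described in the experimental design subsection above, the coded three-factor design can also be reproduced without Minitab®. In the sketch below, the low/mid/high levels assigned to pressure, extraction temperature, and pretreatment temperature are assumed values for illustration only; the actual levels are those listed in Table 3 of the paper.

```python
# Minimal sketch: build a coded three-factor Box-Behnken design (12 edge runs
# + center points) and map it to assumed factor levels. The levels below are
# illustrative assumptions, not the values of Table 3.
from itertools import combinations, product

def box_behnken_3(n_center=3):
    """Coded 3-factor Box-Behnken design: for each pair of factors take the
    four (+/-1, +/-1) combinations with the remaining factor at 0."""
    runs = []
    for i, j in combinations(range(3), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0, 0, 0]
            run[i], run[j] = a, b
            runs.append(run)
    runs += [[0, 0, 0]] * n_center
    return runs

# Assumed (illustrative) low / mid / high levels for each factor
levels = {
    "P [bar]":   (80, 160, 240),   # extraction pressure
    "TE [degC]": (40, 50, 60),     # extraction temperature
    "TP [degC]": (30, 90, 150),    # microwave pretreatment temperature
}

names = list(levels)
for coded in box_behnken_3():
    actual = [levels[n][c + 1] for n, c in zip(names, coded)]
    print(dict(zip(names, actual)))
```

The 15 runs match the standard three-factor Box-Behnken layout; in the study, the same kind of design was generated and the response surface analyzed in Minitab®.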
Cell Cytotoxicity by MTT Assay and Statistical Analysis

Cell viability assessment was carried out at the Laboratory of Applied Pharmacology (LAFAP) within the School of Health and Life Sciences at PUCRS, utilizing the MTT colorimetric method. T24 cells were plated at a density of 5 × 10³ cells per well on a 96-well plate and left to incubate for 24 h. Subsequent to the incubation period, the cells underwent treatment with various concentrations of the extract acquired through supercritical fluid extraction (ranging from 6 to 200 µg/mL) for 24 h. Post-treatment, the culture medium was removed, the cells were rinsed with PBS (pH = 7.2-7.4), and MTT solution (100 µL) was added, followed by an incubation period of 3 h. Formazan crystals were dissolved in 100 µL of DMSO, and the optical density was measured at 570 nm using a SpectraMax Plus plate reader. The absorbance value was linearly correlated with the number of viable cells exhibiting active mitochondria. Outcomes were expressed as the percentage of absorbance in treated cells in comparison to controls, i.e., relative cell viability as a percentage (%) of untreated control cells.

The statistical analysis employed was a one-way analysis of variance (ANOVA), followed by the Tukey post hoc test. Findings are depicted as the mean ± standard error of the mean. GraphPad Prism 8.0® software was employed for graphical representation. p values below 0.05 were deemed indicative of statistical significance.

Conclusions

The optimal conditions found for supercritical extraction performance with regard to global yield were 240 bar and 60 °C for the operating conditions of the extraction vessel and a microwave pretreatment temperature of 30 °C. The RSM analysis indicated pressure and temperature to be the variables of greatest significance on yield. The effect of the temperature applied in the microwave treatment was not statistically significant. The mathematical models used to simulate mass transfer were adequate and showed high adherence to the experimental data. Model 2 performed slightly better than the other models. Both models 2 and 3 assume that external resistance is negligible, which suggests that transport of the extract within the plant material is the limiting step. Although microwave pretreatment was not significant, other types of pretreatment would be important to investigate to improve internal mass transfer. The analyses of the extracts allowed the characterization of the compounds 7-(2,3-epoxy-3-methyl-butyloxy)-6-methoxycoumarin, 5-(2,3-epoxy-3-methylbutyloxy)-6,7-methylenedioxycoumarin, 7-(3-methyl-2-butenyloxy)-6-methoxy-coumarin, and 5-methoxy-6,7-methylenedioxy-coumarin, the latter with known cytotoxic activity against glioma tumors. The extract rich in coumarins, obtained through supercritical fluid extraction, demonstrated cytotoxicity for a bladder tumor cell lineage (T24). These results corroborate previous research on the cytotoxic potential of the identified coumarins against cancer cells. The originality of this study lies in the application of the circular bioeconomy concept in the processing of aerial parts of P.
polystachyum, resulting from the use of steam distillation process residue as raw material to obtain coumarins. This research also concludes that it is viable to obtain two products, essential oil by steam extraction and a coumarin-rich extract by supercritical extraction, using the same raw material, as well as enabling the introduction of a green technology, supercritical CO2 extraction, replacing the use of organic solvents, to obtain non-volatile extracts with cytotoxic activity against cancer cells.

Figure 1. Response surfaces for effects of two independent variables on the global yield of the extract obtained by supercritical extraction: (a) extraction pressure (P) and pretreatment temperature (TP); (b) extraction temperature (TE) and extraction pressure (P); (c) pretreatment temperature (TP) and extraction temperature (TE).

Figure 2. Curve of the optimal condition for supercritical fluid extraction: experimental data and adjustment of mathematical models.

Figure 3. TLC of the 13 different extracts under UV light (365 nm).
Figure 4. (A) UFLC chromatogram profile of the extract obtained from plant material subjected only to SFE; (B) SFE extract obtained from plant material previously subjected to steam distillation; and (C) SFE extract obtained from plant material previously subjected to steam distillation and subsequently exposed to microwaves.
Figure 5. UV absorption spectra (327 nm) of the compounds indicated by peaks 1-5 in the chromatograms shown in Figure 4A-C.

Table 1. ANOVA of the model generated by regression by the stepwise method. DF: degrees of freedom; Adj SS: adjusted sum of squares; F: F-statistic; p: p-value. TE, PE, and TP correspond to extraction temperature, extraction pressure, and microwave pretreatment temperature, respectively.

Table 2. Parameters of the mathematical models for supercritical fluid extraction in optimal conditions.
v3-fos-license
2020-11-12T09:10:13.374Z
2020-11-07T00:00:00.000
228871970
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2076-3263/10/11/444/pdf", "pdf_hash": "ffca076e7e8062959bd2a3af28b6a564498ece43", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:257", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "d3bee5d745df6c003b58654b66a347fe84d43bb1", "year": 2020 }
pes2o/s2orc
Identification of Problems Associated with the Usage of Friction (Koeppe) Hoists Based on Geodetic Measurements

The hoist assembly based on the Koeppe friction is a commonly used solution in mining. However, it has some disadvantages. A few centimeters offset of the groove axis can lead to excessive abrasion of linings on the Koeppe friction and pulleys. As a consequence, the mines are forced to bear the direct and indirect costs of replacing the linings, such as the cost of materials and service as well as the cost of extended machine and shaft downtime. Last year, the authors undertook a geodetic inventory of the condition of two hoisting machines with a Koeppe winder. Terrestrial laser scanning enhanced with precision total station measurements was performed. Additionally, elements particularly important for the performed analysis (inclination of hoisting machine and rope wheel shafts) were determined by the precision leveling technique. Obtained results were verified using measurements on Szpetkowski's tribrach. Appropriate selection of the measurement methods in both analyzed examples allowed us to determine the causes of destruction of each hoist assembly component. Based on precise geodetic data, guidelines have been defined for rectification (twisting and shifting the rope pulleys), which seems unavoidable despite the lack of unambiguous legal regulations.

Introduction

The shaft hoist assembly, which consists of the hoisting machine, the headgear, and the shaft, is the core of underground mine manufacturing. Exceeding the value of tower inclination or rope friction angles, or excessive inclination of the pulley shafts and hoisting machine, may lead to critical failures [1]. The measuring service periodically or continuously monitors the condition of the hoist assembly geometry, indicating the rectification process. Many studies related to different aspects of mine hoist and friction safety [2][3][4][5][6][7][8] have been conducted at mines. In this article, the authors focused on a comprehensive geodetic inventory, with the determination of the measuring methodology, of a hoisting device with a propeller (the Koeppe system) and two levels of rope pulleys associated with it (upper and lower) [9]. The process of conducting geodetic control measurements on the example of two hoisting assemblies with the Koeppe system is presented. The geodetic control methods used so far have usually been based on various plumbing methods. Few automation trials have allowed for the determination of one or two elements of the geometric control of the shaft hoist assembly. However, to make a full inventory of the mining shaft, several complementary classical measurement techniques have to be carried out. As a consequence, such an approach significantly extended the performance of the surveys. Currently, the set of classical geodetic methods can be replaced with laser scanning, which allows obtaining as much necessary information as possible in the shortest possible time [1,10]. This approach is visible in every branch of industry where it is necessary to obtain as much precise data as possible in the shortest possible time. Laser scanning can be used in testing the clearance gauge of railways [11], maritime measurements [12], monitoring of roads and bridges [13] or building [14] state, concrete structures [15], and in many other industries. Naturally, this technology has also been developing in the mining industry for over 20 years [16,17]. The scope of the data obtained with this method is incomparably greater than that obtained with traditional methods. Furthermore, the use of scanning may be important from an economic point of view as well as for the safety of employees [18]. The inventory measurements of the hoisting machine and rope wheels, along with the determination of the tower and shaft inclination, can be performed by a four-person measuring team even during one shift. To have a comprehensive inventory, only the precise leveling of the shaft's cable pulleys and winding machine, together with establishing the geodetic network to link the model to the local or global system, is required to be performed with the help of classical methods.

Materials and Methods

In December 2019, several hoisting assemblies were inventoried by a surveying team consisting of AMC company (Leawood, KS, USA) and AGH University of Science and Technology representatives with extensive experience in this type of research (e.g., [1]). Two of the measured objects had problems associated with the usage of Koeppe friction. The measurements were performed using a panoramic laser scan with the phase scanner FARO FOCUS 3D. The device allows information to be obtained about the measured object with a resolution of 5 mm/10 m distance and with an accuracy not worse than ±2 mm for a single measuring station.
Additionally, elements particularly important for the performed analysis (inclination of the hoisting machine and rope wheel shafts) were determined by the precision leveling technique (with the usage of a Trimble DiNi 0.3 and a set of invar rods that ensures leveling accuracy below 0.3 mm/1 km). The internal consistency of the point cloud (Figure 1) was ensured by high accuracy measurements of the geodetic network with a total station (Trimble C3, with an accuracy of 2″ in angle and 2 mm + 2 ppm in length).

Another important issue was the problem of uneven lining wear and a high probability of excessive tower vibrations.
Therefore, dynamic tests of the headframe inclination during its operation and emergency braking were performed. Simultaneously, the rope movement amplitude was also tested. The test of the headframe inclination was carried out with the use of two measurement technologies: laser scanning and GNSS measurement in real-time kinematic mode, with a receiver located on the shaft tower (Figure 2) and the base station recording data in static mode [19].

The results of the laser scanning measurements (point clouds) allowed us to develop spatial models of the inventoried objects [20] (Figure 3). In addition, subsequent computer-based analyses were performed, such as determining the characteristic points located on the axles of the hoisting machines and cable pulleys, establishing transverse profiles on the shaft towers, and determining the amplitude of vibration of the winding rope. The obtained base of points was used to prepare the spatial arrangement of the axes of the Koeppe drums, pulleys, shaft tower structure, and ropes.
All the obtained data were finally used to determine the friction angles of the ropes on the winding wheels and the inclination of the pulley shafts and the winding wheels, to determine the verticality of the shaft tower (including during its operation period and emergency braking), and to determine the verticality of the ropes (descending from the level of the cable pulleys to the shaft room). Altogether, these data allowed us to determine whether the inventoried objects could still be safely used according to the law. Moreover, they constitute a database necessary to perform the rectification of individual elements of the hoist assembly and form an excellent source needed to complete the engineering documentation of the mine.

Applicable Law Regulations in Relation to the Inventoried Objects

Every kind of geodetic survey should be carried out in relation to the applicable law (applicable in the country where the inventoried object is located). Therefore, it is crucial to be aware of the applicable law regulations on a given topic. The calculation of apparent angles of rope friction (measured on a horizontal plane) is associated with calculating the differences of the azimuths (directional angles) of the axes characterizing the winding device. In applicable Polish law, the only reference to the case of hoisting assemblies is present in the notation from the Regulation of the Minister of Energy on detailed requirements for the operation of underground mining plants (Annex 4, point 3.11.17): "In the rope wheels on the tower shaft hoists with a winder or a bobbin hoisting machine, the symmetry plane of the pulley groove coincides with the plane defined by the axes of oncoming and converging rope" [21].

In the case of a hoisting device with a Koeppe friction and two rope pulleys (upper and lower) associated with it, the friction angles (and shifts of the proper grooves) are determined separately for the upper rope and the lower rope. Therefore, for the upper rope on the upper wheel, the friction angle is determined as the difference in the direction angles of the longitudinal axis of the pulley and the individual pulling axis associated with that pulley. For this upper rope, the friction angle on the Koeppe winder is determined as the difference in the direction angles of the longitudinal axis (plane) of the Koeppe winder and the pull axis associated with that wheel. The apparent friction angles for the bottom rope are determined analogously.

In a situation where the hoisting device is a multi-rope friction winder, the apparent rope friction angles are determined for each pair of corresponding grooves on the drive wheel and cable wheel. Two friction angles are determined for each pair, one for the rope on the drive wheel and one for the rope on the cable wheel. To determine these friction angles, it is necessary to know the azimuths (direction angles) of the pull axis for the corresponding set of grooves on the drive wheel and on the cable wheel. This pull axis is defined as the straight line connecting the center of the rope groove on the drive wheel with the center of the pulley groove.
The friction angles of the rope on the driving wheel are therefore the differences between the azimuths of the individual grooves on this wheel and the azimuths of the pull axis associated with each of these grooves. Similarly, rope friction angles on the cable wheels are the differences between the azimuths of the grooves on those wheels and the azimuth of the pull axis associated with the individual wheel (Figure 4). In the case of the Koeppe winder and two rope pulleys connected to it (positioned next to each other), the angles of friction and the displacement of the grooves are determined separately for the underlap rope and the overlap rope. The friction angle is determined as the difference in the direction angles of the longitudinal axis of the pulley and the rope associated with that pulley. For this rope, the friction angle on the Koeppe winder is determined as the difference of the direction angles of the longitudinal axis (plane) of the Koeppe winder and the pull axis. However, it is not possible to comply with the provision contained in the regulation cited above. Therefore, it should be assumed that the drawing axis should be placed symmetrically (in the middle) between the planes of the pulleys (Figure 5; here O17 denotes the hoisting machine suspension axis, i.e., the vertical line passing through the center of the rope cross-section at the point where the rope exits the rope pulley).

The first inventoried object was an A-form double backstay headframe with a multi-rope friction Koeppe winder and two levels of wheels (Figures 6 and 7). Multi-rope friction winders are tower mounted, with either cages or skips, and provided with a counterweight. Four ropes are used, operating in parallel and sharing the total suspended load. The hoisting assembly is located in the east of France and supports the closed potassium salt mine. Although currently the shaft tower and the shaft itself are not heavily used, in the near future a significantly higher load of the tower is planned, both in terms of travel frequency and weight of transported materials. The mine's power engineering department has diagnosed the problem of uneven lining wear at the upper level of the pulleys.
According to the observations, at a higher winding speed of the cage (the maximum allowed speed in this analyzed problem was 4 m/s) and a larger load mass, the overlap rope exceeded its normal movement amplitude, and the tower had been vibrating more than usual. This second aspect should not affect the operation of a tower with a steel structure (especially with the low height of the lower and upper pulleys above the shaft station: 24 and 28 m). However, it was also decided to analyze this issue. It is worth mentioning that French law does not raise the issue of using shaft towers, periodic technical inspections, or critical values of deviations. It is the manager's responsibility to maintain the technical condition of the facilities in an appropriate state. The mine administrator decided to base the assessment on the Polish regulations and the experience of the measuring team.

The second analyzed object was a steel structure shaft tower with a single-rope friction Koeppe winder and pulleys arranged side by side (Figure 8). The shaft tower is situated in Poland (Lower Silesia); therefore, the authors performed their surveys and analyses according to Polish law. From the beginning of the mine's operation, there were two shafts spaced 20 m apart, with two symmetrical perpendicular two-post head frames connected with one headroom building above these shafts. As a result of the mine reconstruction, one shaft and its shaft tower were removed. As a consequence, the attached headroom building was demolished. One compartment was left in the shaft (the skip had been removed) and, as a result, the perpendicular strut had been dismantled. Now, the shaft tower that was measured looks like a one-post head frame and its construction is significantly weakened. The consequences of the above changes can be observed in the inclination of the construction.

Due to the occurring and diagnosed problem (lining abrasion during operation), both cases, in France and Poland, are similar. In simplified terms, it can be assumed that the corresponding shaft axles on the propeller plate and the cable pulleys are not collinear. The issue has been temporarily solved by dredging (deepening) the groove at the place where the rope is located during the measurement/inspection. However, after a while, the rope rolls further and creates a new groove. As a consequence, oversize clearances and asymmetrical tensions are created in the grooves on the drive wheel and in the pulleys, which results in a different distribution of forces than originally planned.
This is an extremely dangerous situation, because the uneven loading of the drive wheel and rope wheels can not only affect lining wear but also cause unforeseen structural loading of the headgear and shaft tower as well as abnormal vibrations. In the worst case scenario of buckling of the shaft and displacement of the descent points of the ropes into the shaft, this in turn can lead to asymmetrical guiding of the shaft cage and critical failures related to it.

Results

To diagnose the problem and show the value of the groove offsets, a very precise geometry analysis is crucial. It is necessary to identify not only the points of departure of the ropes from the propeller shaft and the pulleys, but also the twisting of these elements relative to each other and to their shafts. Thus, on very short bases, drawing azimuths are extrapolated even to several dozen meters. This process requires measuring many more points, not just characteristic ones, as is done using total station measurements or the earlier used "ordinate and cut" method. This approach clearly indicates the usage of laser scanning, which allows a point cloud to be generated, representing the entire object and its surroundings in high resolution. Then, based on the modeling of the point cloud, a spatial model of the shaft axis, wheel surfaces (Figure 9a), and propeller shaft, ropes, and the tower itself with its core (Figure 9b) is created.

As mentioned earlier, most of the elements crucial for the analysis were determined twice using different measuring methods (Table 1). An error in determining a single point was adopted to carry out an accuracy analysis in order to compare the used measurement methods. However, when relating the total station and laser scanning methods, it must be said that determining a point with a total station does not guarantee a perfect definition of the object. It is not possible to perfectly pick out the axis of the rope or the groove in the field, but it is possible to do this on a spatial model (point cloud) of a given object. Moreover, the measurement time itself was analyzed, and thus the time of excluding the object from operation. This is a very important aspect in the context of shaft facilities, the closure of which generates logistical problems and costs.
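To make the "short base extrapolated to several dozen meters" remark above concrete, the sketch below estimates how a small positional error on a short groove-axis base propagates when that axis is extended toward the shaft. The base lengths, point errors, and extrapolation distance are assumed round numbers for illustration, not values measured in the study.

```python
# Minimal sketch: sensitivity of an extrapolated drawing azimuth to point errors.
# A transverse error "sigma_p" at the ends of a short base of length "b" tilts
# the fitted axis by roughly atan(sqrt(2)*sigma_p / b); extending the axis by a
# distance L turns that tilt into a lateral offset. Numbers are illustrative.
import math

def extrapolation_offset(sigma_p_mm, base_m, extrapolation_m):
    tilt_rad = math.atan(math.sqrt(2.0) * (sigma_p_mm / 1000.0) / base_m)
    return math.tan(tilt_rad) * extrapolation_m * 1000.0   # lateral offset [mm]

for sigma_p in (1.0, 2.0, 3.0):                # point error on the base [mm]
    for base in (0.5, 1.0, 2.0):               # groove-axis base length [m]
        off = extrapolation_offset(sigma_p, base, extrapolation_m=40.0)
        print(f"sigma_p={sigma_p:.0f} mm, base={base:.1f} m -> "
              f"offset at 40 m = {off:.0f} mm")
```

Even a 1-2 mm error on a half-meter base translates into centimeter-to-decimeter level offsets at the shaft, which is why many more points than the few characteristic ones are needed to model the axes reliably.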
The last important queries were to compare the economic aspect of processing the data from each method and to specify the information that each method provides for the individual analyses. Table 1 summarizes the averaged errors in determining the coordinates of points or the azimuths of pairs of points used for the analysis of the subsequent objects, at the same time marking the data sources used for the analysis.

Table 1. Averaged errors in determining the coordinates of points or azimuths of pairs of points used for the analysis of subsequent objects (at the same time marking the sources of data taken for the analysis).

The basic analysis of the table in terms of the methods used for obtaining the information clearly indicates that the most universal measurement technology is terrestrial laser scanning. Properly conducted post-processing of the point cloud enables all the data necessary for the mining documentation to be provided, with the reservation of a lower accuracy of determining the inclination of the shafts of the hoisting machine and cable pulleys. Therefore, it is necessary to supplement this part of the survey process with precision leveling. The precise leveling itself is not very time-consuming: depending on the shafts' accessibility, it should take up to 15 min for each element (for the machine shaft and each level of the pulleys).

The technologies with the highest coverage in the above table were laser scanning and total station measurements. They allowed us to obtain information about most of the elements required in the inventory process. Nevertheless, the method used up to now to determine most of the elements (tacheometric) has been creating many issues. Elements such as the axis of the rope, the groove, or the plane of the drive disc are represented only by several points. As a result, the created point set contains a small number of supernumerary observations, and individual axes are fitted into these thin point sets using averaging methods. Laser scanning gives a similar accuracy of measurement, but each element is represented by millions of points, so methods of suppressing outliers and modeling the element with analytical methods can be used.
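As a hedged illustration of what "suppressing outliers and modeling the element with analytical methods" can look like in practice, the sketch below fits a straight axis to a dense, noisy point cloud with a simple trim-and-refit loop around an SVD line fit. The synthetic points and thresholds are assumptions, and the production workflow in the study relied on dedicated point-cloud software rather than this snippet.

```python
# Minimal sketch (illustrative, not the software pipeline used in the study):
# fit the axis of an element (e.g., a pulley shaft) to a noisy 3D point cloud
# by SVD, discarding gross outliers with a residual-based trim-and-refit loop.
import numpy as np

def fit_axis(points):
    """Least-squares line fit: returns a point on the axis and a unit direction."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]            # first right-singular vector = direction

def robust_fit_axis(points, n_iter=3, k=3.0):
    pts = points
    for _ in range(n_iter):
        c, d = fit_axis(pts)
        resid = np.linalg.norm(np.cross(pts - c, d), axis=1)  # distance to line
        keep = resid < k * resid.std() + 1e-12
        pts = pts[keep]
    return fit_axis(pts)

# Synthetic example: points scattered around a known axis, plus a few outliers
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0, size=5000)
true_dir = np.array([0.0, 0.0, 1.0])
cloud = np.outer(t, true_dir) + rng.normal(scale=0.002, size=(5000, 3))
cloud = np.vstack([cloud, rng.uniform(-0.5, 0.5, size=(50, 3))])  # outliers

c, d = robust_fit_axis(cloud)
print("fitted direction (sign is arbitrary):", np.round(d, 4))
```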
In fact, only the parts that need to be analyzed are measured with a laser scanner. Access to the shaft pit, the levels of the rope pulleys, or the hoisting machine building is often very limited, and combining the scans by fitting them to spheres, planes, or characteristic points (and even cloud-to-cloud registration) would be too inaccurate. In addition, it is usually required to precisely reference the measurements, and thus to determine characteristic elements such as rope azimuths, drawing azimuths, azimuths of shaft axes, and azimuths of the planes of the rope pulleys and the driving drum, in a global or local (mine) system. Therefore, it is necessary to connect successive scanner stations or groups of scanner stations (the hoisting machine, the rope pulley levels, and the shaft at the shaft room level are measured separately and independently). This is done by measuring the geodetic network (using the multi-tripod method to minimize the impact of re-centering) and, already in reflectorless mode, the black and white reference targets. In flat terrain, such as the transition from the shaft yard to the shaft room or to the hoisting machine building, the angle measuring errors are much smaller than in the steep-target measurements carried out when measuring the warp or determining the discs at the level of the rope pulleys. The error in determining the characteristic points using the tacheometric method is therefore the same as the error generated by the exact alignment algorithm. From the propagation of errors, it can be assumed that the error in determining the coordinates of points by laser scanning can be taken as the square root of the sum of the squares of the error in determining the point within one group of scans (point clouds) and the error in determining the reference targets that fit these scans to the model [22]. The same applies to determining the azimuths of the ropes and of the grooves of the drive disc and rope pulleys. The accuracy of determining the azimuths after the geodetic network is elevated to the levels of the pulleys is lower, and the errors determined by the laser scanning method are correspondingly greater, according to the propagation of errors. The determination of the inclination of the headframe can be carried out independently by making a measurement with a laser scanner around the tower, or by referring it to the local or global system using geodetic methods. With a correctly performed measurement, averaging, and accurate sections, the value of the inclination (and its path) of the headframe axis was 6 mm.

Discussion

In the first analyzed case (France), the upper rope wheels required rectification, but the entire object was inventoried. Thanks to the geodetic point network situated at the shaft room and its surroundings, it was possible to determine the coordinates of the black and white scanning targets. These targets were situated at each level of the pulleys, at the shaft station, and in the building with the winder drum, and were used for point cloud registration and georeferencing. In addition, the coordinates of characteristic points on the shaft tower construction were determined with reflectorless measurement for better control of the results. The obtained point clouds were post-processed. Azimuths of the planes and grooves of the pulleys and the propeller shaft, as well as the rope and shaft axes, were obtained using the spatial model located in the local coordinate system. The azimuth values allowed the determination of the rope friction angles on the pulleys and the winding pulley.
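The step from azimuths to apparent friction angles is a simple planar computation: the apparent friction angle at each wheel is the difference between the azimuth of its groove axis and the azimuth of the pull axis joining the groove centers. The sketch below shows this arithmetic on made-up coordinates; the coordinates, azimuth values, and helper names are illustrative assumptions and do not come from the surveyed objects.

```python
# Minimal sketch: apparent (horizontal-plane) rope friction angles from the
# azimuths of the groove axes and of the pull axis connecting groove centers.
# All coordinates and azimuths below are made-up illustrative values.
import math

def azimuth(p_from, p_to):
    """Azimuth (direction angle) of the segment p_from -> p_to, in degrees,
    measured clockwise from the +Y (north) axis as in surveying practice."""
    dx, dy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def friction_angle(groove_axis_az, pull_axis_az):
    """Smallest signed difference between groove-axis and pull-axis azimuths."""
    return (groove_axis_az - pull_axis_az + 180.0) % 360.0 - 180.0

# Illustrative groove centers (local mine system, metres): drive wheel and pulley
drive_groove_center  = (1000.000, 2000.000)
pulley_groove_center = (1000.065, 2018.500)     # ~18.5 m away, 65 mm offset

# Illustrative groove-axis azimuths modeled from the point cloud [deg]
drive_groove_axis_az  = 0.45
pulley_groove_axis_az = 359.90

pull_az = azimuth(drive_groove_center, pulley_groove_center)
print(f"pull axis azimuth:            {pull_az:8.4f} deg")
print(f"friction angle @ drive wheel: {friction_angle(drive_groove_axis_az, pull_az):+8.4f} deg")
print(f"friction angle @ rope pulley: {friction_angle(pulley_groove_axis_az, pull_az):+8.4f} deg")
```

Groove-axis offsets such as those reported in the next paragraph follow from the same geometry, by extending each modeled groove axis over the distance between the wheels and comparing it with the corresponding groove on the other wheel.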
A digital 3D model of the object was made, which enables a determination of the axis shifts of the grooves. The lower wheels converged with the drive wheel disc to within 13 mm, and the upper wheels turned out to be twisted relative to the propeller. Ropes coming out of the winding pulley grooves perfectly hit the upper rope wheels but, when twisted, they showed non-parallelism. The extension of the axes of the grooves of the upper pulleys toward the winding pulley showed errors of 92 and 128 mm. Therefore, for rectification, it is required to slightly twist and move the arrangement of the upper pulleys while maintaining the position of the current points of departure of the rope on the winding machine. Changes in the points of descent of the rope into the shaft indicate that its deflection from the vertical will improve.

The second analyzed case (in Silesia) comes down to the verification of the condition that the draw axis should be placed symmetrically in the middle between the rope wheel planes, properly twisted toward the machine. After analyzing previous surveys and discussions with the mine's power engineering department and surveying-geological department, it turned out that, in the context of the displacement of the grooves relative to each other, the deflection of the tower does not play a role because it shows a vector directed toward the winding machine. The research showed that the deflection of the object was within the permissible range (1/500 of the tower height, in accordance with the relevant Regulation [21]). The measured displacement of the groove axis on the drive wheel related to the draw axis at the wheel level was 166 mm, whereas the displacement of the groove axis on the drive wheel in relation to the pull axis at the level of the propeller wheel was 425 mm for the underlap rope's wheel and 47 mm for the overlap rope's wheel (Figure 10). The geometry of the object also indicates that the inclination values of the ropes going down to the shaft may be problematic in the future. Therefore, the values indicate that the shaft tower should be rectified first. Only after that should the whole hoist assembly be re-measured and the pulley assembly rectified in accordance with the results of the inventory.

Conclusions

The Polish law [21] imposes appropriate deadlines for inventorying the geometry of the hoist assembly. Every five years, the inclination of the shaft tower, the geometrical relationships between the pulleys and the winding machine, as well as the straightness of the guides and the shaft geometry should be determined. In the opinion of the team of authors, such a study should combine checking the inclination of the shaft tower together with the shaft and winding machine, due to the possibility of conducting a wider, holistic analysis. In addition, it is worth mentioning that many mines decide to measure the tower inclination more often, even every year. Thanks to the technological progress in the field of hardware and software, the appropriate selection of measurement methods allows faster and more accurate inventories. The presented examples propose a modern approach to the process of documenting mine objects. Moreover, the collected data can later be used for renovation, extension, or decommissioning design work. The context of a hoist assembly with a friction Koeppe winder is particularly demanding. An axis offset of several centimeters can affect the geometry and fluidity of the entire construction.
As mentioned in this article, the costs of the improper operation of such a system can be very high, and the usage tedious. It is therefore particularly important to systematically control geometrical relationships and conduct possible rectification. The limitations resulting from the accuracy of the measurement with a laser scanner and combining subsequent scans indicate the need to use methods such as precision leveling and tacheometric measurement of the matrix and connection points. As shown in the article, the errors from the combination of the above methods are several orders lower than the limit values for the use of objects.
5-dimensional Myers-Perry Black Holes Cannot be Over-spun by Gedanken Experiments We apply the new version of gedanken experiment designed recently by Sorce and Wald, to over-spin the 5-dimensional Myers-Perry black holes. As a result, the extremal black holes cannot be over-spun at the linear order. On the other hand, although the nearly extremal black holes could be over-spun at the linear order, this process is shown to be prohibited by the quadratic order correction. Thus no violation of the weak cosmic censorship conjecture occurs around the 5-dimensional Myers-Perry black holes. Introduction When a singularity is not hidden behind a black hole horizon, such as to be seen by a distant observer, then it is called a naked singularity.The weak cosmic censorship conjecture (WCC) claims that naked singularity cannot be formed generically through gravitational collapse with physically reasonable matter [1].Even though there is still no general proof for this conjecture for the 4-dimensional asymptotically flat spacetime, the supporting evidence has been accumulated and discussed for a few decades [2].Especially in 1974, Wald suggested a gedanken experiment to test WCC by examining whether the black hole horizon can be destroyed by injecting a point particle [3].As a result, such a gedanken experiment turns out to be in favor of WCC. However, there are two crucial assumptions underlying in the aforementioned gedanken experiment.First, the black hole in consideration is extremal in its initial state.Second, the analysis is performed only at linear order of the point particle's energy, angular moment, and charge.The violation of WCC occurs when one releases either of these two assumptions.In particular, as initiated by Hubeny in 1999 [4], one can show that a nearly extremal Kerr-Newman black hole can be both over charged and over spun [5,6,7,8,9].In addition, when one takes into account the higher order terms in the energy, angular momentum, and charge of the test particle, an extremal Kerr-Newman black hole can even be destroyed [10].But nevertheless, these results may not indicate a true violation of WCC.Instead, in all of these situations, the test particle assumption may not be valid any more, so WCC may be restored when one carefully takes into consideration the self-force and finite-size effects [11,12,13,14,15]. Motivated by this, Sorce and Wald have recently designed a new version of gedanken experiment [16].Rather than analyzing the motion of the particle matter to obtain the condition for it to be absorbed by the black hole, they apply Wald formalism to completely general matter and obtain the first order variational inequality for the mass of the black hole by simply requiring the null energy condition on the horizon for the general matter, which reduces to that obtained in the old version of gedanken experiment when one regards the particle matter as the limiting case of the general matter1 .Moreover, when the initial black hole is non-extremal, they also obtain a lower bound for the second order variation of the mass of the black hole, which somehow incorporates both the self-force and finite-size effects and can be used to prove that no violation of the Hubeny type can ever occur.This result further strengthens the belief in the validity of WCC in the 4-dimensional asymptotically flat spacetime. 
Nevertheless, 4-dimensional black holes have many remarkable properties. It is natural to ask whether these properties are general features of black holes or whether they are unique to four dimensions. For example, neither the uniqueness theorem nor the spherical topology of the horizon persists for black holes in higher dimensions [18,19,20]. Regarding WCC, fully non-linear numerical simulations have indicated that not only the 5-dimensional black strings and black rings but also the 6-dimensional Myers-Perry black holes can be destroyed by perturbations, with the horizons pinching off into a generic formation of a naked singularity [21,22,23]. However, to the best of our knowledge, there has so far been no numerical evidence for a similar formation of a naked singularity from perturbations of the 5-dimensional Myers-Perry black holes. This leads naturally to a restricted version of WCC in the 5-dimensional asymptotically flat spacetime, namely that generic perturbations of the 5-dimensional Myers-Perry black holes do not give rise to the formation of a naked singularity. Note that the gedanken experiment, no matter whether it is the new version or the old one, does not appeal to full-blown numerical relativity, so it is rewarding to check the validity of our restricted version of WCC by gedanken experiments. It has in fact been shown in [24] that the 5-dimensional Myers-Perry black holes cannot be over-spun by the old gedanken experiment. However, to obtain this result analytically, the scenario considered in [24] not only restricts to either singly rotating or equally rotating black holes but also focuses exclusively on a test particle falling in along the equator. As alluded to before, compared to the old one, the new gedanken experiment does not require us to analyze the motion of bodies to determine what kind of trajectories will or will not be captured by the black hole horizon, so it is desirable to check the validity of such a WCC around the 5-dimensional Myers-Perry black holes in a more general setting by performing the new gedanken experiment. This is the purpose of the current paper. As a result, general 5-dimensional Myers-Perry black holes cannot be over-spun by a generic matter perturbation; thus our restricted version of WCC holds in the 5-dimensional asymptotically flat spacetime. The paper is organized as follows. In Section 2, we review the well-established Iyer-Wald formalism for any diffeomorphism covariant theory in any dimension, in particular the first and second order variational identities. In Section 3, we restrict ourselves to the 5-dimensional Einstein theory and introduce the 5-dimensional Myers-Perry black holes. Here, taking into account that the relevant quantities for the 5-dimensional Myers-Perry black holes are presented in the previous literature without an explicit derivation, we relegate such a derivation to Appendices A and B. In addition, we also rewrite these quantities in a convenient way for the later calculation. Then in Section 4, we follow the idea in [16] to present the set-up for the new version of the gedanken experiment, in particular the first order perturbation inequality, as well as the second order perturbation inequality for the optimal first order perturbation of non-extremal black holes. With the above preparation, we conduct such a gedanken experiment to over-spin the extremal and nearly extremal 5-dimensional Myers-Perry black holes in Section 5.
We conclude our paper in the last section with some discussions. Iyer-Wald Formalism and Variational Identities Rather than the ordinary Lagrangian scalar L constructed locally out of the metric g ab , its Riemann curvature, and other matter fields ψ as well as their symmetrized covariant derivatives, we prefer to start from a diffeomorphism covariant theory in an n-dimensional spacetime M with a Lagrangian n-form L = Lǫ a1a2...an , where ǫ a1a2...an is the canonical volume element associated with the metric g ab [25]. If we denote φ = (g ab , ψ) as all dynamical fields, then the variation of the Lagrangian gives rise to a bulk term and an exact term, where the equations of motion read E = 0, and the (n − 1)-form Θ is called the symplectic potential form. The symplectic current (n − 1)-form is defined in terms of a second variation of Θ. Associated with an arbitrary smooth vector field χ a on the spacetime M, one can further define a Noether current (n − 1)-form J χ . A straightforward calculation shows that J χ is closed when the equations of motion are satisfied. Furthermore, it is shown in [26] that the Noether current can always be expressed as the sum of an exact form and a constraint term, where Q χ is called the Noether charge and C χ = χ a C a is called the constraint of the theory, which vanishes when the equations of motion are satisfied. Now, by keeping χ a fixed and comparing the variations of (3) and (5), we end up with the fundamental variational identity (6). In what follows, we shall focus exclusively on the case in which φ represents the exterior solution of a stationary black hole with ξ a the horizon Killing field satisfying L ξ φ = 0, ξ a = (∂/∂t) a + Ω I (∂/∂ϕ I ) a , where (∂/∂ϕ I ) a are Killing vector fields with closed orbits, and Ω I are the corresponding angular velocities of the horizon. Then the variation of (6) gives rise to the second order identity. Suppose that Σ is a hypersurface with a cross section B of the horizon and spatial infinity as its boundaries; then it follows from (6) that the first order variational identity holds on Σ, where we have resorted to the fact that the variation of the ADM conserved quantity H χ conjugate to an asymptotic Killing vector field χ a , if it exists, is given by the corresponding boundary expression, with M the ADM mass conjugate to (∂/∂t) a and J I the ADM angular momenta conjugate to (∂/∂ϕ I ) a . Similarly, it follows from (8) that the second order variational identity holds on Σ, where we have used the definition of the canonical energy of the perturbation δφ on Σ. 3 5-Dimensional Einstein Theory and Myers-Perry Black Holes For our purpose, we now specialize to the 5-dimensional Einstein theory, i.e., L = Rǫ/(16π).
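For reference, the key relations of the formalism just described can be written out in their standard form (following the conventions of [25,26]; this is a summary of the well-known expressions rather than a reproduction of the paper's own displayed equations, whose normalization may differ):

% Standard Iyer--Wald relations (schematic summary, usual conventions)
\begin{align}
  \delta \mathbf{L} &= \mathbf{E}\,\delta\phi + d\,\boldsymbol{\Theta}(\phi,\delta\phi), \\
  \boldsymbol{\omega}(\phi;\delta_1\phi,\delta_2\phi)
    &= \delta_1\boldsymbol{\Theta}(\phi,\delta_2\phi)
     - \delta_2\boldsymbol{\Theta}(\phi,\delta_1\phi), \\
  \mathbf{J}_\chi &= \boldsymbol{\Theta}(\phi,\mathcal{L}_\chi\phi) - \chi\cdot\mathbf{L},
  \qquad d\mathbf{J}_\chi = -\,\mathbf{E}\,\mathcal{L}_\chi\phi, \\
  \mathbf{J}_\chi &= d\mathbf{Q}_\chi + \chi^a\,\mathbf{C}_a, \\
  \xi^a &= \Bigl(\frac{\partial}{\partial t}\Bigr)^{\!a}
         + \Omega^I\Bigl(\frac{\partial}{\partial \varphi_I}\Bigr)^{\!a}, \\
  \delta H_\chi &= \int_{\infty}
     \bigl(\delta\mathbf{Q}_\chi - \chi\cdot\boldsymbol{\Theta}(\phi,\delta\phi)\bigr),
  \qquad \delta H_\xi = \delta M - \Omega^I\,\delta J_I .
\end{align}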
Whence we have with G ab Einstein tensor, and the symplectic potential 4-form The corresponding symplectic current reads where with Taking L χ g ab = ∇ a χ b + ∇ b χ a into consideration and by a straightforward calculation, we can further obtain the Noether current as By comparing it with (5), one can readily identify the Noether charge and As to a 5-dimensional spacetime which is asymptotically flat in the sense that in a Lorentzian coordinate system {x} of flat metric η ab the metric behaves as near the spatial infinity, one can show there exists a 4-form B such that the ADM mass is given by where r a = ( ∂ ∂r ) a and h ij is the spatial metric with the index raised and the tensor traced both by the background Euclidean metric δ ij .On the other hand, it is easy to see that the ADM angular momentum is given simply by The higher dimensional generalization of asymptotically flat stationary Kerr black hole solution to the vacuum Einstein equation was first obtained by Myers and Perry [27], and its 5-dimensional version reads with where ϕ I ∈ [0, 2π] and θ ∈ [0, π 2 ].As shown in Appendix A, the parameters µ and a I are related to the ADM mass and angular momenta respectively as Without loss of generality, we shall constrain a I to be non-negative in later discussions.The spacetime singularity is located at Ξ = 0 as the squared Riemann tensor is given by While Π − µr2 = 0 is simply the coordinate singularity, and its roots can be expressed as follows which are real if and only if where the largest root r H designates the black hole event horizon with the area As calculated out in Appendix B, the corresponding angular velocity and surface gravity of the horizon are given by In particular, corresponds to the extremal Myers-Perry black holes 2 .On the other hand, for the Myers-Perry metric describes a naked singularity. For our later convenience, we would like to rewrite the condition for the existence of the horizon in terms of the ADM mass and angular momenta as By the same token, the relevant quantities associated with the horizon can be expressed as where Obviously, α → 0 corresponds to the near extremal limit. Null Energy Condition and Perturbation Inequalities As the new gedanken experiment designed in [16], the situation we plan to investigate is what happens to the above Myers-Perry black holes when they are perturbed by a one-parameter family of the matter source according to Einstein equation around λ = 0 with T ab (0) = 0. 
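For later use, the standard expressions for the 5-dimensional Myers-Perry quantities introduced above can be collected as follows. This is a reference summary in units G = 1 with the usual parametrization; the paper's own normalization, fixed in its Appendices A and B, may differ by convention.

% Standard 5D Myers--Perry relations (G = 1); reference summary, conventions may vary.
\begin{align}
  M &= \frac{3\pi\mu}{8}, \qquad
  J_I = \frac{2}{3}\,M a_I = \frac{\pi\mu a_I}{4}, \\
  \Pi &= (r^2+a_1^2)(r^2+a_2^2), \qquad \Pi - \mu r^2 = 0 \ \ \text{at the horizons}, \\
  r_\pm^2 &= \frac{1}{2}\Bigl[\mu - a_1^2 - a_2^2
              \pm \sqrt{(\mu - a_1^2 - a_2^2)^2 - 4a_1^2 a_2^2}\,\Bigr], \\
  \text{horizon existence:}\quad
  & \mu \ge (a_1+a_2)^2
    \;\Longleftrightarrow\; 32 M^3 \ge 27\pi\,(J_1+J_2)^2, \\
  A_H &= \frac{2\pi^2 (r_H^2+a_1^2)(r_H^2+a_2^2)}{r_H} = 2\pi^2\mu\,r_H, \qquad
  \Omega_H^I = \frac{a_I}{r_H^2+a_I^2}, \\
  \kappa &= r_H\Bigl(\frac{1}{r_H^2+a_1^2}+\frac{1}{r_H^2+a_2^2}\Bigr)-\frac{1}{r_H},
  \qquad
  \text{extremality: } \mu=(a_1+a_2)^2,\ r_H^2=a_1a_2,\ \kappa=0,\
  \Omega_H^1=\Omega_H^2=\frac{1}{a_1+a_2}.
\end{align}

In particular, at extremality the two angular velocities coincide, and a short computation shows that the linearized horizon-existence condition then reduces exactly to δM ≥ Ω_H^I δJ_I, which is why the saturated first-order inequality forbids over-spinning at linear order in Section 5.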
Without loss of generality but for simplicity, we shall assume all the matter goes into the black hole through a finite portion of the future horizon.With this in mind, we can always choose a hypersurface Σ = H ∪ Σ 1 such that it starts from the very early cross section of the unperturbed horizon B 1 where the perturbation vanishes, continues up the horizon through the portion H till the very late cross section B 2 where the matter source vanishes, then becomes spacelike as Σ 1 to approach the spatial infinity.In addition, we would like to work with the Gaussian null coordinates near the unperturbed horizon as where v = 0 denotes the location of the unperturbed horizon, u is the affine parameter of future directed null geodesic generators of r = 0 surface for any metric in the family, π a and q ab are orthogonal to k a = ( ∂ ∂u ) a and l a = ( ∂ ∂v ) a .As one can show, this choice of coordinates follows [29] B1 if we further choose the bifurcate surface of the unperturbed horizon as B 1 in what follows when the black hole in consideration is non-extremal.With the above preparation, ( 9) reduces to where ǫ is the induced volume element on the horizon, satisfying ǫ eabcd = −5k [e ǫabcd] .Now if the null energy condition is satisfied such that δT ab k a k b | H ≥ 0, we have the first order perturbation inequality as when the first order perturbation is optimal, namely saturates the above inequality, it obviously requires δT ab k a k b = 03 .Whence the first order perturbation of Raychaudhuri equation tells us that δϑ = 0 on the horizon if we choose a gauge in which the first order perturbed horizon coincides with the unperturbed one.Then it follows from (11) that Here ǫabc = k d ǫdabc is the induced area volume on the cross section of the horizon.In addition, we have employed k e δg ef | H = 0 in the second step, borrowed the result from [29] for E H (φ, δφ) in the third step, and used the reasonable assumption that our black hole is linearly stable in the fourth step [30], such that the first order perturbation will drive the system towards another Myers-Perry black hole at sufficiently late times, leading to the vanishing δσ cd at B 2 .In the last step, we have again resorted to the null energy condition for the second order perturbation of matter source on the horizon.Now we are left out to calculate E Σ1 (φ, δφ).To achieve this, we follow the trick invented in [16], and write E Σ1 (φ, δφ) = E Σ1 (φ, δφ MP ), where δφ MP is induced by the variation of a family of Myers-Perry black holes with δM and δJ I chosen to be in agreement with the firs order variation of the above optimal perturbation by the matter source.Note that for this family, we have Thus applying (11) to this family, we have Note that ξ a = 0 at the bifurcation surface B 1 of a non-extremal black hole, thus we can further employ (40) to obtain where with Therefore we end up with our second order perturbation inequality which, as demonstrated in [16], has incorporated the self-force and finite-size effects. 5 Gedanken Experiments to Over-spin a 5-Dimensional Myers-Perry Black Hole In this section, we will explore the gedanken experiments to over-spin both an extremal black hole and a nearly extremal black hole by the physical process described above. 
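Before turning to the extremal and nearly extremal cases, the two inequalities just obtained can be summarized in the standard Sorce-Wald form. The schematic expressions below are the generic statements for a vacuum black hole perturbed by matter obeying the null energy condition; the concrete right-hand side used in the paper follows from evaluating the second-order area change on the auxiliary Myers-Perry family described above.

% Perturbation inequalities in the standard Sorce--Wald form (schematic).
\begin{align}
  \text{first order:}\qquad
  & \delta M - \Omega_H^{I}\,\delta J_I
     \;=\; \int_{\mathcal{H}} \tilde{\boldsymbol{\epsilon}}\;
           \delta T_{ab}\,\xi^a k^b \;\ge\; 0, \\
  \text{second order:}\qquad
  & \delta^2 M - \Omega_H^{I}\,\delta^2 J_I
     \;\ge\; -\,\frac{\kappa}{8\pi}\,\delta^2 A_{B_1}^{\mathrm{MP}},
\end{align}
% where the second line holds for an optimal (saturating) first-order perturbation of a
% non-extremal black hole, and \delta^2 A_{B_1}^{\mathrm{MP}} denotes the second-order change
% of the bifurcation-surface area evaluated on the auxiliary Myers--Perry family sharing the
% same (\delta M, \delta J_I).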
For an extremal black hole, the inequality (35) is saturated, i.e., (35) will be violated if we can perturb the black hole so that However, when the black hole is extremal, the angular velocity becomes Then our first order perturbation inequality tells us that (52) cannot be satisfied, thus an extremal 5-dimensional Myers-Perry black hole cannot be over-spun by our gedanken experiment.Now let us turn to the nearly extremal Myers-Perry black hole, which is characterized by the small α compared to √ 32M 3 .To proceed, we define a function of λ as for the aforementioned one-parameter family of perturbation by our gedanken experiment with f (0) = α 2 . If we can find an appropriate small value of λ so that f (λ) < 0, then our nearly extremal black hole will be over-spun.We shall assume the first order perturbation is optimal, i.e., and expand f (λ) to the quadratic order in both λ and α as where and If the O(λ 2 ) term is ignored, then it is not hard to see that it is possible to make f (λ) < 0 such that our black hole can be over-spun.However, if we take into account the O(λ 2 ) term, then miraculously we have for the optimal first order perturbation.Thus we can conclude that when the second order correction is taken into consideration, a nearly extremal 5-dimensional Myers-Perry black hole cannot be over-spun either. Conclusion We have performed the new version of gedanken experiment to check the restricted version of WCC in the 5-dimensional asymptotically flat spacetime by trying to over-spin the 5-dimensional Myers-Perry black holes.As a result, no violation of such a WCC is found at the linear order for an extremal 5-dimensional Myers-Perry black hole.While for a nearly extremal 5-dimensional Myers-Perry black hole, we find that the violation of Hubeny type occurs most dangerously under the optimal first order perturbation but our WCC is restored miraculously by the second order perturbation inequality.Our result indicates that the 5-dimensional Myers-Perry black holes, once formed, will never be over-spun classically.
Escape-Zone-Based Optimal Evasion Guidance Against Multiple Orbital Pursuers The orbital evasion problem is getting increasing attention because of the increase of space maneuvering objects. In this article, an escape-zone-based optimal orbital evasion guidance law for an evading spacecraft on near circular reference orbit is proposed against multiple pursuing spacecraft with impulsive thrust. The relative reachable domain is introduced first and approximated as an ellipsoid propagating along the nominal trajectory under the short-term assumption. The escape zone for the impulsive evasion problem is presented herein as a geometric description of the set of terminal positions for all the impulsive evasion trajectories that are not threatened by the maneuvers of pursuers at the maneuver moment. A general method is developed next to calculate the defined escape zone through finding the intersection of two relative reachable domain approximate ellipsoids at arbitrary intersection moment. Then, the two-sided optimal strategies for the orbital evasion problem are analyzed according to whether the escape zone exists, based on which the escape value is defined and used as the basis of the proposed orbital evasion guidance scheme. Finally, numerical examples demonstrate the usefulness of the presented method for calculating escape zone and the effectiveness of the proposed evasion guidance scheme against multiple pursuing spacecraft. I. INTRODUCTION As the number of near-Earth space objects increases, there is growing attention to the importance of the orbital evasion problem.The problem of orbital evasion originates from the problem of orbital rendezvous and interception, in which the spacecraft needs to evade other maneuvering or nonmaneuvering space objects that may be about to collide through its own control [1]. Early studies on orbital evasion were mostly carried out against nonmaneuvering space objects.In these research, the evasion strategies were normally generated by maximizing an optimization index, such as the terminal miss distance [2], [3] or the collision probability [4].However, these evasion strategies solved by the one-sided optimization cannot be used against a maneuvering space object since the possible maneuvers of the space object were not considered in these research. Taking into account the possible maneuvers of the space objects, the traditional collision avoidance problem will become an orbital pursuit-evasion (OPE) problem.The differential game method [5] with maneuver assumptions for both pursuer and evader is more suitable to deal with this two-sided optimal control problem. 
Since the differential game theory [6], [7] was put forward, many studies on OPE games have been carried out.Analytical methods, for example, the closed-form solution of barrier [8], and numerical methods, such as the semidirect collocation nonlinear programming method [9], the multiple shooting method [10], and the dimension-reduction method [11], are two widely used types of methods for solving an OPE game.Moreover, the nonlinear control for OPE game was realized using the state-dependent Riccati equation method in [12].Two optimal guidance methods, namely, the Cartesian model and the spherical model, for the long-distance OPE game were proposed in [13].The PE differential game for satellites with continuous thrust was investigated from the viewpoint of reachable domain in [14].Many other works have focused on solving the saddle point solution of the game quickly and efficiently [15], [16], [17], [18], as well as the more complex dynamic environment [19], [20] and information structure [21]. It should be noted that all of the research abovementioned focused on the two-player OPE games involving one evading spacecraft and one pursuing spacecraft with continuous thrust [22].Almost few studies have been conducted on the orbital evasion strategy against multiple pursuers.Meanwhile, considering the fact that impulsive thrust is still the main form of spacecraft maneuver nowadays, it is required to investigate the impulsive evasion strategy.Chandrakanth and Scheeres [23] conducted research aimed at the two-player impulse OPE game, while the assumption of two-impulse transfers is not suitable for short-distance PE.Overall, there is still a gap in the research concerning the impulsive evasion strategy for the short-distance scenario against multiple pursuing spacecraft. According to the initial relative distance between the pursuing spacecraft and evading spacecraft, the orbital evasion problems can be divided into two scenarios [23]: long-distance evasion scenario and short-distance evasion scenario.In contrast, the short-distance evasion scenario, which is closely related to the relative motion state between the pursuing spacecraft and evading spacecraft, is the final stage of the complete orbital evasion process, and has the characteristics of strong adversarial and high real-time requirements.This article works on the short-distance orbital evasion problem against multiple pursuers with impulsive thrust.There are two main contributions of this article.The first contribution of this work is to propose the concept of escape zone (EZ) for an impulsive orbital evasion problem against multiple pursuers, and present an effective method for calculating the EZ at any decision moment by introducing the approximate relative reachable domain (RRD) ellipsoid for orbital impulsive maneuver.Second, a two-sided optimization process based on the calculated EZ is performed to generate the optimal pursuit-evasion strategies, which provides a possible idea for solving impulsive orbital games. The rest of this article is organized as follows.Section II introduces the approximate RRD, and presents the concept of EZ in this article.A method based on the approximate RRD is proposed in Section III to calculate the defined EZ.In Section IV, an escape-zone-based optimal evasion guidance law is proposed based on the escape value (EV) defined through the two-sided optimal strategies analysis.Numerical examples are provided in Section V. Finally, Section VI concludes this article. A. 
Relative Reachable Domain Approximate Ellipsoid The RRD concept is introduced in this article to determine the potential relative state reach set of a spacecraft under given maneuverability. Different from the concepts in [24] and [25], an RRD denoted by D(X 0 , V max , t) here is described as the set of all possible relative positions that can be transferred to after time t from the initial relative state X 0 under the maximum available velocity increment V max . Assuming that the reference orbit is circular, the RRD can be established based on the linearized dynamics, i.e., the CW equations [26], which have been proved sufficiently accurate to model close-range relative motion [25], as given in (1), where s = sin(ωt), c = cos(ωt), ω is the mean angular motion of the reference orbit, x 0 , y 0 , z 0 , ẋ0 , ẏ0 , and ż0 are the components of the initial relative position and velocity vectors in the reference local-vertical, local-horizontal (LVLH) frame, and ξ V and η V are two angles characterizing the direction of the initial impulse vector in the reference LVLH frame. Obviously, the envelope of the RRD described by (1) is exactly the boundary of the time-constrained reachable relative state distribution mentioned in [24] and [25] at a given time t, without considering the initial position uncertainty. Specifically, the commonality between them is that the fixed-time RRD considered in [24] and [25] will be the same as that in this article if the initial position uncertainties are set to 0 and the velocity increment applied at the decision moment is treated as the initial velocity uncertainty. The difference between them is that the work presented in [24] and [25] focuses on solving the accurate inner and outer boundaries of the RRD in any direction within a relative motion time range, while this article focuses on efficiently solving the approximate envelope of the RRD at any given time for quick analysis of the positional relationship between the RRDs of different participants in the OPE games. When the relative motion time is much smaller than the period of the reference orbit, the envelope of the RRD at time t can be approximated as in (4). It should be noted that the approximation in (4) occurs only in the x-y reference plane, not in the z direction. Specifically, the right side of the first formula in (1) is approximated by (κ xy V max cos η V ) 2 . Let F 1 (t) and F 2 (t) denote the former and the latter expression, respectively. F 1 (0) = F 2 (0) and lim t→0 |(F 2 − F 1 )/F 1 | = 0 can be proved, which demonstrates the equivalence of F 1 and F 2 at t = 0. Furthermore, a quantitative analysis is given as follows to investigate how small the motion time must be for this approximation to remain valid. Suppose that the height of the reference orbit is 35 786 km and that V max = 10 m/s. For each time t taken from 0 s to 7% of the reference orbital period T r , a total of 1 000 000 Monte Carlo runs calculating the absolute value of the relative error (ARE) of the relative reachable distances are performed with different impulse directions ξ V and η V . The results of the Monte Carlo runs are shown in Fig. 1. As seen in Fig. 1, the mean ARE is less than 3% (considered as the maximum value for an acceptable approximation error) when t is less than about 6.665%T r = 5741.372 s. Similarly, the maximum ARE is less than 3% when t is less than about 4.797%T r = 4132.237 s. The results for other orbital heights and impulse magnitudes are analogous to those in Fig. 1, some of which are given in Tables I and II.
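A minimal numerical sketch of the kind of Monte Carlo check described above is given below. It uses the standard impulse-only Clohessy-Wiltshire response (x radial, y along-track, z cross-track) rather than the paper's exact expressions (1)-(5), and the function names and printed quantities are illustrative assumptions: the script only shows that, for motion times much shorter than T_r, the in-plane reach is nearly isotropic (so a single radius κ_xy V_max is a reasonable approximation) while the out-of-plane reach follows sin(ωt)/ω.

import numpy as np

MU_EARTH = 398600.4418            # km^3/s^2
R_REF = 6378.137 + 35786.0        # km, reference orbit radius for the 35 786 km height example
OMEGA = np.sqrt(MU_EARTH / R_REF**3)   # mean angular motion [rad/s]
DV_MAX = 0.010                    # km/s, i.e. the 10 m/s maximum impulse of the example

def cw_impulse_response(t, dv):
    """Relative position after time t for an impulse dv applied from a zero initial
    relative state (standard CW solution; x radial, y along-track, z cross-track)."""
    s, c = np.sin(OMEGA * t), np.cos(OMEGA * t)
    dvx, dvy, dvz = dv
    x = (s / OMEGA) * dvx + (2.0 / OMEGA) * (1.0 - c) * dvy
    y = (2.0 / OMEGA) * (c - 1.0) * dvx + (4.0 * s / OMEGA - 3.0 * t) * dvy
    z = (s / OMEGA) * dvz
    return np.array([x, y, z])

def inplane_reach(t, n=20000, seed=0):
    """Sampled reach in the x-y plane for maximum impulses confined to that plane."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n)
    pts = np.array([cw_impulse_response(t, DV_MAX * np.array([np.cos(a), np.sin(a), 0.0]))
                    for a in angles])
    return np.hypot(pts[:, 0], pts[:, 1])

if __name__ == "__main__":
    T_r = 2.0 * np.pi / OMEGA
    for frac in (0.01, 0.03, 0.06665):          # fractions of the reference period
        t = frac * T_r
        rho = inplane_reach(t)
        z_semi_axis = np.sin(OMEGA * t) / OMEGA * DV_MAX
        spread = (rho.max() - rho.min()) / rho.mean()
        print(f"t = {frac:.3%} T_r: in-plane reach {rho.min():.2f}-{rho.max():.2f} km "
              f"(relative spread {spread:.1%}), out-of-plane semi-axis {z_semi_axis:.2f} km")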
Accordingly, the following conclusions are obtained about the approximate envelope defined in (4). 1) The approximate envelope of the RRD at time t is an ellipsoid of revolution (termed RRDE) in the reference LVLH frame, which is centered on the nominal relative position at time t, with a = b = κ xy V max and c = κ z V max as the major and minor semi axes, respectively, as shown in Fig. 2. 2) The error caused by the approximation is acceptable only when t is much smaller than T r (termed shortterm assump-tion).Specifically, t needs to be less than about τ 1 = 6.665%T r or τ 2 = 4.797%T r so as to ensure that the mean ARE or maximum ARE is less than 3%, respectively.3) Under the short-term assumption, a and c both increase monotonically with increasing time t. B. Problem Statement and Escape Zone Definition Consider an orbital evasion problem against N pursuing spacecraft.Each participating spacecraft is controlled by a three-dimensional impulse input at each decision moment, which represents the thrust effect during this decision period.The pursuing spacecraft attempts to satisfy some certain intercept conditions, for example, position matching with the evader.Contrarily, the evader needs to avoid entering this terminal interception range through its own control. Before continuing, the following assumptions are made. 1) All the participating spacecraft are assumed to move in a two-body gravitational field, without considering any forms of perturbations and real-world noises. 2) The maneuverability of the evader is assumed to be stronger than that of the pursuer to ensure escape is possible [27].Specifically, two sides of the game are assumed to maneuver at the same time [28].Meanwhile, the decision periods for the pursuers and evader are the same, but the maximum available velocity increment of the single impulse for the evader is slightly larger than that of the pursuer. 3) The observation information is assumed to be complete, that is, each participating spacecraft can accurately acquire the status of other spacecraft in real time. As shown in Fig. 3, the evasion process starts from the moment t 0 when the minimum distance R min between the pursuers and evader is less than a given alert distance R alert , for example, 100 km, and terminates when R min is less than the minimum valid interception distance R intercept of pursuers (evasion failure) or when R min is larger than R alert again (evasion success).Therefore, the entire pursuitevasion motion can be considered as the close range relative motion.For the above orbital evasion problem, a unified time RRD ellipsoid of the evader (termed UTRRDE) at time t U is proposed first, where t U is a unified time introduced to unify different RRD ellipsoids for different intersection moments herein.There is no strict constraint on the selection of the unified time t U as long as it is larger than the decision moment t 0 when the impulse is applied, i.e., the relative motion time (t Ut 0 ) is positive, to ensure that the geometric parameters a and c of UTRRDE are larger than 0. However, considering the approximation error caused by the RRD ellipsoid, it is suggested to select a unified time less than the maximum valid relative motion time τ 1 = 6.665%T r determined in Section II-A to reduce the guidance error. 
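As a quick arithmetic check of the time limits quoted above, the thresholds τ 1 and τ 2 can be evaluated for the 35 786 km orbit height used in the earlier example (the Earth constants below are the usual textbook values and are an assumption of this sketch, not taken from the paper):

import math

MU = 398600.4418          # km^3/s^2, Earth gravitational parameter
R_EARTH = 6378.137        # km, Earth equatorial radius
a = R_EARTH + 35786.0     # km, semi-major axis for the quoted orbit height

T_r = 2.0 * math.pi * math.sqrt(a**3 / MU)   # reference orbital period [s]
tau1 = 0.06665 * T_r      # limit for mean ARE < 3%
tau2 = 0.04797 * T_r      # limit for maximum ARE < 3%

print(f"T_r  = {T_r:8.1f} s")
print(f"tau1 = {tau1:8.1f} s  (about 5741 s in the text)")
print(f"tau2 = {tau2:8.1f} s  (about 4132 s in the text)")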
In this article, the corresponding point M TU on the UTRRDE for a point M on the RRDE E at time t is defined as the terminal relative position point after time t U on the impulsive relative trajectory (termed the characteristic trajectory), along which the evader can transfer to M after time t. The corresponding point M TU 1 of a point M 1 on the intersection curve I 1 of two RRDEs at time t can be calculated as follows: The velocity increment of the characteristic trajectory for M 1 and M TU 1 is computed first by where ) represents the state transition matrix from the initial impulse to the relative position at time t in the reference orbital frame, and x M 1 , y M 1 , and z M 1 are components of the relative position vector of M 1 in the reference LVLH frame.Then, the corresponding point M TU 1 can be obtained by In this regard, the concept of (EZ here proposed for the above-mentioned evasion problem is defined as follows: Let t U t denote the EZ at decision moment t with respect to the unified moment t U , where t U > t.Define the initial reference orbit with the initial orbit of evader.As shown in Fig. 4, t U t is a combined area on the RRDE at time (t U -t), and all transfer trajectories that can reach to this area after time (t U -t) through a single velocity impulse, the magnitude of which is not larger than V max , can escape the interception of pursuers. III. ESCAPE ZONE CALCULATION In this section, a method for finding the intersection of two RRDEs is presented first.The largest threatened area on the RRDE of evader by each pursuing spacecraft is solved then, and the EZ is eventually obtained by successively removing these areas from the entire ellipsoid. A. Finding Intersection of Two RRDEs Let the initial time t 0 = 0. Define the initial reference orbit with the initial orbit of evader, in which way the center of the RRDE of evader will always be located at the origin of the reference orbital frame.Suppose that the RRDE of pursuer P (RRDE P ) intersects that of evader E (RRDE E ) at time t, as shown in Fig. 5.The intersection of two RRDEs represented by the red line I 1 in Fig. 5 can be solved as follows. Let O P and O E denote the center of the RRDE P and RRDE E , respectively.For the plane Q 0 determined by the line O P O E and the Z-axis of the reference frame, the components of its normal vector n 0 in the reference LVLH frame are where x cP , y cP , and z cP are the components of the nominal relative position vector of P at time t in the reference LVLH frame. The plane Q defined by is obtained by rotating Q 0 about the line O P O E by angle α. According to Rodrigues' rotation formula [29], the parameters p, q, and r in ( 10) can be computed by (11) Let a P , c P and a E , c E denote the geometric parameters of RRDE P and RRDE E , respectively.Then, the ellipsoids of P and E can be defined by The plane Q intersects the RRDE P and RRDE E at the curve I P and I E , respectively, as shown by the green lines in Fig. 5. 
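Before working out the intersection curves I P and I E, it may help to make the UTRRDE projection of (6)-(8) concrete, since every intersection curve found below is ultimately mapped through it. The sketch below inverts the impulse-to-position part of the CW state-transition matrix to recover the characteristic impulse and then propagates it to the unified time; phi_rv and the function names are illustrative, and the axis convention (x radial, y along-track) is the standard CW one rather than necessarily the paper's.

import numpy as np

def phi_rv(omega, t):
    """Impulse-to-position CW sub-matrix: r(t) = phi_rv(omega, t) @ dv for a zero
    initial relative state (x radial, y along-track, z cross-track)."""
    s, c = np.sin(omega * t), np.cos(omega * t)
    return np.array([
        [s / omega,               2.0 * (1.0 - c) / omega,    0.0],
        [2.0 * (c - 1.0) / omega, 4.0 * s / omega - 3.0 * t,  0.0],
        [0.0,                     0.0,                        s / omega],
    ])

def corresponding_point_on_utrrde(r_m1, omega, t, t_unified):
    """Map a point M1 on the RRDE at intersection time t to its corresponding point
    on the UTRRDE: recover the characteristic impulse (cf. (7)), then propagate it
    to the unified time t_unified (cf. (8))."""
    dv = np.linalg.solve(phi_rv(omega, t), r_m1)      # characteristic impulse
    return phi_rv(omega, t_unified) @ dv              # terminal position on the UTRRDE

if __name__ == "__main__":
    omega = 7.2921e-5                     # rad/s, roughly the near-GEO mean motion of the example
    r_m1 = np.array([5.0, -3.0, 1.5])     # km, an illustrative point on the RRDE at time t
    print(corresponding_point_on_utrrde(r_m1, omega, t=1200.0, t_unified=3000.0))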
Evidently, the intersection curves I P and I E must be ellipses or circles [30].If and only if the plane Q is the equatorial plane of the RRDE, the intersection curve is a circle.Let By using the Householder transformation [31], the parametric equation [32] of the intersection curve I i can be given by where x cE = y cE = z cE = 0, and ϕ i ∈ [0, 2π ] is the phase parameter of each point on I i .Then, two intersection points M 1 and M 2 of I P and I E , both of which lie on the intersection curve I 1 of two RRDEs, can be calculated by solving the equations x IP = x IE , y IP = y IE , and z IP = z IE .However, this problem is difficult to solve analytically because of the high nonlinearity in (14).In this regard, the numerical solution is obtained here by minimizing the objective function that for design variables ϕ P ∈ [0, 2π ] and ϕ E ∈ [0, 2π ] through the nonlinear optimization method, for example, the Quasi-Newton method [33]. In order to improve the convergence of the abovementioned optimization process, the phases of the intersection points M 1 and M 2 of two circles obtained by the intersection of plane Q and each RRD approximate sphere (RRDS) are used to provide initial guesses forϕ i , as shown in Fig. 6, where the RRDS is a sphere whose center coincides with the center of RRDE and whose radius is equal to a of RRDE.Meanwhile, the Jacobian matrix of the objective function is provided by where and × {a 2 P pq cos ϕ P − [λ P (c P r {[λ P (c P r − λ P ) + (a P q) 2 ] cos ϕ P − a 2 P pq sin ϕ P } + c P λ P 2c P λ P (a P p cos ϕ P + a P q sin ϕ P ) + a E q sin ϕ E ) (a P q cos ϕ P − a P p sin ϕ P ).(17) In this manner, the intersection curve I 1 of the RRDE P and RRDE E at time t can be obtained ultimately by calculating the intersection points of I P and I E for every rotation angle α from 0 to π. Considering that the three main axes of RRDE P and those of RRDE E are parallel respectively, the symmetry can be utilized here to calculate the intersection points of I P and I E for α from π/2 to π by directly using the results for α from 0 to π/2 according to ⎧ ⎨ ⎩ where M 4 (x 4 , y 4 , z 4 ) is the symmetrical intersection point for α from π/2 to π of the intersection point M 3 (x 3 , y 3 , z 3 ) for α from 0 to π/2.Therefore, the computational effort can be eventually reduced by only calculating the intersection points of I P and I E for α from 0 to π/2. Because the calculated intersection curves I 1 for different intersection times are located on different RRDEs, it is hard to compare them quantitatively.To solve this problem, the unified time RRD ellipsoid presented in Section II-B is introduced here.Through ( 6)-( 8), the intersection I 1 for any intersection time can be projected onto the same UTRRDE to obtain the corresponding curve I 2 , as shown in Fig. 7. B. Solving the Largest Coverage on UTRRDE To solve the largest coverage on the UTRRDE swept by the pursuer P during the entire relative motion process, the RRDE of P and the RRDE of the evader E are propagated along the time axis.First, the positional relationship between two RRDEs at the given time t can be judged as follows. Rewrite (12) into the following quadratic equations: where Since A E and A P are positive definite, Q i (X) < 0, (i = P, E) defines the inside of the RRDE i , and Q i (X) > 0 correspond to the outside. 
Generally, Q E (X) = λ define all level curve ellipsoids [34] of RRDE E , where the minimum (negative) value of λ and λ = 0 define the O E and the RRDE E , respectively.The minimum level value λ 0 and maximum level value λ 1 of the level curve ellipsoids that intersect the RRDE P can be calculated by minimizing Q E (X) subject to the constraint Q P (X) = 0, which can be solved by the method of Lagrange multipliers [35]. Define the Lagrange function as where ε is the introduced Lagrange multiplier.Differentiating (21) yields where the gradient ∇ represents the derivatives in X. Setting (22) equal to zero yields Formally solving for X yields where δ(ε) = det(A E + εA P ) is a cubic polynomial in ε, and Y(ε) is a column vector that has components cubic in ε. Setting (23) equal to zero and replacing X with (25) yields which is a degree six polynomial in ε.The X can be computed by substituting the computed roots [36] of ( 26) into (25), and the corresponding value of Q E (X) can be calculated by (19).The minimum and maximum values of the calculated Q E (X) are exactly λ 0 and λ 1 , respectively.The positional relationship between two RRDEs at time t can be decided by comparing λ 0 and λ 1 with 0: If λ 0 > 0, two RRDEs are separated.If λ 0 < 0 and λ 1 > 0, two RRDEs intersect.If λ 1 < 0, the RRDE P is contained in the RRDE E . Then, three cases about the types of the nominal relative trajectory of P are discussed according to the positional relationships between two RRDEs along the time axis.Let T P denote the largest coverage of P. As mentioned, for the separation case, T P = ∅.For the intersection case, T P is the union of all the areas enclosed by the intersection curve I 2 during the time interval (t ex1 , t ex2 ), as shown in Fig. 9(a).Similarly, for the contained case, T P is the union of all the areas enclosed by the intersection curve I 2 during the time interval (t ex1 , t in1 ) and (t in2 , t ex2 ), as shown in Fig. 9(b). Considering the infinity of numbers of the rotation angle α around the line O P O E and the intersection moment t on the time axis, an ellipsoidal polygon with finite vertices, which all lie on the intersection curve I 2 , is used to replace I 2 , and a time step is selected for discrete calculations along the time axis.Specifically, the larger the number of polygon vertices and the smaller the time step, the higher the solution accuracy but the higher the computational burden.Under this approximation, the union of two intersection curve areas at different intersection time can be calculated by judging the inclusion relationships of the vertices of two ellipsoidal polygons [37]. In this way, T P of each pursuing spacecraft is recorded as a data table, which takes the intersection time t as the data index and the geodetic coordinates of all vertices of the approximate polygon of the intersection curve I 2 at time t as the data value, and is stored in a database for all the pursuers.The boundary of T P is recorded as a series of vertices associated with the intersection times. Eventually, the EZ can be obtained by considering the threats of all the pursuing spacecraft.Let T Pi , (i = 1, 2, …, N) denote the largest coverage of pursuing spacecraft P i solved at the decision moment t.For R intercept = 0, the EZ t U t (see Fig. 10) can be calculated ultimately by removing all these largest coverage from the entire unified time ellipsoid where 0 is the entire UTRRDE at the unified time t U . IV. 
OPTIMAL EVASION GUIDANCE LAW In this section, the analysis of the two-sided optimal strategies for this orbital evasion problem is presented first, and the concept of the EV is proposed next.Finally, a proposed EV-optimal guidance law is given to avoid interception by multiple pursuing spacecraft. A. Two-Sided Optimal Strategies Analysis In order to ensure the security of the evasion trajectory, the optimal interception strategy of the pursuers for a given evasion trajectory needs to be analyzed. Before discussion, the concepts of the evasion impulse and the pursuit impulse as shown in Fig. 11 are clarified first.An evasion impulse here denoted by V E [r d ,t U ] represents the impulse of the characteristic trajectory with the terminal position r d on the UTRRDE.A pursuit impulse here denoted by V Pi [r d ,t] represents the impulse of the relative transfer trajectory of P i that can make P i intercept the evader after time t, which moves along the characteristic trajectory with the terminal position r d on the UTRRDE.According to Section III, the terminal position r d is exactly located on the intersection curve I 2 of the intersection time t on the UTRRDE. Two cases about the two-sided optimal strategies at decision moment t are discussed as follows. 1) If t U t = ∅, i.e., the calculated EZ is not empty as shown in Fig. 12, then any evasion impulse V E [r d ,t U ] satisfying that r d ∈ t U t leads to a successful evasion, which means that the pursuers cannot intercept the evading spacecraft with abovementioned evasion impulse in a limited time, no matter what strategy they adopt. In this case, the pursuers will attempt to minimize the miss distances with the maneuvered evader intuitively.On the contrary, the evading spacecraft needs to maximize the minimum miss distance of all pursuers among all the successful evasion trajectories. The miss distance M Pi is defined as the minimum distance between the maneuvered pursuer P i and the maneuvered evader E along the time axis, which can be calculated by M Pi = min{|r Pi (t ) − r E (t )||t ∈ [0, τ ]}, where r Pi (t) and r E (t) represent the relative position vector of the maneuvered pursuer P i and the maneuvered evader E at time t in the reference LVLH frame, respectively.Obviously, the optimal strategy of P i is to adopt the maneuver impulse that can minimize M Pi .Actually, according to the definition of the largest coverage on the UTRRDE, the minimum miss distance must fall on the situation in which the terminal position of the characteristic trajectory is located on the largest coverage T Pi since T Pi stands for the boundary of the maximum available velocity increment of P i .Therefore, the optimal pursuit impulse V Pi [r d2 ,t] can be solved by searching r d2 and t on T Pi to minimize the miss distance M Pi , which can be represented as an optimization problem as follows: min (28) The minimum miss distance of all pursuers is finally obtained by M min = min{M P1 , M P2 , . . ., M PN }.As mentioned, the optimal evasion impulse V E [r d1 ,t U ] can be obtained by searching r d1 on the EZ to maximize M min , which can be represented as min 2) If t U t = ∅, i.e., the EZ does not exist as shown in Fig. 13 because the UTRRDE has been completely covered by the threat areas of all pursuers, then the pursuers can always find an appropriate strategy to intercept the evader, no matter what maneuver impulse the evader adopts. 
In this case, the pursuers will attempt to minimize the interception time for each given evasion impulse while the evading spacecraft attempts to maximize the minimum time to be intercepted among all the possible evasion trajectories. With the help of the established database in Section III, the minimum time t min Pi to be intercepted by P i for a given evasion trajectory pointing to a certain terminal position r d2 on the UTRRDE can be computed by searching in the database for the minimum intersection time satisfying that the terminal position r d2 is inside the area enclosed by intersection curve I 2 at this intersection time.Then, the optimal impulse of P i can be determined as V Pi [r d 2 , t min Pi ].Let t min P denote the minimum time among all the valid t min Pi .The optimal evasion impulse V E [r d1 ,t U ] can be obtained by searching r d1 on 0 to maximize t min P , which can be represented as min B. Escape-Value-Optimal Guidance Law According to the two-sided optimal strategies, the EV is defined here as a scalar describing the effect to escape of an evasion trajectory pointing to the terminal position r d on the UTRRDE, which can be computed by where t min P is the minimum time to be intercepted of the evader for the given evasion impulse V E [r d ,t U ] when the EZ is empty, T SI is the dimension of time, M min is the minimum miss distance of all pursuers for the given evasion impulse V E [r d ,t U ] when the EZ is not empty, and L SI is the dimension of distance. The EV described previously can be used as the basis for a closed-loop evasion guidance scheme for an evading spacecraft against multiple pursuing spacecraft. In this scheme, at each decision moment of the evader, the evasion impulse is determined to obtain the largest EV, which generally implies the greatest evasion possibility.Corresponding to two different cases in the EV calculation, when the EZ exists, the optimal evasion impulse is desired to achieve the largest minimum miss distance of all pursuers.In addition, when the EZ does not exist, the evasion impulse is still optimized for acquiring the maximum time to be intercepted by the pursuers, which means that every nonoptimal maneuver of the pursuing spacecraft will result in the increase of the interception time and may eventually lead to successful evasion.Such a guidance scheme could be implemented as follows. 1) Obtain the current relative states of all N pursuers.2) For the pursuing spacecraft P i , propagate the nominal relative motion from the current state to time T r /2 using the nonlinear relative dynamic model [38], which is a fairly sufficient choice to ensure that all externally and internally tangent moments of two RRDEs can be taken into account.3) Choose a small time step t s for the RRDE propagation.Calculate λ 0 , λ 1 and judge the positional relationship between the RRDE Pi and RRDE E along the nominal trajectory by ( 19)- (26).Determine the type (separation, intersection, or contained) of the nominal trajectory for P i as in Fig. 8, and record the time t ex1 and t ex2 for the intersection case or time t ex1 , t in1 , t in2 and t ex2 for the contained case. 
4) For the separation case, let the largest coverage T Pi be empty, and skip to step 7).5) For the intersection or contained case, calculate all the intersection curves I 1 of the RRDE Pi and RRDE E from 0 to τ 1 with time step t s using the method in Section III-A, and project them onto the UTRRDE through ( 6)-( 8) to obtain the corresponding curves I 2 .Save all the curves I 2 into the database for P i .6) Calculate the union of all I 2 to obtain T Pi .7) Repeat the procedure from steps 2) to 6) until the largest coverage of all pursuers has been calculated.8) Calculate the EZ through (27).9) Determine the optimal evasion maneuver by searching r d on the EZ (if the EZ is not empty) or on 0 (if the EZ is empty) to minimize the EV computed by using (31).10) This procedure, given by the preceding steps, is continued until the terminal conditions of the evasion process are satisfied.If R min is larger than R alert , then successful evasion has been attained. It should be noted that this guidance scheme provides a conservative evasion strategy for the evading spacecraft, in which all the pursuing spacecraft are considered "smart" enough, i.e., will adopt the optimal pursuit strategy.In other words, the provided evasion strategy is a greedy and locally optimal solution rather than the global optimal solution for the entire game with more than one round.However, thanks to the calculated EZ, this local optimal evasion strategy can be quite concise, fast, and effective.In addition, considering the uncertain antagonism of the pursuers, it is extremely difficult to accurately determine the total number of rounds of the entire game when the strategies of pursuers are unknown, while the guidance scheme could guarantee the basic proceeds of the evading spacecraft in the real-time games. V. NUMERICAL EXAMPLES In this section, the effectiveness of the proposed evasion guidance scheme is verified by a numerical simulation of an orbital evasion scenario against several pursuing spacecraft. An evading spacecraft E and four pursuing spacecraft P 1 , P 2 , P 3 , P 4 are included in this orbital evasion scenario, which starts from the initial time t 0 = 0 s when the minimum distance R min between the pursuers and evader is less than the given alert distance R alert = 100 km of evader.The orbital elements at t 0 , including semimajor axis a 0 , eccentricity e 0 , inclination i 0 , right ascension of ascending node 0 , argument of perigee ω 0 , and mean anomaly M 0 , are given in Table III. The parameters of mass and maneuverability of the spacecraft are given in Table IV.As mentioned in Section II-B, the minimum time intervals T d between two impulses of pursuers and that of the evader are set to be equal, and the maximum available velocity increment 2.4 m/s for each impulse of the evader is set to be larger than that 2 m/s of the pursuer, which conforms to the assumption presented in Section II-B that the maximum available velocity increment of the single impulse for the evader is slightly larger than that of the pursuer. Define the initial reference orbit with the initial orbit of evader.The initial nominal trajectories of pursuers in the reference LVLH frame are portrayed in Fig. 14 to Table III.Suppose that the minimum valid interception distance of pursuers R intercept = 1 km.Then, the initial interception time along the nominal trajectory can be computed as about 4296.284s for P 3 and 4295.958s for P 4 , which indicates the necessity of evasion for E. 
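Before comparing the two pursuit strategies considered next, the escape-value rule that drives the evader's choice at each decision moment can be sketched as follows. The helper callables (miss_fn, time_fn), the candidate set, and the exact scaling inside escape_value are assumptions of this sketch standing in for the EZ and coverage computations of Sections III and IV; only the overall decision logic (rank EZ candidates by minimum miss distance, non-EZ candidates by minimum intercept time, and take the best) follows the scheme described above.

from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

@dataclass
class Candidate:
    r_d: Tuple[float, float]   # (longitude, latitude) of a terminal point on the UTRRDE [deg]
    in_escape_zone: bool       # whether r_d lies inside the computed EZ

def escape_value(min_intercept_time_s: float, min_miss_km: float, in_ez: bool,
                 T_SI: float = 1.0, L_SI: float = 1.0) -> float:
    """Escape value in the spirit of (31): distance-based (and always larger) when the
    candidate lies in the EZ, time-based otherwise. The exact scaling is a guess of this
    sketch, not the paper's formula."""
    if in_ez:
        return 1.0 + min_miss_km / L_SI
    scaled = min_intercept_time_s / T_SI
    return scaled / (1.0 + scaled)          # kept below 1 so EZ candidates always win

def choose_evasion_target(candidates: Sequence[Candidate],
                          miss_fn: Callable[[Tuple[float, float]], float],
                          time_fn: Callable[[Tuple[float, float]], float]) -> Candidate:
    """Pick the candidate terminal position with the largest escape value, where miss_fn
    and time_fn stand in for the minimum-miss-distance and minimum-intercept-time
    computations of Section IV."""
    def ev(c: Candidate) -> float:
        if c.in_escape_zone:
            return escape_value(0.0, miss_fn(c.r_d), True)
        return escape_value(time_fn(c.r_d), 0.0, False)
    return max(candidates, key=ev)

if __name__ == "__main__":
    # Toy stand-ins for the EZ / coverage computations of Sections III-IV.
    candidates = [Candidate((-3.0, 38.0), True), Candidate((120.0, -10.0), False)]
    best = choose_evasion_target(candidates,
                                 miss_fn=lambda rd: 1.6 if rd[1] > 0 else 0.4,
                                 time_fn=lambda rd: 900.0)
    print("chosen terminal point:", best.r_d)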
Two cases of different pursuing strategies are considered: case 1) the optimal pursuing strategy described in Section IV-A; and case 2) the minimum zero-effort-miss interception strategy solved by the differential evolution algorithm [39] at each decision moment with the constraint of the maximum available velocity increment V maxP for each impulse. The evading spacecraft is guided through the proposed guidance scheme in both cases. All the spacecraft are simulated to move in a two-body gravitational field in the numerical examples. The results and analysis of the numerical examples are presented as follows. For case 1, the evasion trajectory of the evader and the pursuit trajectories of the pursuers in the reference frame are portrayed in Fig. 15. Projections of these transfer trajectories on the X-Y plane, X-Z plane, and Y-Z plane of the reference LVLH frame are shown in Figs. 16-18, respectively. Fig. 19 portrays the time history of the distance between each pursuing spacecraft and the evading spacecraft. As shown in the figures, the evading spacecraft guided through the proposed scheme successfully escapes from all the pursuing spacecraft at 15715.374 s, at which R min is larger than R alert again. It is shown in these figures that the entire evasion process can be approximately considered as two stages separated at about 6000 s in case 1. In the first stage, the evading spacecraft attempted to break through the encirclement of the pursuing spacecraft, after which the evader focused on keeping away from all the pursuers through its stronger maneuverability in the second stage. The relative position history shown in Fig. 20 and the pursuit-evasion velocity increment history in the inertial frame shown in Fig. 21 also confirm this ratiocination. The detailed transfer trajectories within 6000 s are portrayed in Fig. 22, where the dashed lines denote the initial nominal trajectories of the pursuers. All the intersections I 2 during the intersection time intervals and the largest coverage T P1 calculated for P 1 at t 0 in the reference LVLH frame are shown in Fig. 23(a) by the blue dotted lines and the red solid line, respectively. Meanwhile, Fig. 23(b) portrays the geodetic coordinates of all I 2 and T P1 on the UTRRDE. Similarly, the intersections and the largest coverage for P 2 , P 3 , and P 4 at t 0 are portrayed in Figs. 24-26, respectively. As shown in these figures, the results of the largest coverage coincide with the conclusion obtained in Section III-B. As a result, the EZ at t 0 is ultimately obtained through (27). The geodetic coordinates of the EZ at t 0 are shown by the gray area in Fig. 27. Obviously, the EZ is equal to the remaining area after removing all the calculated largest coverage of the pursuers from the entire UTRRDE. In the above simulations, the number of polygon vertices used to replace I 2 was set to 360 with a step of 1° for the rotation angle α, and the time step for discrete calculations was set to approximately 5.562 s, corresponding to the intersection time interval equally divided into 200 segments, which is preliminarily determined by the intersection time range of the two RRDSs. Under such parameter configurations, the computing time for a data table is about 10.934 s on an Intel(R) Core(TM) i7-7700 CPU at 3.60 GHz. As for the two-sided optimal strategies, considering that the EZ exists at t 0 , the optimal evasion impulse is desired to achieve the largest minimum miss distance M min of all pursuers. According to Section IV-A, the maximum M min is obtained as 1.596692 km when the longitude of the terminal evasion position r d on the UTRRDE is −3.376190° and the latitude is 37.528571°. Correspondingly, Fig. 28 illustrates the heat map of M min calculated for all the terminal evasion positions generated by traversing the longitude ∈ [−180°, 180°] and latitude ∈ [−90°, 90°] with a step of 1°. The maximum M min calculated in this way is 1.576265 km, at longitude = −3° and latitude = 38°, which is close to the optimization results.
For case 2, the evasion trajectory of evader and the pursuit trajectories of pursuers are portrayed in Figs.29-32.Fig. 33 portrays the time history of the distance between each pursuing spacecraft and the evading spacecraft.As shown in the figures, the evading spacecraft successfully escape from all the pursuing spacecraft at 14353.583 s, which is smaller than 15715.374s in case 1.In addition, as seen in Fig. 33, the minimum distance between pursuers and evader during the entire evasion process is 13.020359 km in case 2, which is larger than that of 6.462936 km in case 1.This is intuitive and can be accounted for by the nonoptimality of the pursuing strategy in case 2 since the pursuers did not consider any possible maneuver of the evader in this case. It should be noted that although the total time of the evasion process is relatively larger than τ , the effect of guidance can still be guaranteed because the reference orbit for optimal impulse calculation is updated to the current orbit of evader at every decision moment to make sure the RRDE of evader is always located at the origin of reference frame while calculating the EZ, and the maneuver period T d is much smaller than τ . VI. CONCLUSION In this article, an escape-zone-based optimal evasion guidance law was proposed for an evading spacecraft on near-circular reference orbit against multiple pursuing spacecraft.The concept of EZ was presented for the impulse orbital evasion problem first.A method based on the approximate RRD ellipsoid was proposed to calculate the EZ against multiple pursuers for each maneuver moment.The EV was defined according to the two-sided optimal strategies analysis and eventually used as the basis for the closed-loop evasion guidance scheme. The proposed evasion guidance scheme provides a conservative EV-optimal evasion strategy for the evading spacecraft by considering the optimal intercept strategies of pursuers such that to obtain the largest minimum miss distance of all pursuers when EZ exists or the maximum time to be intercepted by pursuers when EZ is empty.Finally, the numerical examples of an orbital evasion scenario with one evader and four pursuers were provided to verify the effectiveness of the presented method for EZ calculation and the proposed guidance scheme.The simulation results showed that the evader successfully escaped from all the pursuers in both cases against the optimal and nonoptimal pursuing strategies.Moreover, a better performance of evasion process, including a larger minimum distance and a smaller evasion time, against the nonoptimal pursuing strategy was found than that against the optimal pursuing strategy. APPENDIX A. Derivation for Approximate RRD Considering a short-distance relative motion scenario with the circular reference orbit, after time t of applying the impulse V with the components [ V x V y V z ] in the reference orbital frame, the components of the relative position of the spacecraft based on CW equations satisfy where ω is the mean angular motion of the reference orbit, x 0 , y 0 , z 0 , ẋ0 , ẏ0 , and ż0 are components of the initial relative position vector and relative velocity vector in the reference LVLH frame, and ξ V and η V are two angles characterizing the direction of the initial impulse vector in the reference LVLH frame. 
Move the terms related to the initial state in (32) and (33) to the left side of the equations. Taking the square of both sides and adding the two equations of x and y gives (34) and (35). Then, (34) and (35) can respectively be written as (38) and (39). Equations (38) and (39), i.e., (1) in Section II-A, are the calculation formulas for the 3-D RRD at time t. Let F 1 (t) denote the right side of (38). When the relative motion time t is much smaller than the period T r of the reference orbit, use F 2 (t) = [κ 2 + (κ 1 + κ 3 + κ 4 + κ 5 )/2]( V cos η V )^2 to approximate F 1 (t). In this way, the approximate envelope of the RRD can be obtained. The rationality of the above approximation is proved from the following two aspects. First, the equivalence of F 1 (t) and F 2 (t) in the neighborhood of t = 0 for the above-mentioned approximation process is demonstrated as follows.

B. Guidance Scheme for Pursuing Spacecraft in Simulation Case 1 of Section V

The guidance scheme for the pursuing spacecraft P i in simulation case 1 of Section V could be implemented as follows.
1) Obtain the current relative state of the evading spacecraft and calculate the state relative to the evader.
2) Propagate the nominal relative motion from the current state to time T r /2 using the nonlinear relative dynamic model.
3) Choose a small time step t s for the RRDE propagation. Calculate λ 0 and λ 1 and judge the positional relationship between the RRDE Pi and RRDE E along the nominal trajectory by (19)-(26). Determine the type (separation, intersection, or contained) of the nominal trajectory for P i as in Fig. 8, and record the times t ex1 and t ex2 for the intersection case, or the times t ex1 , t in1 , t in2 , and t ex2 for the contained case.
4) For the separation case, let the largest coverage T Pi be empty and skip to step 7).
5) For the intersection or contained case, calculate all the intersection curves I 1 of the RRDE Pi and RRDE E from 0 to τ 1 with time step t s using the method given in Section III-A, and project them onto the UTRRDE through (6)-(8) to obtain the corresponding curves I 2 . Save all the curves I 2 into the database for P i .
6) Calculate the union of all I 2 to obtain T Pi .
7) Determine the optimal pursuit maneuver by searching r d2 and t on T Pi to minimize the miss distance M Pi , or directly as V Pi [r d2 , t min Pi ].
8) Continue this procedure until the terminal conditions of the orbital game are satisfied. If R min is smaller than R intercept , then successful interception has been attained.

Fig. 19. Distance history between each P and E in case 1.
Fig. 23. Calculated I 2 and T P1 for P 1 of intersection type at t 0 .
Fig. 24. Calculated I 2 and T P2 for P 2 of intersection type at t 0 .
Fig. 25. Calculated I 2 and T P3 for P 3 of contained type at t 0 .
Fig. 28. Heat map of M min calculated for different r d on EZ.
Fig. 31. X-Z projection of transfer trajectories in case 2.
Fig. 33. Distance history between each P and E in case 2.
TABLE I. Results for more reference orbital heights.
TABLE II. Results for more impulse magnitudes.

1) Case 1, separation: if λ 0 > 0 holds for any time t, then the largest coverage does not exist since the two RRDEs do not intersect at any time [see Fig. 8(a)].
2) Case 2, intersection: if λ 1 > 0 holds for any time t, and there exists a time t such that λ 0 < 0, then the two RRDEs intersect in a period of time between t ex1 and t ex2 , where t ex1 and t ex2 represent the first and the second externally tangent times of the two RRDEs, respectively [see Fig. 8(b)]. Therefore, the largest coverage is a continuous area on the UTRRDE swept by P during the time interval (t ex1 , t ex2 ).
3) Case 3, contained: if there exists a time t such that λ 1 < 0, then the two RRDEs intersect in two time intervals between which the RRDE P is totally contained in the RRDE E , as shown in Fig. 8(c). Therefore, the largest coverage contains two separate areas on the UTRRDE successively swept by P during the time intervals (t ex1 , t in1 ) and (t in2 , t ex2 ).

TABLE III. Initial orbital elements of spacecraft.
TABLE IV. Mass and maneuverability parameters of spacecraft.
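The three-way classification listed above reduces to inspecting the signs of λ 0 and λ 1 along the sampled nominal trajectory, as used in step 3) of the pursuer guidance scheme. A schematic implementation is sketched below; the arrays lambda0 and lambda1 are assumed to have been produced by (19)-(26) on a uniform time grid, and the tangency times are located only to grid resolution.

```python
import numpy as np

def classify_coverage(times, lambda0, lambda1):
    """Classify the positional relationship of RRDE_P and RRDE_E over a sampled
    nominal trajectory (cf. Fig. 8): 'separation', 'intersection', or 'contained'.
    Assumed convention: lambda0 < 0 marks times at which the two ellipsoids
    intersect, and lambda1 < 0 marks times at which RRDE_P lies inside RRDE_E."""
    lambda0 = np.asarray(lambda0, float)
    lambda1 = np.asarray(lambda1, float)
    if np.all(lambda0 > 0):                       # never intersect
        return "separation", {}
    inter = np.flatnonzero(lambda0 < 0)           # indices of the intersection window
    t_ex1, t_ex2 = times[inter[0]], times[inter[-1]]
    if np.all(lambda1 > 0):                       # single intersection window
        return "intersection", {"t_ex1": t_ex1, "t_ex2": t_ex2}
    contained = np.flatnonzero(lambda1 < 0)       # RRDE_P fully inside RRDE_E
    t_in1, t_in2 = times[contained[0]], times[contained[-1]]
    return "contained", {"t_ex1": t_ex1, "t_in1": t_in1,
                         "t_in2": t_in2, "t_ex2": t_ex2}

if __name__ == "__main__":
    t = np.linspace(0.0, 3000.0, 601)
    # Illustrative signals only; in the guidance scheme these come from (19)-(26).
    lam0 = np.cos(t / 500.0)        # dips below zero, so an intersection window exists
    lam1 = np.ones_like(t)          # stays positive, so RRDE_P is never contained
    print(classify_coverage(t, lam0, lam1))
```

If sharper estimates of the tangency times are needed, a root-refinement step around the detected sign changes of λ 0 and λ 1 can be added on top of this grid-level classification.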
v3-fos-license
2018-03-02T22:39:13.294Z
2018-01-30T00:00:00.000
3531051
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.7554/elife.32572", "pdf_hash": "0aa35a98501cacaae4fad83f534b42d3ab8b924d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:262", "s2fieldsofstudy": [ "Biology" ], "sha1": "9ef987f677f12dd6e05d480e274e7bfd1ab27f99", "year": 2018 }
pes2o/s2orc
COX16 promotes COX2 metallation and assembly during respiratory complex IV biogenesis Cytochrome c oxidase of the mitochondrial oxidative phosphorylation system reduces molecular oxygen with redox equivalent-derived electrons. The conserved mitochondrial-encoded COX1- and COX2-subunits are the heme- and copper-center containing core subunits that catalyze water formation. COX1 and COX2 initially follow independent biogenesis pathways creating assembly modules with subunit-specific, chaperone-like assembly factors that assist in redox centers formation. Here, we find that COX16, a protein required for cytochrome c oxidase assembly, interacts specifically with newly synthesized COX2 and its copper center-forming metallochaperones SCO1, SCO2, and COA6. The recruitment of SCO1 to the COX2-module is COX16- dependent and patient-mimicking mutations in SCO1 affect interaction with COX16. These findings implicate COX16 in CuA-site formation. Surprisingly, COX16 is also found in COX1-containing assembly intermediates and COX2 recruitment to COX1. We conclude that COX16 participates in merging the COX1 and COX2 assembly lines. Introduction Aerobic organisms preferentially produce ATP through oxidative phosphorylation. The respiratory chain generates the proton gradient across the inner membrane that drives ATP production by the F 1 F o ATP-synthase. The oxidative phosphorylation system localizes to the inner membrane of mitochondria and comprises of five multi-subunit protein complexes, termed complexes I to V. With the exception of complex II, all complexes are comprised of nuclear-and mitochondrial-encoded subunits. Nuclear-encoded subunits are translated in the cytosol and imported into mitochondria, while the mitochondrial-encoded proteins are translated by membrane-associated mitochondrial ribosomes that associate with the inner membrane for cotranslational protein insertion into the mitochondrial inner membrane. Cytochrome c oxidase (COX) is the terminal protein complex of the electron transport chain. COX1, COX2 and COX3 are mitochondrial-encoded subunits that form the core of the complex to which nuclear-encoded proteins associate. The formation and biogenesis process of this complex requires a plethora of chaperone-like factors, termed assembly factors Ghezzi and Zeviani, 2012). Malfunction of many of these assembly factors and the concurrent defects in the assembly process have been linked to severe human disorders that usually affect tissues with high-energy demands, such as neurons, skeletal, and cardiac muscle (Carlson et al., 2003;Fernández-Vizarra et al., 2002;Ghezzi and Zeviani, 2012;Gorman et al., 2016). Assembly of the cytochrome c oxidase complex initiates with the synthesis and membrane integration of COX1. Subsequently, imported nuclear-encoded subunits and COX2 and COX3, associate with the COX1-containing assembly module in a sequential manner. Mitochondrial ribosomes selectively translating COX1 mRNA initially associates with the early assembly factor C12ORF62 (hCOX14), MITRAC12 (hCOA3) and CMC1 forming an assembly intermediate termed MITRAC (Bourens and Barrientos, 2017;Carlson et al., 2003;Mick et al., 2012;Ostergaard et al., 2015;Richter-Dennerlein et al., 2016). MITRAC promotes translation and co-translational membrane insertion of COX1 through association with OXA1L (Richter- Dennerlein et al., 2016;Su and Tzagoloff, 2017). Furthermore, MITRAC12 provides stability to the newly synthesized COX1 protein. 
The nuclear-encoded subunits COX4 and COX5A are thought to associate with COX1 prior to recruitment of COX2 into this assembly module Mick et al., 2012). The assembly process of COX2 initiates with OXA1L-and COX18-mediated membrane insertion (Fiumera et al., 2007;Sacconi et al., 2005;Soto et al., 2012;Su and Tzagoloff, 2017). A relay of metallochaperones in the intermembrane space mediates copper insertion into the C-terminus of COX2 and concomitant formation of the Cu A (Bourens et al., 2014;Carlson et al., 2003;Fiumera et al., 2009;Khalimonchuk and Winge, 2008). The copper relay is initiated by COX17, which is crucial for copper delivery to both COX1 and COX2. COX11 mediates the transfer of copper from COX17 to COX1, while delivery of copper to COX2 involves SCO1, SCO2 and COA6 (Baertling et al., 2015;Ghosh et al., 2016;Pacheu-Grau et al., 2015;Stroud et al., 2015). A chaperone FAM36A acts in the early steps of COX2 maturation, to provide stability to the newly synthesized protein and act as a scaffold for the metallochaperone (Bourens et al., 2014;Mick et al., 2012). The copper delivery process is proposed to be sequential, with SCO2 acting upstream of SCO1 (Baertling et al., 2015;Calvo et al., 2012;Leary, 2010;Stiburek et al., 2009;Valnot et al., 2000). Although the exact mechanism of metalation is still unknown, COA6 appears to cooperate with the SCO proteins in this process. Eventually, COX2 associates with early COX1-containing assembly intermediates and thus the two biogenesis pathways merge. COX16 is a conserved protein initially identified in yeast as required for the biogenesis of cytochrome c oxidase. However, the function of this protein remains ill defined (Carlson et al., 2003;Ghosh et al., 2014). Based on our finding that COX16 copurifies with assembly intermediates of COX1, we set out to assess the function of COX16 in human mitochondria. As expected, utilizing a human COX16 knock out cell line, we show that COX16 is required for cytochrome c oxidase biogenesis. Surprisingly, our analyses demonstrate that COX16 specifically interacts with newly synthesized COX2. In the COX2 biogenesis process, COX16 is required for SCO1 but not SCO2 association with COX2, implicating COX16 in CuA site formation. Patient mimicking amino acid exchanges in SCO1 and COA6 impact COX16 association with these metallochaperones. Moreover, COX16 facilitates COX2 association with the MITRAC assembly intermediate containing COX1. We conclude that COX16 is a constituent of the Copper-insertion machinery and escorts COX2 to the MITRAC-COX1 module for progression of cytochrome c oxidase assembly. COX16 interacts with the MITRAC complex In S. cerevisiae, Cox16 has been implicated in the biogenesis of cytochrome c oxidase (Baertling et al., 2015;Carlson et al., 2003). Human COX16 has so far not been analyzed for its function. Recent work on ScCox16 suggested a role in Cox1 biogenesis (Stiburek et al., 2009;Su and Tzagoloff, 2017). In agreement with this suggestion, we identified human COX16 in affinity purified MITRAC12-containing complexes by quantitative mass spectrometry using stable isotope labeling by amino acids in cell culture (SILAC) (Mick et al., 2012;Valnot et al., 2000). Accordingly, we identified COX16 among proteins that copurified specifically with COX1-containing assembly intermediates. 
To confirm the mass spectrometric data, we performed immunoisolation of MITRAC12, C12ORF62 and MITRAC7, all representing assembly factors for COX1 at different stages of the assembly process, from solubilized mitochondria. Established MITRAC components, such as COX1, COX4-1, as well as mitochondrial ribosomes, were efficiently copurified with the baits ( Figure 1A). However, we observed that COX16 copuridfied solely with MITRAC12. Thus, we Alignment of the human (H. sapiens) COX16 amino acid sequence to its yeast (S. cerevisiae) homolog using ClustalW. Predicted transmembrane spans are highlighted in gray; asterisk (*) indicates similar residues; colon (:) indicates identical residues. (C) Submitochondrial localization analysis of COX16 using protease protection assays. Wild type and hypotonically swollen mitochondria were treated with proteinase K (PK). Samples were analyzed by SDS-PAGE and western blotting. Asterisk (*), non-specific signal. (D) Membrane association of COX16 was analyzed using mitochondria that were subjected to carbonate extraction or detergent lysis. T, total; S, soluble fraction; P, pellet. DOI: https://doi.org/10.7554/eLife.32572.002 confirmed the mass spectrometric data but revealed that COX16 apparently only associates with a specific COX1 assembly state and is not a constitutive COX1-associated factor. In Saccharomyces cerevisiae, Cox16 is an integral innermembrane protein with a mitochondrial targeting sequence at the N-terminus (Kim et al., 2012;Su and Tzagoloff, 2017;Tiranti et al., 1999). However, a comparison between the yeast and human primary sequence showed that human COX16 lacks a predictable Nterminal presequence ( Figure 1B). Since, the human COX16 does not complement the yeast mutant strain (Carlson et al., 2003) we tested the submitochondrial localization of COX16. To this end, we performed hypo-osmotic swelling and carbonate extraction experiments. The recovery of COX16 in each sample was determined by western blot analysis using an antiserum directed against the C-terminus of the protein. COX16 was present in isolated mitochondria and only became accessible to protease treatment when the outer membrane was disrupted ( Figure 1C). Since COX16 was resistant to carbonate extraction ( Figure 1D) and has a predicted single transmembrane span, we concluded that Cox16 is an inner mitochondrial membrane protein with its C-terminus facing the intermembrane space (IMS). COX16 is required for cytochrome c oxidase biogenesis To assess the function of COX16, we generated a knockout of COX16 using CRISPR/Cas9-mediated disruption of both the alleles in HEK-293T cells. The first exon of the gene was targeted and the selected knockout clone was a compound heterozygote for 26-nucleotide deletion (16_42del) and a single nucleotide insertion (16_17insT). A premature stop codon was introduced in all alleles as a result of the mutations, leading to a complete lack of COX16 in these cells. We carried out steady state analyses of mitochondrial proteins isolated from wild type and COX16 -/cells. COX16 -/mitochondria displayed a marked reduction in the levels of the mitochondrially-encoded COX subunit COX2 and late assembling nuclear-encoded subunits such as COX6C (Figure 2A). In the course of our analyses, we also noted that the amount of FAM36A in mitochondria was varied between different mitochondrial preparations. Based on the comparison of several experiments, the slight reduction of FAM36A seen here did not appear to be linked to the presence or absence of COX16. 
In agreement with the observed reduction of COX proteins in the absence of COX16, COX16 -/cells also displayed growth retardation on glucose-and galactose-containing media ( Figure 2B). When we analyzed mitochondrial protein complexes by Blue Native-PAGE (BN-PAGE) and western blotting, we observed that mature cytochrome c oxidase was drastically reduced in the COX16 -/cells. As a loading control, the membranes were probed for VDAC. It is interesting to note that most of the COX1 in COX16 -/cells comigrated with the MITRAC assembly intermediate complex ( Figure 2C). Moreover, we separated solubilized respiratory chain complexes by BN-PAGE and performed in-gel activity staining for complexes I, IV, and V. While the activities of complex V were similar between wild type and COX16 knockout mitochondria, complex IV activity appeared significantly reduced and complex I activity slightly increased at the level of the supercomplex ( Figure 2D). When mitochondria were solubilized in DDM to dissociate supercomplexes, we observed that the absolute amount of mature cytochrome c oxidase was drastically affted in the COX16 -/cells and that COX1 was mainly present in faster migrating complexes ( Figure 2E). For a quantitative assessment, we measured cytochrome c oxidase activity and quantified the amount of enzyme by ELISA. In COX16 knockout cells, the cytochrome c oxidase amount was reduced to~50% compared to the wild type control ( Figure 2F, left). This reduction directly correlated with the reduction of complex IV activity to~65%, as compared to the control ( Figure 2F, right). In view of all these results, we concluded that loss of COX16 in HEK-293T cells lead to a severe reduction of cytochrome c oxidase. COX16 is required for COX2 assembly To analyze the effect of absence of COX16 on mitochondrial protein synthesis, mitochondrial translation products were pulsed labeled with [ 35 S]methionine. However, no significant differences were observed in the synthesis of either COX1, COX2, or any other protein ( Figure 3A). This was further supported by quantifications of the labeled proteins. Since protein synthesis was apparently not affected, we addressed whether the reduction of COX1 or COX2 in COX16 -/cells was a result of reduced protein stability. Pulse chase analysis was carried out to follow the fate of newly synthesized COX1 or COX2 over the course of 24 hr ( Figure 3B). (Surprisingly, we observed enhanced synthesis and stability of ATP6 and ATP8 in the absence of COX16.) Newly synthesized subunits are usually incorporated into the mature complex in the chosen time frame Leary et al., 2009). Interestingly, COX2 showed a marked reduction in stability in the absence of COX16 ( Figure 3C). A similar but non significant reduction was observed for COX1; however, proteins such as ND3 remained unaffected. To address whether the observed reduction in COX2 amounts correlated with a defect in the assembly process, labeled mitochondria-encoded proteins that assembled into mitochondrial protein complexes were analyzed by BN-PAGE followed by separation on the second dimension using SDS-PAGE. These analyses revealed a notable reduction of newly synthesized COX2 in mature cytochrome c oxidase ( Figure 3D). To further analyze mitochondrial protein complexes with respect to the presence of nuclear-encoded proteins, mitochondria from wild type and COX16 -/cells were analysed by 2D-BN/SDS-PAGE and western blotting. In agreement with our previous result, the methionine for 1 hr. 
Subsequently, the medium was replaced and cells were further cultured in standard medium (chase) for 3, 6, 12 and 24 hr. Cell extracts were analyzed by SDS-PAGE and digital autoradiography. (C) Quantifications using ImageQuant software of the indicated mitochondrial translation products from (B). The values represented were normalized to ND1 (mean ± SEM and n = 3; *p=0.029, **p=0.042, ***p=0.024, ns = non Figure 3 continued on next page ratios of COX1 present in the mature monomeric protein complex IV and in MITRAC complexes changed significantly in absence of COX16. This finding suggested that COX1 accumulated in MITRAC complexes in these cells. Similarly, COX2 was barely visible in the mature cytochrome c oxidase. Based on these findings, we concluded that a lack of COX16 impacts the maturation of COX1containing assembly intermediates. COX16 is required for SCO1 interaction with COX2 To clarify, if COX16 participated in the biogenesis of COX1 directly, we assessed association of COX16 with mitochondrial translation products. Therefore, we performed immunoisolations of COX16 from wild type cells after radiolabeling of mitochondrial translation products. Unexpectedly, in these analyses COX16 solely associated with the newly synthesized COX2 ( Figure 4A). Accordingly, COX16 is involved in the biogenesis of the COX2 assembly module. To this end, we addressed if absence of COX16 disturbed the assembly of COX2. Therefore, we performed immunoisolation of the COX2-specific metallochaperones SCO1, SCO2, and COA6 from wild type and COX16 -/mitochondria. COX16 coisolated with SCO1 and COA6 in immunoisolations from wild type cells, indicating that these proteins form a complex ( Figure 4B). Remarkedly, we did not observe coisolation of COX16 with SCO2. While a lack of COX16 lead to loss of COX2 association with SCO1 and COA6, the association between SCO2 and COX2 remained unaffected. To address the fate of newly synthesized COX2 in the absence of COX16, we carried out immunoisolations of SCO1, SCO2, COA6, and FAM36A after radiolabeling of mitochondrial translation products. Interestingly, only the association of SCO1 with newly synthesized COX2 was affected in the absence of COX16 ( Figure 4C and D). The binding of other COX2 chaperones such as SCO2, COA6, and FAM36A was unaffected by the absence of COX16. In contrast to the analysis shown in Figure 4B, these immunoprecipitations were performed from whole cells. Under these conditions, we do not observe a SCO2 signal in the COA6 eluates, probably due to the reduced amounts of protein. Based on this, we addressed if the interaction between COX16 and COX2 metallochaperones and the scaffold FAM36A depended on the presence of COX2. To this end, we performed immunoisolation of SCO1, SCO2, COA6, and FAM36A from mitochondria isolated from non-treated cells or cells treated with thiamphenicol, a specific inhibitor of mitochondrial translation (Banci et al., 2008;Mick et al., 2012). Under conditions of thiamphenicol treatment, associations between COX16 and any of the tested COX2-associated chaperones were lost indicating that newly synthesized mitochondrial-encoded proteins, likely COX2, are required for the interactions ( Figure 4E). It is crucial to note that thiamphenicol treatment did not affect the steady state levels of COX2, supporting the idea that the loss of interaction observed in the experiment was due to the absence of newly synthesized COX2. 
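The band quantifications described in this section (ImageQuant intensities normalized to ND1 and reported as mean ± SEM over n = 3 replicates) involve only a few lines of arithmetic; a generic sketch is given below. The intensity values are placeholders, not measured data.

```python
import numpy as np

def normalized_mean_sem(target, reference):
    """Normalize target-band intensities to the reference band (here ND1) within
    each replicate, then report mean and standard error of the mean (SEM)."""
    ratios = np.asarray(target, float) / np.asarray(reference, float)
    mean = ratios.mean()
    sem = ratios.std(ddof=1) / np.sqrt(ratios.size)
    return mean, sem

if __name__ == "__main__":
    # Placeholder intensities for n = 3 replicates (arbitrary units, not real data).
    cox2_wt = [1520, 1610, 1485]
    nd1_wt = [980, 1005, 960]
    cox2_ko = [690, 720, 655]
    nd1_ko = [990, 1012, 948]
    print("WT COX2/ND1: mean = %.2f, SEM = %.2f" % normalized_mean_sem(cox2_wt, nd1_wt))
    print("KO COX2/ND1: mean = %.2f, SEM = %.2f" % normalized_mean_sem(cox2_ko, nd1_ko))
```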
Pathogenic mutations have been reported for both SCO1 and COA6 -the two proteins with which COX16 apparently predominantly associates. SCO1 patients suffer from hypertrophic cardiomyopathy, neonatal hepatopathy, and ketoacidotic comas, whereas mutations in COA6 cause fatal infantile cardioencephalomyopathy (Baertling et al., 2015;Banci et al., 2007;Calvo et al., 2012;Cobine et al., 2006;Stiburek et al., 2009;Valnot et al., 2000). Since the clinical presentation of the patients differed markedly, it was intriguing to assess if COX16-association was selectively disturbed. Therefore, we transiently expressed C-terminally FLAG-tagged COA6 or SCO1 along with variants harboring individual pathogenic substitutions -W59C (Ghosh et al., 2014) and W66R (Baertling et al., 2015) in case of COA6, G132S (Stiburek et al., 2009) and P174L (Valnot et al., 2000) in SCO1. We performed immunoisolations of the FLAG-tagged proteins. Interestingly, we Protein complexes from wild-type (WT) and COX16 knockout (COX16 -/-) mitochondria were extracted under non-denaturing conditions and separated by BN-PAGE, followed by a second dimension SDS-PAGE and western blot analysis (top). Mitochondrial translation products were labeled with [ 35 S]methionine, prior whole cell lysis and complexes separation as described above (bottom). The proteins were detected by using indicated antibodies or by digital autoradiography (COX2, ATP6). Intensity curves (right) for COX1 signals from the western blotting and COX2 from the autoradiogram were calculated using ImageJ. Numbers in the gray regions denote area under intensity curves. For COX1 (top), it is represented as percentage of the total signal in CIV and MITRAC and for COX2 (bottom), as arbitrary units. CIV, Monomeric Complex IV; CV, Complex V; RSC, Respiratory Super-Complexes. DOI: https://doi.org/10.7554/eLife.32572.004 observed a considerable loss of association of COX16 with the pathogenic variants of SCO1. At the same time, the mutant versions of SCO1 maintained their association with COX2 ( Figure 4F). This finding indicated that the developed pathology in these patients was not due to defective COX2 recruitment but rather lack of association with COX16. In contrast, association of COX16 with pathogenic variants of COA6 was lost with a concomitant loss of interaction with COX2. In summary, methionine for 1 hr and whole cell extracts subjected to immunoprecipitation using anti-COX16 or control antisera. The eluates were analyzed by digital autoradiography after SDS-PAGE (Total, 5% and Eluate, 100%). (B) Immunoprecipitation from wild-type (WT) and COX16 knockout (COX16 -/-) mitochondria with anti-SCO1, anti-SCO2, anti-COA6 or control antisera. The eluates were analyzed by western blotting after SDS-PAGE with the indicated antibodies (Total 5% and Eluate, 50%). (C) Mitochondrial translation products in wild-type (WT) and COX16 knockout (COX16 -/-) were labeled with [ 35 S]methionine for 1 hr. Whole cell extracts were subjected to immunoprecipitation with anti-SCO1, anti-SCO2, anti-COA6, anti-FAM36A or control antisera. Eluates were separated by SDS-PAGE and Western blotting. Radioactive signals were visualized by digital autoradiography and membranes afterwards decorated with the indicated antibodies. (Total, 5% and Eluate, 100%). (D) Quantification of co-isolated COX2 from immunoprecipitations with the indicated antibodies from (C) (mean ± SEM and n = 3). 
(E) Mitochondria (without (-) or with thiamphenicol (TAP) (+) treatment) from wild-type (WT) and COX16 knockout cells (COX16 -/-) were subjected to immunoprecipitations using antibodies against SCO1, SCO2, COA6 and FAM36A. Samples were subjected to SDS-PAGE and analyzed by western blotting using the indicated antibodies (Total, 5% and Eluate, 100%). (F) Immunoisolations of COA6 FLAG or SCO1 FLAG along with variants harboring individual pathogenic substitutions (COA6 -W59C and W66R, SCO1 -G132S and P174L). Cells were solubilized and subjected to anti-FLAG immunoprecipitation and eluates analyzed by SDS-PAGE and western blotting using the indicated antibodies. WT, wild type. (Total 5% and Eluate, 50%). (G) Immunoprecipitation from wild-type (WT) mitochondria with anti-COA6, anti-RIESKE or control antisera. The eluates were analyzed by western blotting after SDS-PAGE with the indicated antibodies (total 5% and eluate, 50%). DOI: https://doi.org/10.7554/eLife.32572.005 COX16 is part of and directly involved in the biogenesis of the COX2 assembly module. Loss of COX16 association with SCO1 is a hallmark of the tested SCO1 patient models. To assess if COX16 was associated with respiratory chain supercomplexes, we isolated supercomplexes via the RIESKE protein of complex III. RIESKE purification led to coisolation of complex IV and I components. However, COX16 was not copurified in contrast to immunoisolation with the metallochaperone COA6 ( Figure 4G). We conclude that COX16 does not act on respiratory chain supercomplexes. COX16 facilitates integration of COX2 into MITRAC-COX1 modules Based on the above considerations, we hypothesized that a block in early stages of COX1 assembly may reflect accumulation of intermediates in COX2 assembly, which can not engage with COX1 to promote further maturation steps. To test this directly, we utilized SURF -/cells (Kim et al., 2012;Tiranti et al., 1999) and immunoisolated endogenous COX16 and COA6 after radiolabeling of mitochondrial translation products. In both cases, COX2 was specifically enriched in COX16 and COA6 eluates from wild type cells. However, the amount of COX2 was significantly enriched in COX16 and COA6 eluates from SURF -/cells ( Figure 5A). Hence, a block in COX1 assembly lead to accumulation of COX2-containing assembly modules containing COX16 or COA6. To substantiate the idea that COX16 associates with the MITRAC complexes only upon association with COX2, we targeted early steps of COX2 assembly. FAM36A is an early chaperone crucial for synthesis of COX2 (Bourens et al., 2014). Hence, we performed immunoisolations of MITRAC12 from wildtype and FAM36A -/mitochondria. Upon loss of COX16, COX2 was no longer copurified with MITRAC12, indicating that COX16 interacted with MITRAC12 only after associating with COX2 ( Figure 5B). To this end, we compared the amount of COX16, that associated with COX2 pathway constituents to that in complex with MITRAC12. Both COX16 and COX2, associated with MITRAC12 to a significantly lower magnitude as compared to other COX2 associated proteins ( Figure 5C), indicating that this interaction might be a fairly transient one. We were able to enrich the MITRAC12-COX16 complex when we applied native purification via MITRAC12 FLAG ( Figure 5D). Under these conditions, we were able to also detect COX16 in complex with C12ORF62 ( Figure 5E), which had not been apparent in the immunosiolations using the C12ORF62 antiserum ( Figure 1A). 
These findings raised the question if COX16 was required for recruitment of COX2 to COX1-containing MITRAC complexes. To this end, we examined protein co-purification with C12ORF62 and MITRAC12 in the presence or absence of COX16. Our analyses showed that COX2-association with MITRAC12 or C12ORF62 was drastically affected by the absence of COX16 ( Figure 5F). Moreover a slight but consistent increase in the association of COX1 with C12ORF62 and MITRAC12 was apparent. When similar imunoisolation experiments were carried out after radiolabelling of mitochondrial translation products, we observed a specific association of MITRAC12 with newly synthesized COX2, that was not observed for C12ORF62 ( Figure 5G). Moreover, in the absence of COX16, a defect in association of the newly synthesized COX2 with MITRAC12 was apparent ( Figure 5G). Concomitantly, the C12ORF62 and MITRAC12-associated COX1 levels were increased ( Figure 5G). We conclude, that COX16 facilitates the assembly of COX2-containing assembly modules to the COX1containing MITRAC-complexes possibly together with MITRAC12. However, association of MITRAC12 with COX2 is stimulated by COX16. Discussion In this study, we demonstrate an involvement of COX16 in the formation of the COX2 assembly module. Despite the identification and functional assessment of several assembly factors involved in the process of COX2 metalation, it has remained unknown as to how the COX2-containing module engages with its partner subunit COX1. Here we find that COX16 acts at two distinct stages of COX2 biogenesis, the recruitment of metallochaperones to COX2 and the merging COX2 and COX1 assembly routes ( Figure 6). Our analyses show: (1) COX16 facilitates association of the metallochaperone SCO1 to newly synthesized COX2; (2) the early metalochaperone COA6 strongly associates with COX16, further supporting the role of COX16 in COX2 maturation; (3) COX16 acts at a checkpoint for proper COX2 maturation leading to increased turnover of COX2 in the absence of COX16; (4) the COX2 assembly module accumulates in the absence of COX1 hemelation (SURF knockdown); (5) a small amount of COX16 also interacts with COX1-containing intermediates (copurified by C12ORF62 and MITRAC12) in an apparently transient manner; and (6) COX16 promotes Whole cell extracts were subjected to immunoprecipitation using anti-COX16, anti-COA6 or control antisera. Eluates were analyzed by digital autoradiography after SDS-PAGE (Total, 5% and Eluate, 100%). Quantification of co-isolated COX2 amounts with the indicated antibodies were performed using ImageJ (mean ± SEM and n = 3). (B) Immunoprecipitation from wild-type (WT) and FAM36A knockout (FAM36A -/-) mitochondria with anti-MITRAC12 or control antisera. The eluates were analyzed by western blotting after SDS-PAGE with the indicated antibodies (Total 5% and Eluate, 100%). (C) Immunoprecipitation from wild-type (WT) mitochondria with anti-MITRAC12, anti-COA6, anti-FAM36A or control antisera. The eluates were analyzed by western blotting after SDS-PAGE with the indicated antibodies (Total 5% and Eluate, Figure 5 continued on next page the integration of COX2 into the COX1-assembly module thereby becoming a transient constituent of the above mentioned intermediate. Cu A site formation on COX2 requires a relay of copper chaperones. Copper-loaded SCO2 is thought to bind to COX2 either as it is inserted into the inner membrane or immediately thereafter (Leary et al., 2009). This promotes SCO1 to be metallated by COX17 (Banci et al., 2008). 
There seems to be a dichotomy in the order of these steps. In the first scenario, both SCO2 and SCO1 sequentially bound to COX2 together, delivering one Cu 2+ each in the Cu A site. The other possibility involves their binding being spatiotemporally separated. Here we observe that in the absence of COX16 there is a reduction in the ability of newly synthesized COX2 to associate with SCO1 but not SCO2 ( Figure 4C). Thus, irrespective of whichever integration model for the SCO proteins be correct, COX16 specifically mediates the association SCO1 with COX2. Both scenarios have another layer of complexity in terms of Cu 2+ -ion association with SCO1. We observe that mutations in SCO1 do not reduce its ability to bind to COX2 ( Figure 4D). However, the association with COX16 is severely affected in the mutants. Studies on the P 174 L mutant, led to the idea that SCO1 has distinct faces for interaction with COX17 and COX2 (Banci et al., 2007;Cobine et al., 2006). Thus, it is Figure 5 continued 50%). (D) Mitochondria isolated from induced MITRAC12 FLAG cells were solubilized and subjected to anti-FLAG immunoprecipitation and eluates analyzed by SDS-PAGE and western blotting using the indicated antibodies. WT, wild type. (Total 5% and Eluate, 50%). (E) Mitochondria isolated from induced C12ORF62 FLAG cells were solubilized and subjected to anti-FLAG immunoprecipitation and eluates analyzed by SDS-PAGE and western blotting using the indicated antibodies. WT, wild type. (Total 3% and Eluate, 100%). (F) Mitochondria isolated from wild-type (WT) and COX16 knockout (COX16 -/-) were used for immunoprecipitation with anti-MITRAC12 or control antisera. The eluates were analyzed by western blotting after SDS-PAGE with the indicated antibodies (Total 5% and Eluate, 50%). (G) Antibodies against C12ORF62, MITRAC12 or control antisera were used for immunoisolation after [ 35 S]methionine labeling of mitochondrial translation products in wild-type (WT) and COX16 knockout (COX16 -/-) cells and analyzed by SDS-PAGE and digital autoradiography (Total, 5% and Eluate, 100%). Quantification of co-isolated COX1 or COX2 amounts with the indicated antibodies were performed using ImageJ (mean ± SEM and n = 3). DOI: https://doi.org/10.7554/eLife.32572.006 Figure 6. Model for the role of COX16. COX1 is assembled and guided through the assembly process through its association with MITRAC, where it awaits the association of COX2. COX2 is initially associated with FAM36A and metallochaperones such as SCO2 and COA6 in the early assembly stages. COX16 acts at the later stages of COX2 assembly. The early assembly factors are apparently no longer associated with the complex at this stage. COX16 facilitates the association of SCO1 and thus probably leads to proper COX2 maturation. It then facilitates the merger of COX1 and COX2 assembly lines after the exit of SCO1. DOI: https://doi.org/10.7554/eLife.32572.007 conceivable that COX17 delivers Cu 2+ to SCO1 only after SCO1 is bound to COX2. Hence, based on our studies, we propose that COX16 facilitates SCO1-COX2 binding and potentially the subsequent interaction of SCO1 with COX17. In agreement with a SCO1-specific function of COX16, the tested COA6 mutants loose association to COX2 and concomitantly also to COX16 ( Figure 4D). This indicates that events leading to lack of interaction of COX16 with SCO1 mutants are different to that of COA6 mutants. 
During assembly of the cytochrome c oxidase, particularly in the COX1 assembly module, MITRAC7 stabilizes COX1 in a late MITRAC intermediate Ghezzi and Zeviani, 2012). This is considered to enable the COX1-containing complex to receive additional subunits. Since absence of COX16 does not allow the assembly of COX2 to proceed, we find an accumulation of COX1 on MITRAC12 and C12ORF62 both being involved in initial stages of COX1-translation and assembly ( Figure 5C and Figure 5D). Similarly, by creating a block in the maturation of the COX1 module in SURF knockout cells, the COX2 assembly module accumulated together with COX16 ( Figure 5A). This supports current models according to which both assembly modules initiate independent of each other (Fernández-Vizarra et al., 2002;Ghezzi and Zeviani, 2012;Gorman et al., 2016;McStay et al., 2013;Timó n-Gó mez et al., 2017). Since, COX16 and COX2 are co-dependent with regard to their interaction with MITRAC12 ( Figure 5B and Figure 5C), we suggest that COX16 acts initially in COX2 maturation. Furthermore, we observe that only COX16 but not SCO1 associate with MITRAC12 ( Figure 5B). This finding suggests that the function of COX16 extends beyond SCO1 recruitment and that the role in merging the assembly lines between COX1 and COX2 is downstream of SCO1-dependent metalation. Hence, it is tempting to speculate that COX16 recognizes readiness for merging COX2 into the COX1 assembly line. In addition, we observe that MITRAC12, which is required for COX1 biogenesis, also binds to newly synthesized COX2 in a COX16-dependent manner. In contrast, C12ORF62, the most early COX1 interacting protein, does not display association with newly synthesized COX2. Hence, it is conceivable, that MITRAC12 cooperates with COX12 in late steps of COX2 biogenesis to link it to the COX1 module ( Figure 6). The finding of an interaction between human COX16 and COX1-containing assembly interemediates is in line with a recent study in yeast (Bourens and Barrientos, 2017;Mick et al., 2012;Ostergaard et al., 2015;Richter-Dennerlein et al., 2016;Su and Tzagoloff, 2017). However, the observed presence of yeast Cox16p in mature cytochrome c oxidase and its supercomplexes is not conserved in human. In fact, this finding illustrates that the situation in human cells differs considerably from yeast. We demonstrate that human COX16 is primarily associated with COX2 assembly modules, which has not been investigated in yeast. The association of COX16 with MITRAC assembly intermediates containing COX1 is COX2-dependent and thus likely transient in nature. Importantly, BN-PAGE analyses demonstrated that COX16 co-migrates with SCO1-containing protein complexes supporting quantitative association of COX16 with COX2 assembly intermediates ( Figure 3D). In summary, we demonstrate that human COX16 functions to stabilize the interaction between SCO1 and newly synthesized COX2. It subsequently facilitates the maturation of COX2 for proper insertion of Cu 2+ into Cu A sites of COX2. Lack of COX16 leads to increased turnover of newly synthesized COX2 and accumulation of COX1 in MITRAC. Hence, COX16 cooperates with MITRAC12 and copper chaperones to facilitate COX2 assembly with COX1-containing intermediate. , 2 mM L-glutamine and 50 mg/ml uridine at 37˚C under a 5% CO 2 humidified atmosphere. The cell lines were authenticated by STR profiling using eight different and highly polymorphic short tandem repeat loci at the Leibniz-Institut DSMZ, Braunschweig, Germany. 
All cell lines were regularly monitored for mycoplasma. Cell were treated either with 20 mg/ml emetine (Sigma-Aldrich GmbH, Munich, Germany) for 6 hr or with 50 mg/ml thiamphenicol (Sigma-Aldrich) for 2 days in DMEM medium, for inhibition of cytosolic or mitochondrial translation, respectively. Transfections were performed according to manufacturer's recommendations using GeneJuice (Novagen, Merck KGaA, Darmstadt, Germany). Briefly, approximately 300,000 cells/25 cm 2 were transfected using 4 ml of transfection reagent and 1 mg of DNA. Cells were either harvested or subjected to drug selection, 48 hr after transfections. COX16 -/-HEK-293T cell line was generated applying the CRISPR/Cas9 technology as previously described (Ran et al., 2013;Richter-Dennerlein et al., 2016). Briefly, oligonucleotides GCGAAAAGCACGCATCACCG and CGGTGATGCGTGCTTTTCGC containing the guide sequences were annealed and ligated into the pX330 vector. HEK-293T cells were co-transfected with pX330 and with the pEGFP-N1 plasmid. After three days, single cells expressing GFP were sorted by flowcytometry into 96-well plates. After colony expansion, single colonies were screened by immunoblotting. Mitochondrial isolation and protein localization assays Isolation of mitochondria from cultured cells was performed as described previously Richter-Dennerlein et al., 2016). Bradford analysis using BSA as a standard was used to measure protein concentrations. Carbonate extraction and mitochondrial swelling experiments were implemented as described previously (Fiumera et al., 2007;Mick et al., 2012;Soto et al., 2012). Briefly, for carbonate extraction of proteins, isolated mitochondria were incubated in 100 mM Na2CO3 (pH 10.8 or 11.5) followed by centrifugation for 30 min at 100,000 x g at 4˚C. Analysis of submitochondrial localization of proteins was performed by protease protection assay using proteinase K (PK). Isolated mitochondria were resuspended either in SEM buffer (250 mM sucrose, 1 mM EDTA, and 10 mM MOPS [pH 7.2]), to osmotically stabilize them, or in EM buffer (1 mM EDTA, and 10 mM MOPS [pH 7.2]), to rupture the outer mitochondrial membrane. As a positive control, mitochondrial membranes disrupted either by 1% Triton X-100 for carbonate extraction experiments or by sonication for submitochondrial localization experiments. In vivo labeling of mitochondrial translation products with [ 35 S] methionine In vivo labeling in human cells was performed as described previously (Chomyn, 1996;Fiumera et al., 2009;Khalimonchuk and Winge, 2008). Inhibition of cytosolic translation was achieved by treating cells either with 100 mg/ml emetine during pulse experiments, or with 100 mg/ml anisomycin (Sigma-Aldrich) in pulse chase experiments. Mitochondrial translation products were labeled with 0.2 mCi/ml [ 35 S]methionine for 1 hr. For chase experiments, the radioactive medium was substituted by adding fresh growth medium, followed by incubation at 37˚C under 5% CO 2 for the indicated time points. The cells were harvested in 1 mM EDTA/PBS. The samples were further analyzed either by SDS-or BN-PAGE and processed for affinity purification procedures. Signals were obtained by auto-radiography. Cytochrome C Oxidase activity and quantitation assay Specific activity and relative amount of cytochrome c oxidase were determined according to the manufacturer's instructions using Complex IV Human Specific Activity Microplate Assay Kit (Mitosciences, Abcam, Cambridge, United Kingdom). Total 15 mg of cell lysate was loaded per well. 
Cytochrome c oxidase activity was calculated by measuring the oxidization of cytochrome c and the decrease of absorbance at 550 nm. To measure the relative COX amounts, the lysates from the same batch were incubated with a specific cytochrome c oxidase antibody, conjugated to alkaline phosphatase. The increase of absorbance at 405 nm was measured. Affinity purification procedures Whole cells or isolated mitochondria (0.5 mg) were resuspended in lysis-buffer (50 mM Tris-HCl [pH 7.4], 150 mM NaCl, 0.1 mM EDTA, 10% glycerol, 1 mM PMSF, and 1% digitonin) to a final concentration of 1 mg/ml. This was followed by an incubation at 30 min at 4˚C under mild agitation. Nonsolubilized material was removed by centrifugation at 20,000 xg, 4˚C for 15 min. Supernatants were incubated with anti-FLAG-agarose (Sigma-Aldrich) or ProteinA-Sepharose (GE Healthcare, Chicago, IL) conjugated with specific or control antibodies. After washing of the resin, proteins were eluted with FLAG peptides or by pH shift (0.1 M Glycin [pH 2.8]). Miscellaneous SDS-PAGE and western-blotting of proteins to PVDF membranes (Millipore, Merck KGaA, Darmstadt, Germany) was performed using standard methods. Primary antibodies were raised in rabbits or purchased (anti-COX16, Protientech). HRP-coupled secondary antibodies applied to antigen-antibody complexes and detected by enhanced chemiluminescence on X-ray films. Statistical analyses Data are expressed as mean ± SEM. Significant differences between groups were analyzed using Prism five software (GraphPad Software, San Diego, CA) by unpaired Student t test and ANOVA, unless otherwise noted.
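The group comparisons above were performed in Prism with an unpaired Student's t test and ANOVA; an equivalent sketch using SciPy is shown below. The replicate values are placeholders rather than the measured data.

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b):
    """Unpaired (two-sample) Student's t test with the classical equal-variance
    assumption; returns the t statistic and the two-sided p value."""
    return stats.ttest_ind(a, b, equal_var=True)

if __name__ == "__main__":
    # Placeholder measurements (e.g., normalized COX2 levels), n = 3 per group.
    wt = np.array([1.00, 0.95, 1.04])
    ko = np.array([0.52, 0.61, 0.55])
    t_stat, p_val = compare_two_groups(wt, ko)
    print(f"unpaired t test: t = {t_stat:.3f}, p = {p_val:.4f}")

    # One-way ANOVA for comparisons across more than two groups.
    rescue = np.array([0.88, 0.92, 0.85])
    f_stat, p_anova = stats.f_oneway(wt, ko, rescue)
    print(f"one-way ANOVA: F = {f_stat:.3f}, p = {p_anova:.4f}")
```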
v3-fos-license
2019-09-09T21:21:54.327Z
2020-01-02T00:00:00.000
202021453
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/1343943X.2019.1651207?needAccess=true", "pdf_hash": "52e7e291f7cbcf537727827fbf0229d475266d3d", "pdf_src": "TaylorAndFrancis", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:264", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "sha1": "9e5a0bf927dcccf0d1c931520a9c1b856030fdb4", "year": 2020 }
pes2o/s2orc
Yield response of high-yielding rice cultivar Oonari to different environmental conditions ABSTRACT A new rice (Oryza sativa L.) cultivar, ‘Oonari’, was developed to reduce the shattering habit of the high-yielding cultivar ‘Takanari’. To evaluate the effects of environment on its yield and related traits, Oonari and Takanari were grown in multiple environmental conditions at three locations in Japan. The whole and filled brown rice yields of Oonari were around 1000 g m−2, similar to those of Takanari. But the panicle number m−2 of Oonari was slightly but significantly higher than that of Takanari and the spikelet number per panicle was slightly but significantly lower. With respect to the effect of environment on yield of Oonari, cumulative radiation during 40 days after heading was positively related to whole brown rice yield of Oonari. Yield tended to be higher under increased atmospheric CO2 concentration than under ambient. Oonari was confirmed to exhibit high yield in various environmental conditions. Graphical Abstract Introduction Rice (Oryza sativa L.) is an important crop and a major source of food for more than a third of the world's population (Khush, 1997). A continuous increase in rice production is required to meet the demands of increasing population (Tilman et al., 2011). 'Takanari' is one of the highest-yielding rice cultivars in Japan. The whole brown rice yield of Takanari is as high as about 1000 g m −2 in Japan (Yoshinaga et al., 2013). However, Takanari, derived from a cross between indicaand japonica-type parents, is prone to seed shattering (Ando, 1990), which can lead to yield losses during harvesting. To reduce the shattering habit of Takanari, a new cultivar, 'Oonari' was developed by γirradiation of Takanari at the National Agriculture and Food Research Organization (NARO) of Japan and released in 2015 (Kobayashi et al., 2016). The whole brown rice yield of Oonari was higher than that of Takanari (Kobayashi et al., 2016), but the yield traits have not yet been clarified. It is important to reveal yield response of new high-yielding rice cultivars to various environmental conditions. Crop growth and yield are affected by various environmental conditions (Horie et al., 1995;Peng et al., 2004;Porter et al., 2014;Yoshida & Parao, 1976). Among these, air temperature and solar radiation are the major environmental factors that determine crop yield. In addition, carbon dioxide concentration ([CO 2 ]) is a notable factor in light of the continuing global increase: atmospheric [CO 2 ] has risen from 280 µmol mol −1 before the Industrial Revolution to 409 µmol mol −1 in 2018 (Earth Systems Research Laboratory, 2018), and is projected to rise further during the next 50 years even if efforts are made to reduce emissions (Fisher et al., 2007). Yield responses to these conditions show varietal differences (Hasegawa et al., 2013;Nagata et al., 2016;Nakano et al., 2017;Shimono et al., 2009). However, the responsiveness of the yield-related traits of Oonari to environment has not yet been clarified. Therefore, in this study, we compared brown rice yield and yield components between Oonari and Takanari at three distant locations in Japan: Tsukuba (eastern lowlands, 12 m above sea level) where these cultivars were developed, Fukuyama (western lowlands, 1 m above sea level) and Nagano (central highlands, 340 m above sea level) which is known to be high yielding region for rice in Japan. We investigated the effect of environmental conditions on the yield traits. 
We also investigated the responses of the yield traits of Oonari to increased atmospheric [CO 2 ] in a free-air CO 2 enrichment (FACE) facility. Plant materials and cultivation Takanari and Oonari were grown in field experiments in all three locations. Cultivation methods in each location were shown in Table 1. Fertilizers were applied as following practice in each location. In Tsukuba, we applied a basal dressing of manure at about 1 kg m −2 and chemical fertilizers at rates of 16 and 12 g m −2 (P 2 O 5 and K 2 O) in two cultivars and 12 g N m −2 (as controlled-release fertilizer: equal parts LP40 and LP100; JCAM Agri, Tokyo, Japan). We applied a topdressing of LP40 at 4 g N m −2 at the panicle formation stage. LP40 and LP100 fertilizers release 80% of their total N content at a uniform rate up to 40 and 100 days, respectively, after application, at soil temperature at 25°C. In Fukuyama, we applied a basal dressing of chemical fertilizers at 12 g N m −2 (as controlled-release fertilizer: equal parts LP40 and LP100; JCAM Agri, Tokyo, Japan), rates of 10 g m −2 each of P 2 O 5 and K 2 O. We applied a topdressing of LP40 at 6 g N m −2 at the panicle formation stage. In 2014, we applied a basal dressing of 4.0 g m −2 (N, P 2 O 5 and K 2 O each), and topdressings of 2 g m −2 (N, P 2 O 5 and K 2 O each) at tillering, 3 g m −2 (N, P 2 O 5 and K 2 O each) at panicle neck node differentiation and panicle formation, and 3 g N m −2 at meiosis stage. In Nagano, we applied a basal dressing of manure at about 2 kg m −2 and a chemical fertilizer at the rate of 7, 9.8 and 11.2 g m −2 (N, P 2 O 5 and K 2 O). We applied topdressings of 3, 4.2 and 4.8 g m −2 (N, P 2 O 5 and K 2 O) at tillering, 7.5 and 3.8 g m −2 (N and K 2 O) at panicle formation stage, and 4.5 and 2.5 g m −2 (N and K 2 O) 14 days later. At maturity, when approximately 85% of grains became yellow, we sampled from each replicate to measure yield and its components in each site. Plants were harvested carefully by hand and packed immediately into nylon net bags to avoid loss by shattering during transport and drying. FACE experiment We transplanted Oonari on 24-25 May 2017 in the FACE experimental site as shown in Table 1. We applied a basal dressing of 19 g N m −2 (as controlled-release fertilizer: equal parts LP40, LPs100, and LP140; JCAM Agri, Tokyo, Japan as), and 10 g m −2 each for P 2 O 5 and K 2 O. LPs100 fertilizer releases 80% of its total N content at a sigmoidal rate up to100 days after application at soil temperature at 25°C. Detailed descriptions on the FACE experiment are presented by Nakamura et al. (2012). In brief, the elevated [CO 2 ] treatments were imposed on octagonal plots ('FACE rings') in the fields. Each 'FACE ring' which was 240 m 2 in area and 17 m in inside diameter, was prepared in combination with a corresponding ambient [CO 2 ] plot. We used four fields as replications, each with a pair of FACE and ambient control plots with their centers 75 m apart to minimize crosscontamination. From transplanting to harvest, the target [CO 2 ] was 200 µmol mol −1 above ambient, and CO 2 was supplied during daylight hours. The season-long daytime average [CO 2 ] was 585 ± 1.6 µmol mol −1 in the FACE plots and 391 µmol mol −1 in the ambient plots. Weather conditions The weather conditions during the growing season at each site were shown in Supplementary Figure S1. Statistical analysis All statistical analyses were conducted in statistical analysis software R (Core Team, 2017). 
Yield and components in Tsukuba and Fukuyama were tested by analysis of variance (ANOVA). Pearson's correlation analysis and a test for no correlation in Oonari were conducted using five sets of data obtained from Tsukuba, Fukuyama and Nagano. FACE experiment data were analyzed by paired t-test (p < 0.05). Yield and its components between Oonari and Takanari at three sites The whole and filled brown rice yields of Oonari were ranged from 983 to 1053, and from 908 to 1016 g m −2 , respectively, in Tsukuba and Fukuyama (Table 2). These brown rice yields of Takanari were ranged from 905 to 1068, and from 884 to 1042 g m −2 , respectively. There were no significant differences in whole and filled brown rice yields between Oonari and Takanari in Tsukuba and Fukuyama in each year (Table 2; Supplementary Tables S1 and S2). Among yield components, Oonari had a significantly higher panicle number m −2 and a significantly lower spikelet number per panicle than Takanari in Tsukuba and Fukuyama. The grain yield and its components in Nagano showed a similar tendency as in Tsukuba and Fukuyama (Table 2). In Tsukuba, Oonari had more total spikelets m −2 than Takanari. Furthermore, Oonari had a significantly lower percentage of filled spikelets, but a significantly higher 1000-grain weight than Grains thicker than 1.6 mm in Tsukuba and Nagano and 1.8 mm in Fukuyama were considered as filled brown rice and were counted to calculate 1000-grain weight. Yield and 1000-grain weight are presented at a water content of 15%. The percentage of filled spikelets was calculated as the number of filled spikelets divided by the total number of spikelets. Takanari. On the other hand, in Fukuyama, there was no significant difference in total spikelet number between cultivars. In addition, Oonari had a significantly lower percentage of filled spikelets than Takanari, but there was no significant difference in 1000-grain weight between cultivars. Effects of weather on yield and its components among sites of Oonari The mean air temperature during the growing periods of Oonari was lower in Nagano than in Tsukuba and Fukuyama (Table 3; Supplementary Figure S1). The solar radiation in June was higher in Nagano than in Tsukuba and Fukuyama, whereas that in August was higher in Tsukuba and Fukuyama in 2013 than in other sites and years (Supplementary Figure S1). The cumulative solar radiation before heading was higher in Nagano than in Tsukuba and Fukuyama, whereas that during the 40 days after heading was highest in Tsukuba and Fukuyama in 2013. There was a significant correlation of whole brown rice yield with cumulative radiation during 40 days after heading, but not with cumulative radiation before heading or with mean air temperature either before heading or during 40 days after heading (Figure1; Supplementary Table S3). Takanari had also the same tendency as Oonari (data not shown). There was a significant correlation of panicle number with mean air temperature before heading, but not with mean air temperature during 40 days after heading or with cumulative radiation either before heading or during 40 days after heading (Figure 2; Supplementary Table S3). There were no significant correlations with spikelet number either. Takanari also had the same tendency as Oonari (data not shown). Effects of increased atmospheric [CO 2 ] on yield and its components of Oonari Under increased [CO 2 ] in FACE, the whole and filled brown rice yields of Oonari were 937 and 900 g m −2 , respectively. 
There were no significant differences in yields or yield components of Oonari (Table 4). However, whole and filled brown rice yields tended to be higher (FACE/ambient = 1.17 and 1.16, respectively) and total spikelet number m −2 tended to be larger (FACE/ambient = 1.15) in Oonari under increased [CO 2 ]. Discussion We investigated the yield traits of Oonari at three locations in Japan and to clarify the effects of environment on the yield traits of this high-yielding cultivar. We also investigated the responses of the yield traits to increased [CO 2 ]. The yield of Oonari was as high as that of the highyielding Takanari at all three locations. In a previous report, the whole brown rice yield was higher in Oonari than in Takanari (Kobayashi et al., 2016). Unlike the previous study, we harvested plants using nylon net bags to avoid loss by shattering. We found no significant difference in the whole and filled brown rice yields between Oonari and Takanari (Table 2; Supplementary Tables S1 and S2). We attribute the equally high yield of Oonari to its improved shattering habit. Therefore, we would expect practical yields of Oonari to be higher than those of Takanari when rice is harvested by machine. Among yield components, the number of panicles m −2 was 2-9% larger and the number of spikelets per panicle was 6-9% smaller in Oonari than in Takanari at all three locations (Table 2; Supplementary Tables S1 and S2). Therefore, Oonari differs from Takanari not only in shattering habit but also slightly in these yield-related traits. Grain yield is determined by the complex sink (total number of spikelets per unit area × filled grain weight)source (carbohydrate supply to panicle) balance. The improvement of sink production efficiency would result in the increase in the yield potential in the indicadominant varieties (Yoshinaga et al., 2013). However, in this study, under heavy N fertilization (15-19 kg N m −2 ), total spikelets number was high (>52.6 x 10 3 m −2 ) at all three locations (Table 2; Supplementary Tables S1 and S2). There was no correlation between the number of total spikelets and whole brown rice yield (r = 0.41, Table 2; Supplementary Tables S1 and S2), irrespective of year or location. In addition, cumulative radiation during the 40 days after heading was related to whole brown rice yield of Oonari (Figure 1; Supplementary Table S3). Kobayashi and Nagata (2018) also reported that solar radiation during 20 days after heading had a significant positive correlation with yield of a japonica-dominant high-yielding cultivar, Yamadawara. These results may imply that carbohydrate supply to grains after heading, but not spikelet productivity before heading, is the major factor determining brown rice yield of Oonari, which is indica-dominant cultivars, under heavy N fertilization. We presume that yield can be increased further if the ability to supply carbohydrates to grain after heading could be increased, such as through genetic improvement by breeding or the development of field management methods more suitable for Oonari. Nagano Prefecture is ranked the highest in rice yield per unit area almost every year in Japan (Ministry of Agriculture, Forestry and Fisheries, 2019). Therefore, we expected that the whole brown rice yield of Oonari would be highest in Nagano. Instead, yields were similar at all three locations (Table 2; Supplementary Tables S1 and S2). 
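The correlation analyses referred to here (Pearson's r with a test for no correlation across the five site-year data sets) were run in R; an equivalent sketch in Python is shown below. The two arrays are placeholders standing in for the values behind Figure 1 and Supplementary Table S3.

```python
import numpy as np
from scipy import stats

def correlate(x, y):
    """Pearson's correlation coefficient and the two-sided test of the null
    hypothesis of no correlation, as applied to the five site-year data sets."""
    r, p = stats.pearsonr(x, y)
    return r, p

if __name__ == "__main__":
    # Placeholders: cumulative radiation during 40 days after heading (MJ m^-2)
    # and whole brown rice yield (g m^-2) for five site-years; replace with the
    # values underlying Figure 1 and Supplementary Table S3.
    radiation = np.array([610.0, 585.0, 640.0, 700.0, 655.0])
    yield_whole = np.array([1005.0, 983.0, 1021.0, 1068.0, 1040.0])
    r, p = correlate(radiation, yield_whole)
    print(f"r = {r:.2f}, p = {p:.3f} (n = {radiation.size})")
```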
Among yield components, the number of panicles was smaller, and therefore the number of total spikelets tended to be smaller, in Nagano than in the other regions. There was a significant correlation between panicle number and mean air temperature before heading (Figure 2). This may imply that the tiller number was reduced by the low mean air temperature before heading in Nagano, and so the number of panicles at heading was low. We also investigated the effects of increased [CO 2 ] on the yield traits of Oonari to assess the effects on a high-yielding cultivar, as yield responses to increased [CO 2 ] have shown varietal differences (Hasegawa et al., 2013; Nakano et al., 2017; Shimono et al., 2009). The grain yield of Takanari, with a large sink capacity, increased more in response to increased [CO 2 ] than that of Koshihikari, with a smaller sink capacity (Hasegawa et al., 2013). Our results suggest that the yield of Oonari, with an equally large sink capacity, also increased under increased [CO 2 ] (Table 4). Among yield components, the number of total spikelets m −2 tended to be larger, while the percentage of filled spikelets was similar under increased [CO 2 ]. Thus, the larger sink capacity of Oonari and the improvement of source ability by increased [CO 2 ] increased the whole brown rice yield, as in Takanari. The results of our field experiments to clarify the effects of environment on the yield response of Oonari in different locations lead to the conclusion that the grain yield of Oonari can reach around 1000 g m −2 in various environments, including under higher [CO 2 ], similar to that of the high-yielding Takanari, which is already in production. Our results also suggest that carbohydrate supply to grains after heading is an important factor in achieving the higher yield of Oonari. To attain higher and stable grain yields, further study is needed to clarify the physiological mechanisms underlying the source abilities of Oonari.
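As a companion to the statistical analyses named in the methods above (ANOVA for the site comparisons, Pearson's correlation for the weather relationships, and a paired t-test for the FACE data), the sketch below shows how such tests can be run. The yield and radiation values are hypothetical placeholders, not the study's data, and SciPy is assumed as the analysis library.

```python
# Hypothetical illustration of the tests named in the statistical-analysis
# paragraph above; the numbers below are placeholders, not the study's data.
import numpy as np
from scipy import stats

# Pearson correlation: whole brown rice yield vs. cumulative radiation
# during the 40 days after heading (five site-year pairs, as in the text).
radiation_40d = np.array([620.0, 680.0, 710.0, 750.0, 820.0])   # MJ m^-2 (assumed)
yield_whole = np.array([940.0, 975.0, 990.0, 1020.0, 1050.0])   # g m^-2 (assumed)
r, p_corr = stats.pearsonr(radiation_40d, yield_whole)
print(f"Pearson r = {r:.2f}, p = {p_corr:.3f}")

# Paired t-test for the FACE comparison (ambient vs. elevated CO2),
# pairing observations made under both conditions.
ambient = np.array([800.0, 790.0, 810.0, 805.0])                # g m^-2 (assumed)
elevated = np.array([930.0, 915.0, 945.0, 950.0])               # g m^-2 (assumed)
t_stat, p_face = stats.ttest_rel(elevated, ambient)
print(f"paired t = {t_stat:.2f}, p = {p_face:.3f} (alpha = 0.05)")
```

An ANOVA across sites or cultivars would be run analogously, e.g., with scipy.stats.f_oneway or a linear model in statsmodels.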
v3-fos-license
2021-07-16T06:16:33.150Z
2021-07-15T00:00:00.000
235906274
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.analchem.1c00857", "pdf_hash": "70d69b57f671dd496c8e9603275da548c9b2cd08", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:265", "s2fieldsofstudy": [ "Chemistry", "Engineering" ], "sha1": "5fff85ee9ec205251654e0fff395768a2e6f5036", "year": 2021 }
pes2o/s2orc
Emission Intensity Readout of Ion-Selective Electrodes Operating under an Electrochemical Trigger We report for the first time on in situ transduction of electrochemical responses of ion-selective electrodes, operating under non-zero-current conditions, to emission change signals. The proposed novel-type PVC-based membrane comprises a dispersed redox and emission active ion-to-electron transducer. The electrochemical trigger applied induces a redox process of the transducer, inducing ion exchange between the membrane and the solution, resulting also in change of its emission spectrum. It is shown that electrochemical signals recorded for ion-selective electrodes operating under voltammetric/coulometric conditions correlate with emission intensity changes recorded in the same experiments. Moreover, the proposed optical readout offers extended linear response range compared to electrical signals recorded in voltammetric or coulometric mode. I on-selective membranes (ISMs) were developed for potentiometric (open-circuit) sensors intended mostly for clinical/environmental applications. Nowadays, ISMs are also of interest for other applications, e.g., decentralized ion sensing for diagnostic or wearable applications 1 and electrolyte-gated transistors. 2 Due to the high selectivity offered, ISMs are attractive for sensors operating under controlled current or potential in voltammetric/coulometric/chronopotentiometric mode of "redox-inactive" ion selective sensing. 3−5 For these applications, two layers of sensors are typically applied, with the ISM layer coated on an electroactive transducer. An electrochemical trigger induces a redox process in the transducer, ion exchange with the ISM, ion transport in the phase, and ion exchange on the ISM−sample interface. Ultimately, selective incorporation of analyte ions to the membrane phase results in analytical, electrochemical signals. Although potentiometric methods may offer low detection limits, the non-zero-current methods offer significant advantages for sensing but also reveal challenges due to the high resistive nature of ion-selective membranes. Alternatively, the optical readout mode herein proposed offers a possibility to overcome these issues, ultimately to offer a higher sensitivity or to extend an analytically useful range. Voltammetry/coulometry using ion-selective membranes can be foreseen as a descendant of ion-transfer voltammetry at immiscible liquid interfaces. 6 Different modes of controlled current/potential approaches offer different benefits. The voltammetric mode of ion-selective electrode application allowed the detection of perchlorate, potassium, or ammonium ions at a nanomolar concentration level. 7,8 On the other hand, the chronopotentiometric approach is useful to control ion fluxes in the ISM, leading to a lower detection limit of the sensor. 5 A significant increase in sensitivity is offered by the constant-potential coulometry approach. 4,9 Extraction of ions from a thin solution layer using an ion-selective electrode operating in coulometric mode allows improving sensitivity and selectivity without the need for frequent recalibration of sensors. 10 Electrochemical trigger-based sensing is also explored in ISM-based transistors using different configurations applied to the membrane on the gate 11 or on the conducting polymer channel. 2 Non-zero-current ISM applications typically require the presence of a relatively thin film to reduce the resistance of the system. 
4,12 Most often, polyoctylthiophene (POT), a redox- and optically active polymer, is applied as the transducer. 3 In its neutral, semiconducting form, POT is characterized by bright emission, whereas for the oxidized polymer, emission is significantly quenched. 13 This effect is observed for either films or nanoparticles 14 (although the emission spectra are different). The emission mode is highly sensitive to polymer redox state changes, 13,15 offering the possibility of optical readout of processes occurring in systems operating under an electrochemical trigger, an approach not yet explored, to the best of our knowledge. Even under zero-current potentiometric conditions, the optical readout of ISM signals was considered only for chromoionophore-containing H + sensors to study undesired coextraction of anions 16,17 but not as an alternative readout mode. On the other hand, spectroelectrochemistry, understood as combining the optical readout with an electrochemical trigger, was successfully applied, e.g., to study processes occurring in conducting polymers belonging to the polythiophene family or other optically active systems 15,18,19 (in the absence of ISMs). The novel idea of this work is to explore the emission change of POT, corresponding to the redox transition of the polymer dispersed within the ISM forced by an applied trigger, as a signal of the electrochemical reaction occurring (Figure 1). The application of a composite material as a membrane is attractive as (i) it allows elimination of spontaneous partition of POT to the membrane phase, 20 offering full control of membrane composition. (ii) In consequence, effects related to interactions between POT and the ionophore/ion exchanger are controlled, too. 21 (iii) A composite membrane with particulates of POT dispersed within the plasticized PVC matrix also seems to be an attractive alternative to assure a larger contact area between the conducting polymer and the ion-selective membrane, which is proven to be an important factor for sensors operating under coulometric conditions. 22 We propose an emission readout of ion-selective sensors operating under non-zero-current conditions benefiting from a novel-type fluoroelectrochemical ion-selective membrane (FE-ISM). As a model system, potassium-selective sensors were studied. Preparation of Potassium-Selective FE-ISE Sensors. FE-ISE electrodes were obtained by drop-casting of a cocktail on the surface of carbon paper (Toray Carbon Paper, PTFE-treated, Alfa Aesar, 200 μm thickness) (Figure S1). The surface area of the working electrode was limited to 0.137 cm 2 by Teflon tape, and electrical contact was provided by a copper tape. The FE-ISE cocktail used contained (% by weight) 4.9% of POT, 2% of NaTFPB, 9.8% of valinomycin, 22% of PVC, and 61.3% of DOS. If not stated otherwise, 40 μL of the cocktail was applied per electrode, while for some experiments, thinner membranes were used, obtained by casting 15 μL of the cocktail. The estimated thickness of the fluoroelectrochemical ion-selective membranes (by an IP54 digital micrometer, 1−2″/25−50 mm) was 50 ± 2 μm and 78 ± 2 μm (n = 3, in both cases) for 15 or 40 μL of cocktail applied, respectively. A total of 46 mg of membrane components was dissolved in 1 mL of THF. The mole ratio of membrane components was 10.9:1:3.8 for POT (monomer units):NaTFPB:valinomycin. Before experiments, the fluoroelectrochemical ion-selective membranes were conditioned for 1 h in a 10 −3 M solution of KCl.
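A quick consistency check of the reported mole ratio can be made from the weight percentages above. The molar masses used below are nominal literature values assumed for this sketch (taken per 3-octylthiophene unit for POT), so the result is only approximate and is not taken from the paper.

```python
# Rough check of the stated POT:NaTFPB:valinomycin mole ratio (10.9:1:3.8)
# from the weight percentages above; molar masses are nominal assumed values.
wt_percent = {"POT": 4.9, "NaTFPB": 2.0, "valinomycin": 9.8}   # % by weight
molar_mass_g_mol = {
    "POT": 196.4,          # per 3-octylthiophene unit (assumed)
    "NaTFPB": 886.2,       # assumed
    "valinomycin": 1111.3, # assumed
}

moles = {k: wt_percent[k] / molar_mass_g_mol[k] for k in wt_percent}
reference = moles["NaTFPB"]
ratio = {k: round(v / reference, 1) for k, v in moles.items()}
print(ratio)  # roughly {'POT': 11.1, 'NaTFPB': 1.0, 'valinomycin': 3.9}
```

The outcome lands close to the stated 10.9:1:3.8, which is consistent with the ratio being expressed per polymer monomer unit.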
Fluoroelectrochemical ion-selective membranes were characterized by the following electrochemical techniques: cyclic voltammetry, chronoamperometry, and impedance spectroscopy, and by an optical technique, namely, fluorimetry. Simultaneous electrochemical and fluorimetric experiments were carried out in a disposable cuvette using a conventional three-electrode setup with Ag/AgCl (3 M KCl) as the reference electrode, a Pt wire (1 mm diameter) as the counter electrode, and the FE-ISE as the working electrode. Emission spectra were recorded in the range from 600 to 800 nm after excitation at 550 nm; to observe intensity changes in time or with applied voltage, emission measurements at 720 nm were chosen. The excitation and emission slits were 10 nm, while the fluorimeter detector voltage was maintained at 750 V. Cyclic voltammetry experiments were performed in KCl solutions of concentrations from 10 −1 to 10 −6 M in a potential range from 0 to 1.2 V at a scan rate of 5 mV s −1 . The emission increase onset potential, E EIO , was obtained as the intersection of the two extrapolated linear portions of the emission vs applied potential dependence: the part where emission is independent of the applied potential and the part where an abrupt increase in emission is observed during cathodic polarization half-scans (Figure S2). FE-ISE, carbon paper, and glassy carbon (0.07 cm 2 ) electrodes were also tested by cyclic voltammetry in 0.01 M K 4 [Fe(CN) 6 ] dissolved in 0.1 M KCl aqueous solution in the same potential range as above. Chronoamperometric measurements were carried out for KCl solutions of the concentrations given above with sequential pulses at 1.2 V (regeneration−oxidation of POT) and at 0.2 V (signal−reduction of POT) with different pulse duration times. The charge passed in the experiment was calculated by integration of the current over time. AC impedance spectra were collected over a wide frequency range (0.01−100,000 Hz) at a bias of 0.3 V and an amplitude of 50 mV. ■ RESULTS AND DISCUSSION The new type of fluoroelectrochemical ion-selective membrane proposed here comprises a tailored composition of POT together with an ionophore (L) and a cation exchanger (R − ) in a plasticized PVC matrix, resulting in a single layer. The emission spectra recorded in 0.1 M KCl are shown in Figure S3. Two maxima present in the spectra at ca. 670 and 720 nm, with intensities dependent on the applied potential, are characteristic of nanostructures of POT 14 formed in the PVC matrix. 20 The thickness of the FE-ISE was estimated to be close to 78 μm. The resistance of the prepared sensor, estimated from chronopotentiometric experiments performed in 0.1 M KCl using a cathodic/anodic current equal to 10 −8 A, was close to 2.5 × 10 5 Ω (Figure S4A). This experiment also shows an almost linear dependence of the potential on time, with a small curvature pointing to diffusion limitations across the membrane resulting from transport of either ions or ionophores. In the case of slow diffusion of the ionophore from the membrane bulk to the membrane/solution interface (where the ionophore interacts with potassium ions entering the membrane), a decrease in the free ionophore concentration in the surface layer can occur. Under galvanostatic conditions, the surface concentration of the ionophore, c(0, t), can be estimated from eq 1: 23 c(0, t) = c 0 − 2I√t/(nFA√(πD)) (1), where c 0 is the free ionophore concentration in the membrane bulk (ca. 0.08 M), I is the applied current (1 × 10 −8 A), t is the time (30 s), D is the diffusion coefficient of the ionophore in the membrane (2 × 10 −8 cm 2 /s), 24 and the other symbols have their usual meaning. The result of the calculation shows that ionophore depletion close to the membrane surface, under the above conditions, is negligibly small (below 0.1%).
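The depletion estimate just quoted can be reproduced numerically. The sketch below assumes the standard semi-infinite linear diffusion (Sand-type) expression written above and plugs in the values given in the text; it returns roughly 0.04%, consistent with the stated "below 0.1%".

```python
# Numerical check of the ionophore-depletion estimate above, assuming the
# standard semi-infinite linear diffusion (Sand-type) expression
#   c(0, t) = c0 - 2*I*sqrt(t) / (n*F*A*sqrt(pi*D)).
import math

c0 = 0.08           # mol/L, free ionophore concentration in the membrane bulk
current = 1e-8      # A, applied current
t = 30.0            # s
D = 2e-8            # cm^2/s, ionophore diffusion coefficient in the membrane
A = 0.137           # cm^2, electrode area
n, F = 1, 96485.0   # charge number and Faraday constant (C/mol)

depletion_mol_cm3 = 2 * current * math.sqrt(t) / (n * F * A * math.sqrt(math.pi * D))
depletion_mol_L = depletion_mol_cm3 * 1000.0          # 1 L = 1000 cm^3
print(f"surface depletion ≈ {100 * depletion_mol_L / c0:.2f}% of c0")  # ≈ 0.04 %
```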
An exemplary impedance spectrum of the sensor is shown in Figure S4B. It represents a high-frequency semicircle, suggesting a membrane resistance close to 1.5 × 10 5 Ω, and Warburg impedance behavior for lower frequencies. The slightly lower resistance obtained from this experiment (compared to chronopotentiometry, Figure S4A) may be explained by ion concentration polarization in the membrane under galvanostatic conditions, resulting in an apparent increase in membrane resistance. 25 The Warburg impedance confirms the diffusional limitations in ion transfer in the membrane suggested by the chronopotentiometric experiments (described above). Assuming the determined resistance and a current in the range of 10 −8 A, as used in chronopotentiometry, the estimated ohmic drop across the membrane is on the order of a single millivolt. Therefore, migration effects are of minor significance for ion transfer in the membrane under these conditions, and diffusion is the predominant mode of transport. The electrochemical properties of the FE-ISE are similar to those of typical ISMs, and the presence of POT in the membrane does not make the layer sensitive to solution redox systems ([Fe(CN) 6 ] 3−/4− ) (Figure S4C). The principle of the fluoroelectrochemical approach can be summarized by the reaction given as eq 2, where M + is the analyte; e − is the electron; n, m, and z are stoichiometric coefficients; POT + represents the oxidized (quenched) polymer backbone and POT 0 the neutral (emissive) polymer backbone; and FE-ISE denotes the membrane phase. Reduction of POT + requires incorporation of cations to compensate the charge changes in the membrane, and the POT 0 generated contributes to the recordable increase in emission. The process is reversible: formation of POT + in the membrane requires expulsion of analyte ions and results in an emission decrease. The redox process of POT present in the membrane, as shown by eq 2, is dependent on the applied trigger and the primary ion concentration, leading to a change in the emission of the system quantitatively corresponding to the electrochemical process occurring. Thus, the herein proposed approach allows translation of the electrical signal, related to selective exchange of primary ions with the solution, into a high-sensitivity optical signal. Cyclic voltammograms (CVs) recorded for the FE-ISM in KCl solutions are shown in Figure 2A. Although the FE-ISM was relatively thick compared to those typically used in ion-selective membrane voltammetric experiments, e.g., 3 a peak attributed to cation incorporation into the membrane was formed on the cathodic scan at E CAT . E CAT shifts to lower values with decreasing electrolyte concentration, with the slope of E CAT versus the logarithm of the KCl concentration (log C KCl ) equal to 56.4 ± 3.6 mV/dec (for the range 10 −5 −10 −1 M, R 2 = 0.988), i.e., Nernstian within the range of experimental error (Figure 2B). Thus, the herein proposed FE-ISE of nearly 80 μm thickness can also be useful in purely electrochemical studies as an alternative to thin ion-selective membranes, which are prone to deterioration.
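The Nernstian-slope analysis reported above for E CAT (and, later in the text, for E EIO) amounts to a linear regression of a characteristic potential against log10 of the concentration. The sketch below illustrates that step with hypothetical potentials, not the published data; SciPy is assumed.

```python
# Illustration of the calibration-slope analysis: linear regression of a
# characteristic potential (here a stand-in for E_CAT) against log10 of the
# KCl concentration. Potentials are hypothetical, for illustration only.
import numpy as np
from scipy import stats

conc_mol_L = np.array([1e-5, 1e-4, 1e-3, 1e-2, 1e-1])
e_cat_V = np.array([0.42, 0.48, 0.53, 0.59, 0.65])   # assumed values

fit = stats.linregress(np.log10(conc_mol_L), e_cat_V)
print(f"slope = {1000 * fit.slope:.1f} mV/dec, R^2 = {fit.rvalue ** 2:.3f}")
# A slope near 59 mV/dec at 25 °C would indicate Nernstian behaviour for a
# monovalent cation such as K+.
```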
Oxidation and reduction of POT in the FE-ISM can also be observed as a change in the emission intensity (Figure 2C,D). An initially applied potential results in some increase in emission (Figure 2D); however, the onset of the oxidation current increase corresponds to an abrupt decrease in emission intensity due to the decrease in the number of neutral polymer backbones in favor of POT + formation. The magnitude of the recorded current and emission signal for 0.1 M KCl decreases for potentials close to or higher than 1 V. On the reverse scan, an initially small increase in emission is observed, followed by an abrupt increase for potentials lower than that of the cathodic peak. It should be stressed that both the current and the emission response were reproducible (Figure 2D). Comparison of emission changes recorded for different KCl concentrations (Figure 2B) clearly confirms the ability of the emission approach to follow changes occurring in response to an applied potential trigger. In cathodic scans, both E CAT and the emission increase onset potential (E EIO ) move toward lower values with decreasing electrolyte concentration. The linear range of E EIO vs log C KCl is shifted to lower concentrations compared to the E CAT dependence, and it covers the range from 10 −2 to 10 −6 M, with a slope of 65.0 ± 6.9 mV/dec (R 2 = 0.967). These results clearly show that the emission readout offers advantages over current signals at low concentrations, which can be attributed to the high sensitivity of the emission approach in general and to the independence of the recorded fluorimetric signals from the ohmic drop related to the ion-selective membrane and/or the dilute sample solution. Sample concentration change also affects the emission values read at 0 V (the lowest cathodic potential) applied to the FE-ISM. A linear dependence of the emission read at 0 V on log C KCl was obtained within the range from 10 −5 to 10 −2 M (R 2 = 0.991); the formation of reduced, emissive POT 0 is dependent on the electrolyte concentration. On the other hand, at 1.2 V, formation of POT + occurs in the FE-ISM; this process is independent of the analyte concentration, and the emission recorded at this potential is practically independent of log C KCl. The effect of KCl concentration on the positions of the voltammetric peaks on the potential scale is related to the rate-limiting step (rds) in this process. The rds in this case (as confirmed by the EIS results for the low-frequency range showing a Warburg impedance effect in the time domain typical for voltammetry, Figure S4B) is ion diffusion in the membrane. This process is independent of mass transfer phenomena in solution, and thus it is independent of KCl concentration unless the concentration in solution is very low. As expected, for KCl concentrations > 10 −4 M the recorded current magnitude under voltammetric conditions is practically independent of KCl concentration, and it somewhat decreases for the lower concentrations tested (10 −5 and 10 −6 M) (Figure 2A). On the other hand, the process (equilibrium) of ion exchange at the membrane/solution interface is part of the series of processes starting with mass transfer in solution and ending with POT reduction and electron flow in the external circuit. Since the membrane potential is dependent on KCl concentration in a Nernstian manner, the concentration-dependent shift of the peak potential is observed (Figure 2B). The role of diffusion in the membrane as the rds can also be confirmed by comparing the recorded reduction charge with the charge needed to completely reduce the POT present in the membrane.
The experimentally determined charge is only around 1% of the maximal reduction charge of POT, confirming the charge-trapping effect for the polymer particles and only slight reduction of the polymer (Figure 3A). The observed linear dependence of E EIO on the logarithm of KCl concentration can be related to the voltammetric results showing a similar linear dependence of the cathodic peak (E CAT ) on the logarithm of KCl concentration (Figure 2B). The shift of the peak potential observed at different KCl concentrations, or the potential difference between the applied potential and the peak potential, is coupled with charge flow. As shown on the chronopotentiometric curve (Figure S4A), the potential changes almost linearly with time (i.e., it changes almost linearly with flowing charge under galvanostatic conditions). Since the charge is practically linearly dependent on potential and the potential responds to KCl concentration in a Nernstian manner, the reduction charge of POT will also be linearly dependent on the logarithm of KCl concentration. Reduction of POT results in the formation of a neutral form of the polymer, characterized by emission; thus, in the absence of side faradaic reactions, the fluorimetric signal will be a linear function of charge. Taking together all the abovementioned relations between KCl concentration, potential, charge, and emission intensity, the fluorimetric signal is expected to depend linearly on the logarithm of electrolyte concentration, as observed experimentally. The fluoroelectrochemical ion-selective membrane was proven to be selective in both the electrochemical and the optical mode (Figure S5). Figure S5A shows CVs and the corresponding emission changes at 720 nm recorded in NaCl (model interferent) solutions, whereas Figure … lower potentials, as expected 3 for hindered access of interfering ions to the FE-ISE during reduction. On the other hand, the changes of emission at 720 nm accompanying the CVs recorded in interfering ion solutions were significantly smaller for Na + , Ca 2+ , or Mg 2+ compared to those observed in potassium ion solutions (Figure S5B). E CAT , E EIO , and the emission (at 720 nm) recorded at 0 V or at 1.2 V were only slightly dependent on changes in the logarithm of the concentration of interfering ions (Figure S5E,F) compared to the effect of KCl concentration change shown in Figure 2B. These results clearly confirm the high selectivity of the herein proposed FE-ISE, using both electrochemical and optical readouts of the generated signals. The coulometric readout of ISEs 4 requires a redox reaction of POT, opening the possibility of emission signal recording. Figure 3A shows exemplary changes of the current recorded for repeatedly applied potentials corresponding to reduction (signal generation, 0.2 V) and oxidation (regeneration of the FE-ISE, 1.2 V), and the corresponding changes of the emission read at 720 nm. Clearly, emission signal stabilization is quicker, especially at lower potentials, where the process is controlled by incorporation of analyte cations from the solution into the membrane phase. For regeneration of the membrane at higher potentials, repulsion of ions from the FE-ISE is, as expected, a slower process. It should be stressed that repeated polarization of the sensor at 0.2 and 1.2 V results in similar (within the range of experimental error) changes and ultimate values of the recorded current/emission. Figure 3B−D shows the dependence of the charge (integrated current recorded at 0.2 V) and the emission value plotted as a function of log C KCl.
Figure 3B shows that the change of concentration does not significantly affect the charges recorded within the range from 10 −1 to 10 −4 M; however, for lower concentrations, pronounced changes are observed. On the other hand, the recorded emission signal was linearly dependent on log C KCl within the whole tested range from 10 −6 to 10 −1 M (R 2 = 0.984). This clearly shows that the emission signal advantageously allows a significant extension of the linear response range. This effect is ascribed to the high sensitivity of POT emission to alterations of the polymer redox state. Extending the duration of the potential pulses resulted in somewhat different response patterns and higher sensitivity in both the coulometric and the emission mode (Figure 3C); however, it did not affect the linear range of the emission dependencies: linear responses were obtained within the whole range 10 −6 −10 −1 M KCl (R 2 = 0.996). The broader linear range of the signal vs logarithm of concentration dependence of the emission approach was proven for a thinner FE-ISE (Figure 3D). The emission readout resulted in a linear relation (Figure 3D). The observed difference in the dependence of charge on the logarithm of KCl concentration for thinner and thicker membranes may be related to the shape of the chronoamperometric curve, with a relatively high current just after potential pulse application, which may lead to some errors in charge calculation (integration). In conclusion, a novel signal transduction of ion-selective sensors operating under an electrochemical trigger is presented. We have demonstrated for the first time the application of emission changes of the dye embedded within the membrane as an alternative to the electrochemical signal under the conditions of voltammetric or coulometric experiments. The emission correlates with the electrochemical responses observed under non-zero-current conditions, offering lower detection limits and broader linear response ranges under coulometric conditions. The new emission readout principle can be extended to a wide range of ion-selective systems under various non-zero-current electrochemistry conditions. The Supporting Information (PDF) includes: the scheme of the electrode used, the emission increase onset potential determination method, emission spectra of the fluoroelectrochemical ion-selective membrane recorded at open circuit and while applying oxidizing or reducing potentials together with the corresponding chronoamperometric dependencies, chronopotentiometric studies and impedance spectra of fluoroelectrochemical ion-selective membranes applied on glassy carbon or carbon paper substrates, the effect of the presence of interfering ions on cyclic voltammograms, and emission changes recorded for the FE-ISE.
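In the coulometric mode discussed above, the analytical quantity is the charge obtained by integrating the chronoamperometric current over each pulse. The sketch below illustrates that integration step on a simulated transient; the decay constant and baseline are arbitrary assumptions, not measured values.

```python
# Sketch of the charge calculation used in the coulometric readout: integrate
# the chronoamperometric current over a pulse. The transient is simulated
# (exponential decay plus a small baseline), not measured data.
import numpy as np

t_s = np.linspace(0.0, 60.0, 601)                   # one 60 s potential pulse
i_A = 2e-6 * np.exp(-t_s / 10.0) + 5e-8             # assumed current transient

charge_C = np.trapezoid(i_A, t_s)                   # use np.trapz on older NumPy
print(f"charge passed ≈ {charge_C * 1e6:.1f} µC")
```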
v3-fos-license
2014-10-01T00:00:00.000Z
2012-04-10T00:00:00.000
16868273
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/archive/2012/154283.pdf", "pdf_hash": "f4c7b153a645654ccf254d2a5ddbcbb9582376e4", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:267", "s2fieldsofstudy": [ "Biology" ], "sha1": "63028b33adb110e744c561e62997216f08d22fa3", "year": 2012 }
pes2o/s2orc
RASSF1A Signaling in the Heart: Novel Functions beyond Tumor Suppression The RASSF proteins are a family of polypeptides, each containing a conserved Ras association domain, suggesting that these scaffold proteins may be effectors of activated Ras or Ras-related small GTPases. RASSF proteins are characterized by their ability to inhibit cell growth and proliferation while promoting cell death. RASSF1 isoform A is an established tumor suppressor and is frequently silenced in a variety of tumors and human cancer cell lines. However, our understanding of its function in terminally differentiated cell types, such as cardiac myocytes, is relatively nascent. Herein, we review the role of RASSF1A in cardiac physiology and disease and highlight signaling pathways that mediate its function. Introduction The Ras association domain family (RASSF) consists of 10 members: RASSF1-10. Additionally, splice variants of RASSF1, 5 and 6 have been identified [1]. Importantly, all isoforms contain a Ras association (RA) domain either in their C-terminal (RASSF1-6) or N-terminal (RASSF7-10) regions [2]. To date, no known catalytic activity has been described for this family, and the general consensus supposes that RASSF proteins function as scaffolds to localize signaling in the cell. Accordingly, protein-protein interactions are critical in mediating their biological functions. RASSF1 isoform A (RASSF1A) is the most characterized member of the RASSF family. This paper will focus primarily on RASSF1A and its role in cardiovascular biology. RASSF1A RASSF1A was first identified and described by Dammann et al. in 2000 [3]. The RASSF1 gene encodes multiple splice variants, including the two predominant isoforms, RASSF1A and C. The RASSF1A isoform is the longest variant of the RASSF1 gene. Structurally, RASSF1A is a product of exons 1α, 2α/β, 3, 4, 5, and 6, while RASSF1C consists of exons 2γ, 3, 4, 5, and 6. Both isoforms contain a C-terminal RA domain; however, RASSF1A has an additional C1 domain that is not present in RASSF1C. The RASSF1 gene is located on Chr3p21.3 [3]. This short arm of chromosome 3 is known to exhibit loss of heterozygosity in many tumor models and is thought to harbor tumor suppressor genes. As the literature has shown, RASSF1A fits this description. The RASSF1A promoter contains a CpG island that shows a high frequency of hypermethylation in tumors, thereby silencing RASSF1A expression in many human cancers including lung, breast, ovarian, renal, and bladder [4][5][6][7]. RASSF1A expression is also lost in numerous cancer cell lines, while RASSF1C expression is seemingly unaffected [4]. Interestingly, recent work suggests that RASSF1C may actually promote tumor progression [8,9], further distinguishing these two splice variants. All RASSF proteins have an RA domain, which is thought to necessitate their binding to activated, GTP-bound Ras proteins. While RASSF5 (Nore1) is thought to bind Ras directly, whether RASSF1A is able to associate with Ras is less clear. It has been shown that RASSF1A binds K-Ras in vitro [10], and an interaction between ectopically expressed RASSF1A and activated K-Ras has been observed in HEK293 cells [11,12]. However, other work has found that this interaction only occurs in the presence of Nore1, arguing for an indirect association [13]. Importantly, to our knowledge, there are no reports demonstrating the interaction of endogenous RASSF1A and Ras proteins. RASSF1A has several key biological functions typical of tumor suppressor proteins. 
It has been implicated in the negative regulation of cell cycle progression, cell proliferation, and cell survival [2]. RASSF1A has been shown to localize to microtubules of proliferating cells, increasing microtubule stability and inhibiting cell division [14,15]. This may be mediated through direct binding or though interaction with microtubule-associated proteins such as C19ORF5 [16]. RASSF1A has also been shown to inhibit proliferation by inhibiting the accumulation of cyclin D1 and arresting cell division [17,18]. RASSF1A also promotes apoptosis, which can reportedly occur through multiple mechanisms and is likely cell-type dependent. One mechanism that mediates the apoptotic function of RASSF1A involves protein interaction with modulator of apoptosis-1 (MOAP-1 or MAP-1) [19]. MOAP-1 is normally sequestered in an inactive form in healthy cells. Upon death receptor stimulation, RASSF1A binds MOAP-1, causing its activation and subsequent association with Bax, which leads to apoptosis [19]. Previous work has also demonstrated enhancement of RASSF1A/Mst-mediated cell death by the scaffold CNK1 [20]. RASSF1A and Hippo Signaling. RASSF1A can also elicit inhibitory effects on growth and survival through engagement of the Hippo pathway. The Hippo signaling pathway is a highly conserved kinase cascade that was originally discovered in Drosophila and has been shown to be a critical regulator of cell proliferation, survival, and organ growth [21]. Three members of this pathway, dRASSF, Salvador and Hippo, contain the SARAH (Salvador-RASSF-Hippo) domain, which is conserved in its mammalian counterparts RASSF1-6, WW45, and Mst1/2, respectively [22]. The SARAH domain is critical for homo-and heterodimerization between components [23][24][25][26][27]. While the Drosophila ortholog dRASSF is known to antagonize Hippo activation in the fly [28], it has been demonstrated that RASSF1A promotes phosphorylation and activation of Mst 1/2 by inhibiting the phosphatase PP2A in mammalian systems [29,30]. The biological relevance of RASSF1A-mediated activation of Hippo signaling has also been investigated. Matallanas et al. reported a RASSF1A-Mst2-Yap-p73-PUMA signaling axis that promotes apoptosis in mammalian cells [31]. Hippo signaling is also important for maintaining intestinal homeostasis and tissue regeneration in response to injury. Mouse models with conditional disruption of either Mst1/2 or Sav1 in the intestinal epithelium displayed hyperactivation of Yes-associated protein (Yap), increased intestinal stem cell (ISC) proliferation, and increased polyp formation following dextran sodium sulfate (DSS) treatment [32,33]. Similarly, loss-of-function mutations of Hippo components in the fly midgut caused increased ISC proliferation [34]. These findings suggest that perhaps Hippo signaling serves a more global role in regulating organ integrity, structure, and response to injury, and that perturbation of this pathway can lead to aberrant growth and dysfunction. Cardiovascular Function of RASSF1A In 2005, two independent groups generated and published findings regarding the systemic deletion of the Rassf1a gene variant in mice [35,36]. Both described similar phenotypes involving the spontaneous generation of tumors, particularly in aged mice, thus further supporting the notion that RASSF1A is a bona fide tumor suppressor [35,36]. Not surprisingly, nearly all studies involving RASSF1A to date are related to cancer biology with few reports related to the cardiovascular field. 
RASSF1A is ubiquitously expressed and has been detected in heart tissue [3,37,38]. Initial investigation into the role of Rassf1 gene products in a cardiac context came from the Neyses laboratory [39]. Their findings demonstrated that both RASSF1A and RASSF1C could associate with the sarcolemmal calcium pump, PMCA4b, in neonatal rat cardiac myocytes. This interaction was shown to mediate the inhibition of ERK and subsequent Elk transcription, and suggested the possibility that RASSF1A could modulate cardiac myocyte growth [39]. 3.1. Rassf1a −/− Mice. Five years later, the same group demonstrated that RASSF1A does in fact negatively regulate cardiac hypertrophy in vivo using Rassf1a −/− mice [37]. Although these mice have increased susceptibility to spontaneous tumorigenesis [36], no apparent cardiovascular phenotype was observed under basal conditions, that is, no differences in heart size, morphology, or function compared to WT. However, when Rassf1a −/− mice were challenged with pressure overload, they responded with an exaggerated hypertrophic response, evidenced by significantly greater increases in heart weight/body weight and hypertrophic gene expression (ANP, BNP, β-MHC). Cardiac myocytes of Rassf1a −/− mice were significantly larger, which explains the augmented heart growth. Chamber dilation of Rassf1a −/− mouse hearts was observed by echocardiography, consistent with eccentric hypertrophic remodeling. Hemodynamic analysis of WT and Rassf1a −/− mice showed a rightward shift in PV loops following pressure overload in Rassf1a −/− hearts, yet dP/dt max , dP/dt min , and fractional shortening were not altered in Rassf1a −/− mice compared to WT. To examine RASSF1A function in cardiac myocytes, Oceandy et al. utilized a neonatal rat cardiac myocyte (NRCM) culture and the forced expression of RASSF1A through adenoviral gene transfer [37]. Increased RASSF1A expression inhibited phenylephrine (PE)-induced cardiac myocyte growth and suppressed Raf-1 and ERK1/2 activation by PE treatment. Conversely, both Raf-1 and ERK1/2 phosphorylation were increased in Rassf1a −/− hearts following pressure overload, suggesting negative regulation of MAPK signaling by RASSF1A. Deletion mutants of RASSF1A revealed an important function of the N-terminus of RASSF1A that disrupts the binding of active Ras and Raf-1, thus preventing ERK activation and cardiac myocyte growth. Cardiac Myocyte-Specific Rassf1a Deletion. To better understand the function of RASSF1A in cardiac myocytes in vivo, we crossed genetically altered mice harboring a floxed Rassf1a allele [35] with mice harboring the Cre recombinase transgene driven by the α-MHC promoter. This strategy disrupted endogenous Rassf1a gene expression and ensured cardiac myocyte specificity [38,40]. Similar to the Rassf1a −/− mice, Rassf1a F/F -Cre mice had no obvious baseline cardiac phenotype. Although we also found exaggerated heart growth in the Rassf1a −/− mice in response to pressure overload, the Rassf1a F/F -Cre mice unexpectedly had attenuated hypertrophy, that is, smaller hearts and cardiac myocytes, compared to Rassf1a F/F and α-MHC-Cre controls [38]. Furthermore, Rassf1a F/F -Cre mice had significantly less fibrosis and myocyte apoptosis, and better cardiac function following pressure overload. This was in stark contrast to the Rassf1a −/− mice, which presented significantly more fibrosis and a decline in cardiac function comparable to the levels found in WT mice.
As an alternative approach we also generated two different cardiac-specific transgenic mouse lines: the first expressing wild-type RASSF1A and the second expressing a RASSF1A SARAH domain point mutant (L308P) that renders it unable to bind Mst1 [41]. Interestingly, we found that increased RASSF1A expression in the heart caused increases in Mst1 activation, cardiac myocyte apoptosis, and fibrosis, and led to worsened function following pressure overload. Conversely, RASSF1A L308P TG mice had significant reductions in Mst1 activation, apoptosis and fibrosis, while cardiac function was preserved after stress [38]. These opposing phenotypes strongly implicate Mst1 as a critical effector of RASSF1A-mediated myocardial dysfunction. In cultured NRCMs, increased RASSF1A expression elicited activation of Mst1 and caused Mst1-mediated apoptosis. However, in primary rat cardiac fibroblasts, RASSF1A had a more pronounced effect on inhibition of cell proliferation rather than survival. Indeed, we found that silencing of RASSF1A in fibroblasts caused increased cell proliferation. Additionally, RASSF1A depletion led to an upregulation of NF-κB-dependent TNF-α expression and secretion in cardiac fibroblasts, while no change in IL-1β, IL-6, or TGF-β1 was observed. Through conditioned medium transfer experiments, we demonstrated that TNF-α secretion from fibroblasts promotes cardiac myocyte growth. Furthermore, treatment of Rassf1a −/− mice with a neutralizing antibody against TNF-α was able to rescue the augmented heart growth and fibrosis observed following pressure overload [38]. These data strongly implicated TNF-α as a critical paracrine factor influencing the cardiac myocyte growth response to stress in vivo. This work also demonstrated the cell-type specificity of RASSF1A signaling in the heart and highlighted a novel signaling pathway downstream of RASSF1A/Mst1 that mediates a paracrine effect in vivo (see Figure 1). Figure 1 (caption, in part): RASSF1A can also activate Mst1 to elicit apoptosis; in cardiac fibroblasts, RASSF1A represses NF-κB transcriptional activity and inhibits TNF-α production and secretion, thereby preventing paracrine-mediated hypertrophic signaling between fibroblast and myocyte. This mechanism, involving multiple cell types and paracrine signaling among them, is rather unique and contrasts with more established signaling paradigms of cardiac hypertrophy including calcineurin/NFAT, HDAC/MEF2 and MEK/ERK pathways, which have been elucidated in the cardiac myocyte [42]. Hippo Signaling in the Heart. Our previous work has demonstrated the functional importance of Hippo signaling in the heart. Using genetically altered mouse models we showed that increased expression of Mst1, and subsequent activation of the Hippo pathway, caused increased apoptosis, dilated cardiomyopathy, and premature death [43]. Interestingly, expression of Mst1 also attenuated cardiac myocyte hypertrophy, thereby impairing the heart's ability to appropriately respond to stress. In contrast, expression of a kinase-inactive Mst1 mutant (DN-Mst1) prevented cell death and protected the heart from insult [43]. Lats1/2 kinases (mammalian homologs of Warts) are targets of Mst1/2 that can phosphorylate and inactivate Yap, thereby inhibiting Yap-mediated gene transcription [44]. Similar to our findings related to Mst1, we demonstrated that transgenic expression of Lats2 in the heart led to inhibited growth and worsened function [45].
Conversely, kinase-inactive Lats2 (DN-Lats2) transgenic mice had larger hearts both at baseline and following pressure overload and displayed attenuated cardiac myocyte apoptosis in response to stress [45]. Taken together, these results provide further evidence that activation of Hippo signaling, via increased Mst1 or Lats2 expression, inhibits cardiac myocyte growth and promotes apoptosis in the adult heart. Furthermore, selective inhibition of Hippo signaling in the cardiac myocyte (DN-Mst1 or DN-Lats2 TG) confers protection against insult, similar to what we observed in the cardiac myocyte-specific RASSF1A-deleted mice [38]. However, the hypertrophic response in these two models was opposite, which may result from Hippo-independent pathway(s) downstream of RASSF1A. It should be pointed out that studies of adult mouse models using cardiac myocyte-restricted deletion of Mst1/2, Lats1/2 or Yap have not been published. Findings from these models should be helpful in further elucidating the role of Hippo signaling components in the adult murine heart. Recent work from the Martin laboratory demonstrated the importance of mammalian Hippo signaling during cardiac development and cardiac myocyte proliferation [46]. Conditional deletion of Salvador (Sav1) in the embryonic heart, driven by Nkx2.5-Cre expression, caused increased myocyte proliferation and cardiac enlargement and was mediated by hyperactivation of Yap and subsequent Wnt/β-catenin-regulated gene expression. In a similar vein, direct targeting of Yap expression in the developing mouse heart further demonstrated its role in governing both myocyte proliferation and heart growth [47]. Interestingly, both reports described an interaction between Yap and Wnt signaling, highlighting additional Hippo signaling crosstalk in the heart. Conclusion Fueled by the initial reports described herein, investigation into the role of RASSF1A in cardiovascular biology has begun to accelerate. Yet many questions remain outstanding. Among them, what are the upstream inputs that regulate RASSF1A function? What is the mechanism responsible for RASSF1A cell-type-specific signaling? What are the molecular constituents of the RASSF1A complex? Does RASSF1A have additional Mst1-independent functions in the heart, as has been demonstrated in tumor cell lines [41]? Recent work identified activated K-Ras as a promoter of RASSF1A signaling in colorectal cancer cells [48]. This finding begs the question of whether K-Ras or additional Ras isoforms regulate RASSF1A in other systems and cell types. Based on our findings in Rassf1a-deleted mice [38], we speculate that the difference in proliferative capacity between cardiac myocytes and fibroblasts may explain the distinct effects of RASSF1A signaling in the heart. There may also be differences in the expression or localization of signaling components, thereby modulating their ability to effectively signal in certain cell types. Exposure to diverse signals and cues in the extracellular milieu may also contribute to varied outcomes downstream of RASSF1A. As we continue to elucidate the role of RASSF1A and Hippo signaling in the heart, its importance in cardiac development, physiology, and disease is becoming apparent. Of course, translating these findings into meaningful therapeutic strategies remains the greatest challenge. Our work has shed light on the importance of the cell-type specificity of RASSF1A signaling in determining pathological outcomes [38].
We also defined a paracrine mechanism functioning downstream of RASSF1A in response to cardiac stress [38]. It is likely that additional complexities remain to be uncovered and will ultimately influence possible interventions to manipulate RASSF1A and treat heart disease. RASSF1A signaling is diverse and our knowledge regarding RASSF1A function is rapidly expanding. Given that a bridge from cancer to cardiovascular biology is in place, it is likely that as additional RASSF1A mechanisms of action are discovered, its impact on cardiac biology will continue to grow.
v3-fos-license
2023-12-11T05:04:57.049Z
2023-07-12T00:00:00.000
266149171
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "0a41f9b5cb0eb3f43de9f25354bcb298e68362e0", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:268", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "0a41f9b5cb0eb3f43de9f25354bcb298e68362e0", "year": 2023 }
pes2o/s2orc
Soil moisture and temperature drive emergence delays associated with fire seasonality in eucalypt forests Abstract Many ecosystems are well adapted to fire, although the impacts of fire seasonality and its effect on post-fire recruitment are less well understood. Late summer or autumn fires within eucalypt forests with a Mediterranean-type climate allow for seedling emergence during the cooler and wetter seasons. The emergence and survival after spring fires may be impacted by higher soil temperatures and water stress, delaying recruitment until the subsequent winter period. During this delay, seeds may be exposed to predation and decay, which reduce the viable seed bank. This study examines post-fire recruitment dynamics in a eucalypt forest ecosystem (Northern Jarrah Forest (NJF) of southwestern Western Australia) and whether it may be vulnerable to human-induced changes to fire season. Here, we compare in situ post-fire seedling emergence patterns between autumn and spring burns and account for a potential ecological mechanism driving seasonal differences in emergence by determining the thermal germination requirements of seeds for 15 common species from the NJF. Our results demonstrate that 93% of species had thermal optima between 10°C and 20°C, analogous with soil temperatures measured during the germination window (late April to October). Concurrent in situ post-fire emergence was highest 144 days after an autumn (seasonal) fire, followed by a 10–72% decline. In contrast, there was no emergence within the first 200 days following a spring (aseasonal) fire. We conclude that aseasonal fire in the NJF can lead to a complete delay in recruitment in the first season post-fire, resulting in a lower inter-fire growth period and increasing the potential for further reductions in recruitment through seed predation and decay. The study suggests that aseasonal fire has an immediate and significant impact on initial recruitment in the NJF, but further research is required to determine any longer-term effects of this delay and its implications for fire management in southwestern Western Australia. Introduction Fire is a unique and complex disturbance that has an essential role in driving the evolution of organisms in fire-adapted ecosystems (Keeley et al., 2011).It has been an important evolutionary process in ecosystems for almost as long as terrestrial vegetation has existed (Glasspool et al., 2004).The human manipulation of fire has been an important cultural and practical tool for human ancestors since before the rise of Homo sapiens ∼300 kya (Wrangham and Carmody, 2010).While fire is essential for the survival of many plant species in fire-adapted ecosystems, it is also potentially destructive for the people living in and around them.Modifying the fire regime and fire seasonality as part of fire and forest management may have detrimental ecosystem impacts (Nolan et al., 2021).Modern fire managers have the challenging task of applying and managing an appropriate fire regime to protect both human life and environmental values in fireprone forest ecosystems. 
An appropriate fire regime is essential for the persistence of fire-adapted communities (Ashton and Chinner, 1999;Burrows, 2008;Nolan et al., 2021).Many species in fireadapted ecosystems rely on fire to complete their life cycle, as there are multiple benefits to timing flowering or recruitment following a fire event (Ooi, 2010;Miller et al., 2019).In many fire-adapted ecosystems, prescribed fire is applied to actively manage fire regimes and fuels while balancing the goals of preserving human life and property with the conservation of biological communities (Burrows and McCaw, 2013;McCaw, 2013).To achieve these goals, a well-developed understanding of fire ecology is paramount, as vegetation responses to altered fire regimes can be temporally and spatially variable (Hobbs and Atkins, 1988;Mackenzie et al., 2021).Many communities will therefore have different requirements for the frequency, intensity and seasonality of fire, which together are called the fire regime.Meeting these requirements may become more difficult to manage as rainfall decreases and mean annual temperatures increase due to anthropogenic climate change, which has extended the fire season across southwestern Western Australia (Bates et al., 2008;Burrows and McCaw, 2013).This becomes increasingly challenging when meeting annual forest management spatial-area burning targets set for the purposes of fuel reduction to protect people and property (Burrows, 2008). In fire-adapted ecosystems, seed dormancy release and subsequent germination are closely tied to fire and the post-fire environment (Ferrandis et al., 1999;Tangney et al., 2020b;Mackenzie et al., 2021).Complex dormancy mechanisms ensure that once seed dormancy is overcome, germination can proceed under optimal conditions for emergence and survival (Ooi et al., 2022).Germination is induced by set environmental conditions that are acutely defined by a range of suitable temperature and moisture conditions that maximize emergence success (Alvarado and Bradford, 2002) and, in some cases, further enhance germination through additional germination cues, including light-or smoke-derived chemicals (Baskin and Baskin, 2004). 
The effects of fire seasonality on recruitment may be most pronounced in regions with a strongly seasonal climate like southwestern Western Australia, which has a Mediterranean climate characterized by cool, wet winters after hot, dry summers (Dell et al., 1989;Miller et al., 2019;Tangney et al., 2022).Fire that occurs before the beginning of the cool, wet winters aligns dormancy release and germination requirements, allowing seedlings to emerge and establish over winter (Tangney et al., 2020b), which in the Mediterranean climate of southwestern Western Australia is conducive to seedling survival; there is frequent rain, low temperatures and the flush of inorganic nutrients and reduced competition from the recent fire event (Figure 1A) (Dell et al., 1989;Chambers and Attiwill, 1994).Species with physically dormant (PY) seeds are common in the Northern Jarrah Forest (NJF) and take advantage of these predictable seasons, as fire events and subsequent high soil temperatures release dormancy by breaking the hard seed coat that physically prevents germination (Baskin and Baskin, 2004).When the growing season sets in soon after a late summer or early autumn fire, seeds are nondormant and emerge readily once germination requirements are met.For species from all dormancy classes (including nondormancy-ND, physiological dormancy-PD and morphophysiological dormancy-MPD), it is important for successful recruitment to capitalize on these conditions through rapid seedling emergence, which maximizes growth time before the onset of summer (Ooi, 2010;Miller et al., 2019). Historical fire use by the Noongar people, the traditional owners of southwestern Western Australia, generally occurred between late spring and autumn, from November to early April (Abbott, 2002;Lullfitz et al., 2017;Rodrigues et al., 2022).Lightning-induced wildfires also occurred during this time (Abbott, 2002;McCaw and Read, 2012;Lullfitz et al., 2017).However, due to climate change and modified land use, current prescribed burning practices are extending the fire season into the cooler and historically wetter parts of the year, particularly spring (Figure 1B) (Burrows and McCaw, 2013;Clarke et al., 2013). In this study, we assessed post-fire microsites following an autumn and (aseasonal) spring fire to evaluate the impact of fire season on post-fire germination and recruitment of a variety of common plant species in the NJF.We achieved this in two parts: first, we conducted a laboratory trial aimed at defining the thermal germination niche of common species found within the NJF.Second, a concurrent in situ field study tracked the emergence and survival of seedlings across two field sites, with one site tracking the emergence and survival after an autumn burn and the other tracking the emergence and survival following a spring burn.Further, we collected plot-scale temperature and moisture data at 10 plots within each site.The study asks the fundamental question: Does fire seasonality impact immediate post-fire recruitment?(Baskin and Baskin, 2004). Within this, we consider three objectives: (i) determine the thermal germination niche for seed from 15 study species common to the NJF, (ii) compare the emergence and summer survival of seedlings in the field between autumn and spring burn sites and (iii) examine how the thermal germination niche requirements align with post-fire soil temperature and moisture dynamics. 
Study region The NJF is an ecological community within the Southwest Australian Floristic Region characterized by the codominance of two eucalypts, Eucalyptus marginata (Jarrah) and Corymbia calophylla (Marri). The community runs along the Darling Range east of Perth between the Avon River in the north and Collie in the south (Dell et al., 1989). The NJF is acknowledged as under threat from drought and altered fire regimes (Lawrence et al., 2022) and exists across a gradient of decreasing rainfall from 1300 mm in the southwest to 600 mm in the northeast (Dell et al., 1989). The climate of the NJF is Mediterranean type, a highly seasonal rainfall regime with hot, dry summers after mild wet winters. The NJF has a varied fire history, but the current regime is dominated by low-to-moderate severity fuel-reducing prescribed burns (Dixon et al., 2022) performed largely in spring, which are undertaken to maintain a mean return interval of approximately 11 years (Burrows, 2008). Thermal germination trial A laboratory-based germination trial was undertaken to determine the fundamental thermal germination niche for common species in the NJF. A total of 15 common species considered likely to occur in our study region were selected for the thermal germination trial to represent a range of life forms and dormancy classes. Species with known dormancy release requirements were targeted to ensure unknown dormancy release requirements did not impact the germination trials (Table 1). Seeds of each species were collected from wild plant populations from a minimum of 10 mature plants from within the NJF and then stored at 15 °C and 15% relative humidity until experimental use. Seeds were X-rayed with a Faxitron MX-20 cabinet X-ray system (Hologic, Inc.) to detect whether seeds were filled, and non-filled seeds were discarded as these are non-viable. Once PY and MPD species were dormancy treated, seeds of each species (n = 500) were then surface sterilized in a 2% bleach and 1.3% Tween solution to reduce fungal contamination (Elmer and Stephens, 1988). Following surface sterilization, four replicates of 25 seeds were plated on a 90 mm Petri dish with 0.7% agar and placed under the incubation treatments (… 400-700 nm) and scored three times a week, with germination defined as the emergence of the radicle from the seed. Scoring occurred until there was a 2-week period with no germination, at which point germination was considered complete for that species.
To determine the most suitable model for each species-specific response, we applied a limited suite of non-linear least-squares (nls) models defined in Padfield et al. (2021) to the germination data for each species. Curve-fitting suitability was assessed using the Akaike information criterion (AIC; Wagenmakers and Farrell, 2004), and the lowest-ranking AIC model was chosen for each species to provide the best fit, unless otherwise mentioned (Supplementary Table S1). Visual checks were made for each candidate model to ensure there was minimal overfitting. Once model selection had identified the most suitable model type, models were reconstructed using the minpack.lm package (Elzhov et al., 2022), which provides functionality for bootstrapping within the car package (Fox et al., 2007) in R. We applied residual bootstrapping with 100 iterations for each species-specific model, which allowed us to create species-specific 95% confidence intervals around the modelled parameters, as well as 95% CIs around T opt and T max.

Germination speed for each species in the thermal germination trial was modelled with the 'drm' function in the drc package (Ritz et al., 2016) in R, fitting non-linear functions as outlined in Ritz et al. (2016). Each model fit was tested to establish which model provided the most suitable fit, based upon log-likelihood estimations and the lowest AICc ranking, and the most suitable fit was selected for each species (Supplementary Table S2-1). From these models we estimated germination speed, defined here as the time to 50% germination (T 50) in days (Tangney et al., 2020a); a minimal numerical sketch of this T 50 estimation is given below.

Field trial
To link laboratory models of germination dynamics with in situ emergence patterns following fire, we established sets of plots following two prescribed burns, the first in autumn on 19/04/2021 and another in spring on 16/10/2021, both within the NJF in southwestern Western Australia. Two sites were selected in Korung National Park, approximately 30 km southeast of Perth (Figure 2A), based on their relative location to each other and the timing of applied fire. Further replication from sites across the NJF would have been preferable to ensure that treatment effects were spatially reproducible, but due to time constraints and the limited number of acceptable sites with spring and autumn burns in close proximity, a single autumn site and a single spring site were selected. Average rainfall since 1991 at the nearby Bickley weather station (Lat: −32.01, Lon: 116.14) is 1146.7 mm, most of which falls in winter (Figure 2B) (Bureau of Meteorology, 2022). Both sites are characteristic of NJF communities, dominated by E. marginata with a typical NJF understorey largely composed of Macrozamia riedlei, Xanthorrhoea preissii, and many shrub species from the Fabaceae, Myrtaceae and Proteaceae families (Dell et al., 1989).
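The T 50 estimation referred to under 'Laboratory data analysis' above can be sketched as follows (illustrative values only; the study fitted 'drm' models from the drc package in R, whereas this example fits a three-parameter log-logistic curve with SciPy and takes T 50 as the time to half of the fitted final germination):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative germination proportions scored over a trial (not study data)
days = np.array([3.0, 7.0, 10.0, 14.0, 21.0, 28.0, 42.0, 56.0])
cum_germ = np.array([0.02, 0.10, 0.25, 0.48, 0.70, 0.82, 0.88, 0.90])

def ll3(t, d, e, b):
    """Three-parameter log-logistic curve: d = final germination fraction,
    e = time at which d/2 is reached, b = steepness."""
    return d / (1.0 + (e / t) ** b)

(d, e, b), _ = curve_fit(ll3, days, cum_germ, p0=[0.9, 14.0, 2.0], maxfev=10000)
t50 = e  # time to 50% of the fitted final germination, in days

# An AIC-style score (up to an additive constant) that could be used, as in the study,
# to rank alternative curve forms fitted to the same data
rss = float(np.sum((cum_germ - ll3(days, d, e, b)) ** 2))
n, k = len(days), 3
aic = n * np.log(rss / n) + 2 * k
print(f"final germination ~ {d:.2f}, T50 ~ {t50:.1f} days, AIC (relative) = {aic:.1f}")
```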
Within each burn site, ten 1-m² plots were installed over an area of approximately 1 ha, one month after each burn. During the first visit, all perennial woody and herbaceous seedlings were counted but could not be identified to the species level due to limited identifiable characteristics. Secondary site characteristics were visually estimated and recorded, including char height (the height at which charcoal is present on tree stems), litter cover, resprouter cover and canopy cover. In subsequent months, all perennial woody and herbaceous seedlings within each plot were identified and counted monthly for the four months following the first site visit to track emergence over time. Summer survival was assessed during a final site visit in April 2022, when seedlings were again identified and counted.

To supplement the seedling emergence data and provide information on post-fire microsite suitability, a TMS-4 temperature and moisture logger (TOMST®) was installed in the centre of each plot, ensuring the loggers were in contact with the soil complex. TMS-4 loggers measure temperature at 15 cm above the soil, at the soil surface, and 6 cm beneath the soil surface, as well as soil moisture at 6 cm below the soil surface, measured as volumetric soil moisture % (Wild et al., 2019). As we are interested in the uppermost soil layers, where the bulk of seeds reside (Roche et al., 1998), we calculated median soil temperatures, combining the soil surface temperatures with the temperatures at 6 cm into the soil every 15 minutes. We also recorded soil moisture at each plot, measured at 6 cm into the soil. All measurements started on 08/06/2021 for the autumn burn and 24/11/2021 for the spring burn.

Field data analysis
Following the completion of the post-fire survey, we calculated the mean and median seedling density and the species richness for each burn. Data visualization was performed with the ggplot (Wickham, 2011), ggpubr (Kassambara, 2020) and cowplot (Wilke, 2019) packages for R. Time series data, including logger data, species richness and seedling counts, were analysed with the R package zoo (Zeileis and Grothendieck, 2005) to aid visualization. Due to the complete absence of germination in any of the spring plots, formal statistical comparisons between spring and autumn plots would have been redundant and were not undertaken.

Thermal germination trial
The 15 species showed a range of responses to incubation temperature, and two broad groups emerged (Figure 3, Supplementary Table S1). The first group is characterized by having a T opt close to their T max (Figure 3A) and includes species such as Anigozanthos manglesii, Calothamnus sanguineus and Acacia alata. Bossiaea ornata has an estimated T opt of 20.0°C and a T max only 3.2°C higher (Figure 3A). The other three Bossiaea species demonstrated some of the lowest T opt temperatures (ranging from 5.7°C to 12.1°C), but all had a T max more than 6°C higher than their T opt (Figure 3B). These three Bossiaea species are indicative of the second group of species, those with a wider temperature difference between their T opt and their T max. This second group included both dominant tree species of the NJF, C. calophylla and E. marginata (Figure 3B). The seeds from A. manglesii, C. calophylla and E. marginata all displayed high germination over a wide range of temperatures.
C. calophylla had very high germination up to 25°C, but dropped to 46% germination at 30°C, resulting in an estimated T max of 29.7°C (95% CI, 28.9°C-30.0°C), the highest estimated T max in this dataset, while E. marginata and A. manglesii displayed a T max of 26.3°C and 29.5°C, respectively.

All seeds germinated quickest around their T opt, but there was significant variation in germination speed across species, with Myrtaceae (ND) and Haemodoraceae (MPD) species generally germinating quicker than Fabaceae (PY) species (Supplementary Table S2-2). Nine of the 11 Fabaceae species germinated at a slow rate over a period of 40-60 days at all temperatures, resulting in longer T 50 estimates than most Myrtaceous species. The two Acacia species and Kennedia coccinea had a fast T 50 under temperatures suitable for optimum germination (A. alata = 9.9 days ± 0.3, A. pulchella = 8.3 ± 0.2, K. coccinea = 11.6 ± 0.5 at 15°C), but under supra-optimal germination temperatures (≥20°C) germination speed declined significantly (A. alata = 89.6 ± 0.3, A. pulchella = 24.3 ± 1.3, K. coccinea = 72.7 ± 6.5). E. marginata took 12.4 days at 15°C and 17.1 days at 20°C to reach 50% germination, but at 25°C germination was severely delayed, requiring 62.7 ± 2.2 days to reach 50% germination. The difference between families in near-optimum T 50 was such that the Myrtaceae species with the slowest T 50 near its optimum temperature (C. sanguineus, T 50 = 12.8 ± 0.4 days at 15°C) had a similar T 50 to the non-Acacia Fabaceae species with the fastest T 50 (K. coccinea, T 50 = 11.6 ± 0.5 days at 15°C).

Field seedling emergence and mortality
A comparison of the seedling counts and species richness between the autumn burn (19/04/2021) and the spring burn (16/10/2021) reveals a stark difference in the emergence of seedlings following fire across seasons (Figure 4). There was no emergence from any species following the spring burn (median ± SE; 0 ± 0 seedlings/m²), despite early rains occurring in early 2022, by which time soil moisture had increased substantially following recent rainfall events (Figure 6). Median seedling count in the autumn site initially increased over winter to a peak of 48.5 seedlings/m² 144 days after the fire, before decreasing at each following site visit to a low of 18.5 seedlings/m² on the final visit 372 days after the fire. In contrast, mean seedling species richness in the autumn site peaked later, at 5.8 species/m² 175 days after the fire, but decreased over summer to a minimum of 4.2 species/m² 372 days after the fire.

Mean summer mortality for seedlings at the autumn site was 40.2% but varied by species (Figure 5). Opercularia echinocephala had the highest summer mortality, falling from a mean of 7. to just 2 seedlings per plot after summer, a mortality rate of 72.1%. The species with the lowest summer mortality was E. marginata, which lost just 10.3% of seedlings over summer, falling from 2.6 seedlings per plot at peak emergence to 2.3 seedlings per plot after summer. C. calophylla (61.1% mortality), the other dominant tree species in the NJF, had much higher summer mortality than E. marginata, but after summer had similar mean seedlings per plot (2.4).
Darwinia citriodora and Stenanthemum notiale showed higher seedling counts after summer, potentially due to delays driven by the after-ripening periods and seasonal effects found in the complex and variable dormancy responses of Darwinia species (Auld and Ooi, 2009). However, the germination requirements (particularly in the field) are poorly defined for both species, so the exact mechanism driving the delayed emergence is unknown. Summer mortality was not assessed at the spring site as there was no emergence.

Soil temperature and moisture
Comparison of soil temperature and rainfall between the sites reveals similar soil temperatures between the autumn and spring burn sites, with the autumn site logging slightly higher temperatures over summer (Figure 6). Variability in temperature between plots also increased in the autumn site over summer, before reducing again as temperatures dropped and rainfall events occurred in April 2022. Volumetric soil moisture was higher in the spring site than in the autumn site, particularly after the first major rainfall event of the year in late March 2022. Soil temperature was suitable for germination between June and early December, before trending above T max for much of the summer period. In contrast, soil moisture at both sites reduced dramatically through October and remained low until April 2022 (Figure 6).

Discussion
Fire seasonality has a distinct and immediate impact on post-fire seedling emergence in the NJF. Seedling emergence after the autumn burn was markedly higher than in the spring burn plots, which experienced a complete delay in recruitment, with zero seedlings recorded within the first 200 days following fire. However, it is important to note that, despite the immediate delay, germination and emergence following fires in spring will most likely occur during the subsequent winter (Enright and Lamont, 1989), when conditions for germination become suitable. Nevertheless, the delay in emergence exposes seeds to an array of processes before germination conditions are suitable, which in turn may reduce recruitment in subsequent seasons (Céspedes et al.; Ellsworth and Kauffman, 2013; Miller et al., 2019, 2021; Tangney et al., 2020b, 2022). However, the extent to which seed mortality during fire, seed predation and seed decay may have reduced the seed bank is unknown, so further research will be necessary to determine how long ungerminated seeds persist in the seed bank.

Recent research has highlighted how changes in fire season can impact post-fire recruitment (Miller et al., 2019; Tangney et al., 2022). Of the eight mechanisms identified in Miller et al. (2019) through which altered fire seasonality impacts plant survival and reproduction, the present study provides strong evidence for part of mechanism seven, post-fire seedling establishment (Miller et al., 2019; Tangney et al., 2020b). The mechanism suggests that changes to fire season in strongly seasonal climates like southwestern Western Australia can reduce seedling survival by delaying germination from the early wet season to the late wet season, providing less time for growth before the onset of a dry summer. The absolute delay in seedling emergence following aseasonal fire in the present study (100%) exceeds the average of 75% in studies of post-fire seedling establishment in seasonal and weakly seasonal climates reviewed by Miller et al.
(2019).The results here are in contrast to patterns observed in less seasonal ecosystems including temperate oceanic climates (Tangney et al., 2022), which demonstrated selective delay in some species, primarily driven by seed dormancy type (Ooi, 2010).The outcome presented in this current study is consistent with results from other equally strong seasonal ecosystems (Grant, 2003;Miller et al., 2021).Soil temperatures increase rapidly through November in conjunction with seasonal drought conditions, which occur as early as mid-October (Figure 6) to reduce available soil moisture and clearly delineate a defined germination window from April through to the end of October.The germination window in the NJF is substantially shortened when fires occur in spring rather than summer or autumn, potentially explaining the complete germination delay identified after the spring burn in this study. This study raises the possibility that shifts in the predominant burn season may decrease post-fire recruitment and therefore forest resilience to climate change.Hazard reduction burning is often applied in spring for safety reasons, as spring fires are less intense and less likely to escape than fires in the historic fire season (summer and early autumn; Abbott, 2002;Burrows and McCaw, 2013).Fire seasons have also shifted in response to the decrease in rainfall, change in seasonality of rainfall and increase in temperatures experienced across southwestern Western Australia since the mid-1970s (Bates et al., 2008;Clarke et al., 2013).Aseasonal fire in a Mediterranean climate can cause a variety of adverse effects on recruitment, including increased seed mortality during fire and reduced seedling growth time before the onset of summer (Tangney et al., 2019;Miller et al., 2021).The results of this study suggest that changes to fire season have immediate adverse effects on recruitment by initiating a delay and potentially shifting community composition over time, as species from different seed dormancy classes show variable responses to the shifting regime.For example, the volumetric soil moisture was 7% to 7.4% on the day of the spring burn following a 10.6 mm rainfall event 4 days earlier, likely high enough to hydrate non-PY seeds and cause excess mortality from temperatures experienced during a cool fire (Tangney et al., 2021).Such a rainfall event preceding fire would be very unlikely in a regime dominated by late summer/early autumn burns before the onset of anthropogenic climate change.Future climate change in the region is predicted to include increased temperature, decreased rainfall, increased rainfall variation (Andrys et al., 2017), increased fire intensity and frequency and shifts to the fire season (Bates et al., 2008;Clarke et al., 2013), all of which will have impacts on recruitment in the NJF. 
Two broad groups of species were identified from the thermal germination results: 1) those species whose seeds have their T opt relatively close to their T max (Figure 3A) and 2) those species which produce seeds with a substantially greater temperature difference between their T opt and T max (Figure 3B).Seeds with their T opt close to their T max are limiting their risk of germination at suboptimal conditions to ensure seedling survival is increased.Further, seeds with a narrow window between their optimum temperature for germination and their maximum temperature may be more exposed to recruitment failure events as soil temperatures increase in line with local ambient temperature increases as the impacts of climate change accelerate (Ooi, 2012;Ooi et al., 2022).These increases in soil temperature may selectively reduce the ability for these species with narrow germination niches to germinate and emerge following fire, as post-fire soil temperatures may exceed their thermal limits.This may be further exacerbated following high severity fires (Ooi et al., 2022) where large volumes of canopy biomass are lost (Dixon et al., 2022), resulting in higher soil temperatures as more solar radiation hits the soil surface (Fu and Rich, 2002). All species aside from B. aquafolium (T opt = 5.7 • C) had their T opt between 10 • C and 20 • C, which are typical soil temperatures during winter in the NJF (Figure 6).Although most species showed a reduction in germination beyond 20 • C, some species (E.marginata, C. calophylla and A. manglesii) maintained high total germination and germination rates at 25-30 • C, indicating emergence after spring burns was not purely thermally limited.Median soil temperatures 6 cm below the surface never exceeded 30 • C over summer, which is within the bounds of germination temperatures for some species, however emergence would be severely delayed in the upper parts of the range.However, some emergence would still be expected if emergence was primarily thermally limited, so the absence of germination following the spring burn is attributed to a combination of water stress and high soil temperatures.As temperature increases in combination with increased water stress, C. calophylla and E. marginata germination and emergence would likely be significantly impacted (McChesney et al., 1995;White, 2020). Seedling mortality following fire in autumn ranged from 10-72%, which is typical following seasonal fire (Abbott, 1984), with most seedling mortality occurring before summer.The dominant tree species in this ecosystem, namely E. marginata and C. calophylla, show divergent strategies for recruitment.E. marginata displayed low emergence but similarly low mortality, whereas C. calophylla invests more into recruitment (Abbott, 1984), which in turn increases emergence at the cost of higher seedling mortality.Experimental findings by Abbott (1984) also found higher emergence in C. calophylla, but lower summer mortality in C. calophylla compared to E. marginata.C. calophylla seedlings are thought to be more drought-tolerant than those of E. marginata (White, 2020), so the higher summer mortality for C. calophylla in this study may be a reflection of density-dependent competition in higher density plots where C. 
calophylla seedlings were present (Harvey et al., 2011). Soil temperatures in the months following the autumn burn and the spring burn displayed very similar patterns across each site, but soil moisture was consistently higher in the spring burn site, particularly following the first major post-summer rainfall event. The spring burn site was located lower in the landscape than the autumn burn site, so landscape position may explain the differences in soil moisture (Singh et al., 2021). Litter cover was reduced to a greater extent following the spring fire than following the autumn fire, and post-fire resprouter cover in the spring site was lower than in the autumn site. These physical differences between post-fire microsites may further explain the discrepancy in soil moisture, as vegetation and microbial development can be an important control on water infiltration into soils (Montaldo et al., 2008).

Figure 2: (A) Plot locations and burn extent for an autumn (seasonal) and spring (aseasonal) prescribed fire in Korung National Park, Western Australia. (B) Comparison of temperature and rainfall during the field trial with long-term temperature and rainfall at the Bickley weather station (Bureau of Meteorology, 2022).

Figure 3: Thermal performance curves modelling the effect of incubation temperature on total germination for 15 common Northern Jarrah Forest species. The solid line represents the mean modelled estimate, partially transparent lines are bootstrapped estimates (100 iterations), dashed lines are T opt and dotted lines are T max. (A) Thermal performance curves of species with narrow temperature differences between T opt and T max. (B) Thermal performance curves of species with a wider temperature divide between T opt and T max.

Figure 4: Seedling counts (A) and species richness (B) across two sites after a seasonal (autumn, 19/04/2021) and aseasonal (spring, 16/10/2021) burn in Korung National Park. Each site contained ten 1 m² plots. Species richness was not assessed in June after the autumn burn as seedlings could not be identified. The black horizontal lines indicate the median value, filled circles indicate mean values, the box represents the interquartile range (IQR), and the whiskers extend to a maximum of 1.5 * IQR; points beyond 1.5 * IQR are shown as black dots.

Table 1: Northern Jarrah Forest species selected for the thermal germination trials
Polarization and coherence for vectorial electromagnetic waves and the ray picture of light propagation We develop a complete geometrical picture of paraxial light propagation including coherence phenomena. This approach applies both for scalar and vectorial waves via the introduction of a suitable Wigner function and can be formulated in terms of an inverted Huygens principle. Coherence is included by allowing the geometrical rays to transport generalized Stokes parameters. The degree of coherence for scalar and vectorial light can be expressed as simple functions of the corresponding Wigner function INTRODUCTION In this work we elaborate a complete geometrical formulation of paraxial optics for vectorial electromagnetic waves.By geometrical we mean that this propagation is fully described in terms of the light rays of standard geometrical optics.By complete we mean that includes without approximation all coherence phenomena.This is equivalent to define a Wigner function for vectorial waves, which in turn is equivalent to the prescription of a set of Stokes parameters to geometrical rays [1]- [4].This task is interesting since it merges in a single formalism geometrical and wave optics.This may provide physical insight and simple formulas for problems involving partially coherent and partially polarized light. The definition of a suitable Wigner function and ray Stokes parameters for vectorial waves is accomplished in Section 3 in parallel to the scalar case displayed in Section 2. Both include a formulation of propagation that can be expressed by an inverted version of the Huygens principle involving rays instead of waves.We will apply this formulation to the degree of coherence for vectorial waves.This is a fundamental nontrivial problem only recently addressed in depth.Contrary to the scalar case, for vectorial waves there is no unique degree of coherence, and currently several definitions coexist [5]- [23].After recalling the main proposals in Section 4 we examine their relationship with the geometrical picture in Section 5. COHERENCE AND WIGNER FUNCTION FOR SCALAR WAVES We first recall the geometrical Wigner formulation of paraxial optics for scalar waves.Although standard geometrical optics excludes coherent phenomena, if we replace ray intensity by Wigner function we include once for all coherence effects.In particular we can express the degree of coherence as a functional of the Wigner function.The price to be paid is that the Wigner function can take negative values so it cannot represent always light intensity. Definition and properties We will always consider the spatial-frequency domain so that the Wigner function is defined in terms of the cross-spectral density function as [24]-[28] where the angle brackets represent ensemble average, r and r are Cartesian coordinates in a plane orthogonal to the main propagation direction along axis z, p are the angular variables representing the local direction of propagation, and k is the wavenumber in vacuum. The connection between Wigner function and geometrical optics stems from the fact that r and p represent the parameters of a light ray, so that W assigns a number to each ray.The main properties of this formalism are: (a) The Wigner function provides complete information about second-order phenomena, including diffraction and interference, since its definition can be inverted to express the crossspectral density function in terms of the Wigner function where R = (r 1 + r 2 )/2 is the midpoint between r 1,2 . 
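A standard form of this definition and of the inversion used in property (a), consistent with the description above, is given below; sign and normalization conventions vary across references, so this should be read as a hedged reconstruction rather than a verbatim restatement of the original expressions:

```latex
% Cross-spectral density: \Gamma(\mathbf{r}_1,\mathbf{r}_2) = \langle U^{*}(\mathbf{r}_1)\,U(\mathbf{r}_2)\rangle
\[
W(\mathbf{r},\mathbf{p}) = \left(\frac{k}{2\pi}\right)^{2} \int d^{2}s\,
\Gamma\!\left(\mathbf{r}+\tfrac{\mathbf{s}}{2},\,\mathbf{r}-\tfrac{\mathbf{s}}{2}\right)
e^{-\,i k\,\mathbf{p}\cdot\mathbf{s}},
\qquad
\Gamma(\mathbf{r}_1,\mathbf{r}_2) = \int d^{2}p\; W(\mathbf{R},\mathbf{p})\,
e^{\,i k\,\mathbf{p}\cdot(\mathbf{r}_1-\mathbf{r}_2)},
\quad \mathbf{R}=\frac{\mathbf{r}_1+\mathbf{r}_2}{2}.
\]
```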
(b) In particular, the light intensity (irradiance) at a given point can be obtained by integrating the angular variables This is to say that the intensity at a given point r is the sum of the values of the Wigner function for all the rays passing through r with different p.We will refer to this sum as an incoherent superposition since the ray contributions W(r, p) are added independently without cross terms between rays. (c) The Wigner function cannot represent always light intensity since it can take negative as well as positive values [24]- [29].We may say that there are bright rays with positive W and dark rays with negative W [30]- [32].Dark rays are crucial to the completeness of the theory since they contain the coherence in two-beam interferometry as we shall see below. (d) Finally a crucial property for the geometrical interpretation of the Wigner function is that it is constant along paraxial rays where (r, p) and (r , p ) are the ray parameters at the input (z = 0) and output (z > 0) planes of a paraxial optical system. Inverted Huygens principle These properties can be summarized in a principle analogous to the Huygens principle but with inverted terms replacing waves by rays and coherent by incoherent superpositions.We can enunciate this principle in three steps [33]: (i) Each point acts as a secondary source of a continuous distribution of rays with parameters r, p, W(r, p).We stress that this is a continuous distribution of rays instead of the more familiar single ray at each point normal to a wavefront. (ii) The evolution of optical properties is given by the incoherent superposition of the optical properties of rays, as illustrated by the example of light intensity in point (b) above.We stress that this incoherence is a key feature of the theory independent of the actual state of coherence of the light.Coherence is expressed in a different way as we shall see later on. (iii) The effect of spatial-local inhomogeneous filters (i.e., transparencies) altering phase and amplitude is described in the wave picture by the product of the amplitude of the input wave with a transmission coefficient t(r), i. e., U(r) → t(r)U(r).In the geometrical picture this effect is described by the convolution of angular variables p of the input Wigner function W U with the Wigner function W t of the transmission coefficient where Coherence The degree of coherence for scalar waves can be expressed in terms of the Wigner function from different perspectives. Coherence as phase-difference average From the inversion formula in Eq. ( 2) expressing the crossspectral density function in terms of the Wigner function, we have that the degree of coherence at two points r 1,2 is the average of the phase difference duced by an ensemble of plane waves with wavevectors proportional to p with where the weight of each plane wave p is W(R, p), with R = (r 1 + r 2 )/2, and I 1,2 are the light intensities at points r 1,2 [33].This agrees well with common intuition since rays are usually understood as local plane waves, and partial coherence is usually understood as the result of phase fluctuations. 
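In the same (assumed) conventions as the sketch above, the angular integration of point (b) and the phase-difference average just described take the form:

```latex
\[
I(\mathbf{r}) = \int d^{2}p\; W(\mathbf{r},\mathbf{p}),
\qquad
\mu(\mathbf{r}_1,\mathbf{r}_2)
= \frac{\left|\int d^{2}p\; W(\mathbf{R},\mathbf{p})\,
e^{\,i k\,\mathbf{p}\cdot(\mathbf{r}_1-\mathbf{r}_2)}\right|}
{\sqrt{I(\mathbf{r}_1)\,I(\mathbf{r}_2)}},
\qquad \mathbf{R}=\frac{\mathbf{r}_1+\mathbf{r}_2}{2},
\]
```

so that the degree of coherence is the modulus of the W-weighted average of the plane-wave phase factor exp[i k p · (r_1 − r_2)], exactly as described in the text.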
Overall degree of coherence A global or overall assessment of the total coherence conveyed by a field state can be provided by the formula [14], [34]-[37] This can be expressed in terms of the Wigner function as Coherence in a Young interferometer Next we examine the ray picture of coherence in action, for example in a Young interferometer with two apertures of vanishing widths at points r 1,2 in the plane z = 0 (see Figure 1).It can be seen that the Wigner function after the apertures implies the existence of just three secondary sources of rays at z = 0 [23, 33, 38].Two sources are located at the apertures with W(r 1,2 , p) = W(r 1,2 ) ∝ I 1,2 .The Wigner function at these points does not depend on p so that the emission is isotropic and all rays at each aperture carry the same weight W(r 1,2 ).Moreover, since in this case the Wigner function is positive W(r 1,2 ) ∝ I 1,2 these sources emit bright rays exclusively.The third source is located at the midpoint R between the apertures with W(R, p) proportional to the degree of coherence µ between the fields at the apertures where δ is a constant phase.Therefore W(R, p) takes positive as well as negative values when p varies, so that this source emits bright and dark rays with different weights depending on p. Since there are three sources at z = 0 each observation point r in planes z > 0 is reached exclusively by three rays, one from each source, as illustrated in Figure 1.Their incoherent superposition gives the intensity distribution The contribution W(R, p) from the midpoint is actually the interference term, since it is the only one that depends on the observation point through the propagation direction specified by followed by the ray from the source at R to the observation point at r. This implies a close relation between the degree of coherence µ and the Wigner function in the midpoint W(R, p).More specifically [33,38]: (1) From Eq. ( 11) µ is proportional to the maximum modulus of the Wigner function at the midpoint when p is varied (2) The degree of coherence µ is proportional to the negativity of the Wigner function measured as the distance of W to its modulus (3) The degree of coherence is proportional to the amount of Wigner function at the midpoint measured as where the integration extends just to the region R between apertures. From this interferometric point of view, coherence is incompatible with standard geometrical optics.In other words, coherence in interferometry is the distance from the light state after the apertures to standard geometrical optics represented by the set of situations with positive semidefinite W. WIGNER FUNCTION FOR VECTORIAL WAVES In this section we provide a Wigner function that includes the polarization variables allowing us to generalize the results of the preceding section to vectorial waves. 
Definition and properties
The Wigner function we are going to use is the translation to optics of a similar Wigner function introduced in mechanics to describe a closely related problem, the Wigner function of a particle with spin one half [39]-[41]. This is equivalent to a transversal wave, since in both cases we have a field with two components. Such a Wigner function can be expressed in optics as [1] W(r, p, Ω) = S(r, p) · Ω, where Ω is a four-dimensional real vector that represents the Poincaré sphere, and S(r, p) is a four-dimensional real vector with components such as S_0(r, p) = W_{x,x}(r, p) + W_{y,y}(r, p), where W_{ℓ,m} are the elements of the Wigner matrix. This Wigner function depends on the spherical coordinates Ω representing the variables specifying the polarization state. The four real quantities S(r, p) in Eq. (19) are ray properties because of their joint dependence on (r, p), so we may refer to them as ray Stokes parameters, in contrast to the standard point Stokes parameters s(r), which do not depend on p and express the light intensity and polarization state at point r without reference to propagation direction.

The properties of this Wigner function are fully equivalent to the scalar case [1]-[3]:

(a) The Wigner function provides complete information about second-order phenomena, since its definition can be inverted to express the cross-spectral density tensor in terms of the ray Stokes parameters, where σ^(j) are the Pauli matrices, σ^(0) being the identity, and R = (r_1 + r_2)/2.

(b) In particular, at each spatial point r the intensity and the polarization state, represented by the point Stokes parameters s_0(r) and s_{1,2,3}(r), respectively, are obtained from the ray Stokes parameters by integrating the angular variables. This is equivalent to saying that s(r) are given by the incoherent superposition of the ray Stokes parameters S(r, p) associated with all rays passing through the same point r with different propagation directions p.

(c) The Wigner matrix may have negative eigenvalues, so that the ray Stokes parameters may violate the ray analog of the relation s_0 ≥ (s_1² + s_2² + s_3²)^{1/2} always satisfied by the point Stokes parameters. In accordance with the scalar case, the rays satisfying S_0 ≥ (S_1² + S_2² + S_3²)^{1/2} ≥ 0 may be called bright rays, while the other ones, i.e., S_0 < (S_1² + S_2² + S_3²)^{1/2} or S_0 < 0, may be called dark rays.

(d) Finally, a crucial property for the geometrical interpretation of the Wigner function is being constant along paraxial rays in free space. The effect of polarization-changing devices is described in some detail below.
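Before turning to those devices, the ray Stokes parameters sketched above can be written out in a common Pauli-matrix convention; the sign of S_3 and the normalization are assumptions of this reconstruction and may differ from the original:

```latex
\[
W_{\ell m}(\mathbf{r},\mathbf{p}) = \left(\frac{k}{2\pi}\right)^{2}\int d^{2}s\,
\big\langle E_{\ell}^{*}\!\left(\mathbf{r}+\tfrac{\mathbf{s}}{2}\right)
E_{m}\!\left(\mathbf{r}-\tfrac{\mathbf{s}}{2}\right)\big\rangle\,
e^{-\,i k\,\mathbf{p}\cdot\mathbf{s}}, \qquad \ell,m\in\{x,y\},
\]
\[
S_{0}=W_{xx}+W_{yy},\quad
S_{1}=W_{xx}-W_{yy},\quad
S_{2}=W_{xy}+W_{yx},\quad
S_{3}=i\,(W_{yx}-W_{xy}),
\qquad
s_{j}(\mathbf{r})=\int d^{2}p\; S_{j}(\mathbf{r},\mathbf{p}).
\]
```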
In the geometrical picture these devices are described by expressing the output ray Stokes parameters S as angular convolution of the input ray Stokes parameters S with the action of the Wigner function of the Mueller matrix [3] where where the matrix t(r) has the matrix elements t j, (r) in Eq. ( 25).For homogeneous devices described by a Mueller matrix M we get the natural transformation DEGREES OF COHERENCE FOR VECTORIAL WAVES The proper definition of the degree of coherence for vectorial waves is a nontrivial problem.The increase of the number of degrees of freedom implies that an scalar quantity (the crossspectral density function) is replaced by a matrix (the crossspectral density matrix) so naturally there is no straightforward translation of µ from the scalar to the vectorial case.From the same reasons, several definitions can coexist since they will focus on different features of coherence with application to different situations or satisfy different symmetries [20].Here we can recall the main approaches to the problem. Intensity fringes in a Young interferometer A first approach to the degree of coherence at two points r 1,2 is derived directly in terms of the visibility of interference fringes in a Young interferometer with apertures at r 1,2 where only the intensity is measured in the observation plane, leading to [5]-[10] where and The main drawback of this definition is that it depends on the polarization state at r 1,2 .For example, for orthogonal polarizations E(r 1 ) • E * (r 2 ) = 0 we get µ 1 = 0, even if there is A. Luis perfect correlation between the fields at the apertures.More specifically, we say that µ 1 is not invariant under U(2) × U(2) transformations, i. e., under the action of unitary 2 × 2 matrices applied to the fields at the apertures.This corresponds in practice to place transparent phase plates at the apertures. Two similar strategies have been proposed to solve this difficulty.On the one hand, we can consider the maximum of µ 1 when arbitrary phase plates are placed in the apertures leading to [9,43] where λ ± ≥ 0 are the singular values of Γ 1,2 , i. e. On the other hand, we can consider the maximum of µ 1 when arbitrary phase plates followed by an arbitrarily oriented polarizer are placed in the apertures, leading to [18]-[20] where msv is the maximum singular value of the corresponding matrix. Stokes fringes in a Young interferometer Another approach that also focus on the Young interferometer is based on the visibility of the four systems of fringes obtained by measuring the four point Stokes parameters at the observation plane, leading to [11]-[17] This definition is invariant under U(2) × U(2) transformations. Overall degree of coherence An overall degree of coherence for vectorial waves µ G which parallels the scalar case Eq. ( 9) has been introduced as a weighted average of the local degree of coherence µ 2 [17] µ Fringes in arbritrary interferometers Finally, some other approaches consider all components on an equal footing so that de degree of coherence is a function of the whole Hermitian 4 × 4 correlation matrix Γ [44, 45] instead of defining it in terms of just the 2 × 2 complex matrix Γ 1,2 .This definition suits to the idea that arbitrary interferometers mix the four field components without taking into account to which wave they belong, so that the fringe visibility depends on the sixteen matrix elements E j (r m )E * (r n ) for j, = x, y and m, n = 1, 2. 
In this regard we can define the degree of coherence as the distance between Γ and the 4 × 4 identity matrix I 4 representing fully incoherent and fully unpolarized light in the form [22,23] µ This definition is invariant under the action of 4 × 4 unitary matrices, that includes the U(2) × U(2) invariance as a particular case. This definition is equivalent to the degree of polarization of the four-dimensional wave E = E x (r 1 ), E y (r 1 ), E x (r 2 ), E y (r 2 ) [14,46,47].This is interesting since in the scalar case the maximum degree of coherence that can be obtained by combining two waves E 1,2 is the degree of polarization of the two-dimensional wave E = (E 1 , E 2 ). In agreement with the idea that polarization is a manifestation of coherence we have that µ 3 combines the degree of polarization of the individual waves P 1,2 and µ 2 in the form [22] µ 2 3 = where I 1,2 = I(r 1,2 ) are the corresponding intensities. Following the same spirit µ 3 is closely related to the overall degree of coherence µ G in Eq. (38) after two apertures located at points r 1,2 , since for the field after the apertures we have [22] This is because the "diagonal" factors µ 2 2 (r, r) in the integration Eq. ( 38) contain the degree of polarization at r. Concerning interferometric visibility, µ 3 provides upper bounds to the visibility V of arbitrary two-beam interferometers in the form [22] 3 2 where I 1 + I 2 ≥ I is the intensity of the two interfering beams extracted from the original fields, and [48] where λ max,min are the maximum and minimum eigenvalues of Γ. A similar approach has been previously considered in terms of the normalized 4 × 4 correlation matrix L with matrix elements j (r m ) * (r n ) for j, = x, y and m, n = 1, 2, where [21] j (r ) = E j (r ) FIG. 1 FIG.1In a Young interferometer each observation point r at plane z > 0 is reached by just three rays arising from three secondary sources at z = 0 located at the apertures r 1,2 , and at the midpoint R, representing p the propagation direction of the ray reaching r from R.
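For concreteness, commonly quoted forms of three of the definitions recalled in this section are collected below; these are standard forms from the cited literature and may differ in normalization or notation from the expressions of the original text. Here Γ_mn denotes the 2 × 2 block built from the field components at r_m and r_n, and Γ is the full 4 × 4 correlation matrix:

```latex
\[
\mu_{1} = \frac{\left|\operatorname{tr}\Gamma_{12}\right|}
{\sqrt{\operatorname{tr}\Gamma_{11}\,\operatorname{tr}\Gamma_{22}}},
\qquad
\mu_{2}^{2} = \frac{\operatorname{tr}\!\left(\Gamma_{12}\Gamma_{12}^{\dagger}\right)}
{\operatorname{tr}\Gamma_{11}\,\operatorname{tr}\Gamma_{22}},
\qquad
\mu_{3}^{2} = \frac{4\,\operatorname{tr}\!\left(\Gamma^{2}\right)/\left(\operatorname{tr}\Gamma\right)^{2}-1}{3}.
\]
```

The first expression is the intensity-fringe (visibility-based) definition, the second the Stokes-fringe (electromagnetic) degree of coherence, and the third the global definition based on the distance of Γ from the identity, equivalent to the degree of polarization of the four-component field E.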
Oxymatrine ameliorates myocardial injury by inhibiting oxidative stress and apoptosis via the Nrf2/HO-1 and JAK/STAT pathways in type 2 diabetic rats The necessity of increasing the efficiency of organ preservation has encouraged researchers to explore the mechanisms underlying diabetes-related myocardial injuries. This study intended to evaluate the protective effects of oxymatrine (OMT) in myocardial injury caused by type 2 diabetes mellitus. A model of diabetic rats was established to simulate type 2 diabetes mellitus using an intraperitoneal injection of a single dose of 65 mg/kg streptozotocin with a high-fat and high-cholesterol diet, and diabetic rats were subsequently treated with OMT (60, 120 mg/kg) by gavage for 8 weeks. Thereafter, diabetic rats demonstrated notable decreases in left ventricular systolic pressure (LVSP), ±dp/dtmax, and in the activities of glutathione peroxidase, superoxide dismutase, and catalase. Moreover, we found notable increases in left ventricular end-diastolic pressure, fasting blood glucose, and malondialdehyde, as well as changes in cell apoptosis and decreased expression levels of Nrf2, HO-1, tyrosine protein kinase JAK (JAK), and signal transducer and transcription activator (STAT). Treatment with OMT alleviated all of the measured parameters. Collectively, these findings suggest that activation of the Nrf2/HO-1 and inhibition of the JAK/STAT signaling are involved in mediating the cardioprotective effects of OMT and also highlight the benefits of OMT in ameliorating myocardial injury in diabetic rats. Supplementary Information The online version contains supplementary material available at 10.1186/s12906-022-03818-4. Introduction Diabetic cardiomyopathy is a unique myocardial disease in patients with diabetes and is characterized by persistent hyperglycemia and cardiac dysfunction [1,2]. During continuous hyperglycemia, the excessive production or insufficient removal of mitochondrial reactive oxygen species (ROS) in the body leads to the occurrence of oxidative stress, which is a key influencing factor for diabetic microangiopathy, and is also a pivotal cause of diabetic cardiomyopathy [3,4]. Oxidative stress under hyperglycemic conditions contributes to reduced antioxidant capacity and also accelerates myocardial injury and initiates mitochondrial oxidative damage in diabetes mellitus, which deteriorates cell apoptosis and necrosis [5,6]. Nuclear factor erythroid 2-related factor 2 (Nrf2), an important activator of antioxidant responsive element (ARE), regulates downstream target genes such as HO-1 [7,8]. Nrf2, a key activator of ARE, which activates Nrf2 to dissociate from Kelch-like ECH-related protein 1 and enter the nucleus, forms a heterodimer with the macrophage activator protein in the nucleus, binds to the ARE sequence, and is subsequently regulated by ARE, which is a specific DNA-promoter binding sequence and an external regulatory region for the expression of phase II detoxification enzymes and cytoprotective protein genes. Previous studies have verified that the Nrf2/ HO-1 pathway plays an important role in oxidative stress, which is also involved in myocardial injury [9,10]. 
The tyrosine protein kinase JAK (JAK)/signal transducer and transcription activator (STAT) signaling pathway is an intracellular signaling pathway involved in the modulation of stress-response gene expression, by mediating signals from the cell surface to the nucleus, which is regarded as a pivotal player in the response to multiple cardiac injuries such as myocardial ischemia/reperfusion (I/R) injury. Inhibition of JAK/STAT signaling decreases the synthesis of TGF-β 1, the release of inflammatory mediators, and autophagy. According to a previous study, oxymatrine (OMT) suppresses JAK/STAT inflammatory signaling in gastric cancer through inhibition of the interleukin (IL)-21R-mediated JAK2/STAT3 pathway [11]. Moreover, OMT inhibits tumor growth and deactivates STAT5 signaling in a xenograft model [12]. OMT, the main component of Sophora flavescens Aiton, possesses extensive pharmacological effects, including antioxidant [13], antifibrotic [14], and antiinflammatory properties [15]. Moreover, it has been widely accepted as a traditional remedy in myocardial ischemic injury, as well as in renal, liver, intestinal, and brain I/R injury in animal models [16]. OMT also exerts a significant effect on aortic endothelial dysfunction via oxidative stress in diabetic rats [17]. However, OMT has not yet been reported to offer myocardial protection in diabetes-related myocardial injury in rats, which is associated with the Nrf2-mediated antioxidant response. Therefore, using a model of myocardial injury in type 2 diabetic mellitus (T2DM) rats, the aim of this study was to investigate the potential effect of OMT on myocardial injury and elucidate the role of the Nrf2/HO-1 and JAK2/ STAT3 signaling pathways in diabetic rats. Animals Male SD rats weighing 160-200 g were purchased by the Experimental Animal Center of Changsha Social Work College. The rats were given free access to food and water. All experiments were carried out in accordance with the guidelines for the Care and Use of Laboratory Animals (NIH Publication No. 85-23, revised 1996 and Ethics Committees in Science: European Perspectives) and were approved by Changsha Social Work College Medicine Animal Care and Use Committee (XIANG 202109). All possible efforts were made to alleviate animal suffering. Diabetic rat model development After 1 week of adaptive feeding, the animals were randomly assigned to one of the following groups: the control group (n = 10), T2DM group (n = 10), OMT (60 mg/ kg) group (n = 10), or OMT (120 mg/kg) group (n = 10) [17]. The control rats were fed a regular diet, while the experimental group rats were fed a high-fat diet (consisting of 70% standard laboratory chow, 15% carbohydrate, 10% lard, and 5% yolk powder) as previously described [18,19]. After 4 weeks of high-fat diet, the rats in the experimental group were injected with streptozotocin (STZ; 65 mg/kg, dissolved in 0.1 mol/L sodium citrate buffer; pH 4.4) intraperitoneally. The control group was intraperitoneally injected with a corresponding volume of citrate buffer (0.1 mol/L). Fasting blood glucose (FBG), oral glucose tolerance test (OGTT), and intraperitoneal glucose tolerance test (IPGTT) were conducted using venous blood samples to verify the establishment of a T2DM model. Rats in the T2DM groups were further randomly assigned into the OMT group (60 mg/kg) and the OMT group (120 mg/kg) (dissolved in saline) combined with a high-fat diet. Rats in the control and T2DM groups intragastrically received the same volume of saline once a day for 8 weeks. 
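A small, self-contained sketch of the group assignment and dosing arithmetic described above is given here; the animal identifiers, random seed and STZ stock concentration are assumptions made only for this example, while the 65 mg/kg dose and the four groups of n = 10 follow the text:

```python
import random

# Hypothetical animal IDs for the 40 rats; group sizes follow the design described above (n = 10 each)
animal_ids = list(range(1, 41))
random.seed(42)           # fixed seed only so the example is reproducible
random.shuffle(animal_ids)

groups = {
    "control":       animal_ids[0:10],
    "T2DM":          animal_ids[10:20],
    "OMT_60_mg_kg":  animal_ids[20:30],
    "OMT_120_mg_kg": animal_ids[30:40],
}

def stz_dose_mg(body_weight_g: float, dose_mg_per_kg: float = 65.0) -> float:
    """Single intraperitoneal STZ dose, using the 65 mg/kg dose stated above."""
    return dose_mg_per_kg * body_weight_g / 1000.0

def injection_volume_ml(body_weight_g: float, stock_mg_per_ml: float = 20.0) -> float:
    """Injection volume for an assumed 20 mg/mL STZ stock in citrate buffer (not from the study)."""
    return stz_dose_mg(body_weight_g) / stock_mg_per_ml

print(groups["control"])
print(f"{stz_dose_mg(200):.1f} mg STZ, {injection_volume_ml(200):.2f} mL at an assumed 20 mg/mL stock")
```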
Measurement of left ventricular function index The rats were anesthetized with an intraperitoneal injection of sodium pentobarbital (concentration, 3%; 60 mg/ kg) [13,20], and fixed on the operating table in the supine position, with an incision in the middle of the neck and endotracheal intubation. The right common carotid artery of the rat was separated and inserted into the left ventricle. The arterial cannula was connected to the pressure transducer, and also connected to the BL-420 N biological signal acquisition and analysis system (Chengdu Techman Instrument) in order to record the LVSP, the left ventricle, the ±dp/dt max and LVEDP as previously described [21,22]. Subsequently, venous blood was collected for biochemical analysis. All the rats were sacrificed by exsanguination following anesthesia, the tissues were collected for the following experiments. Histological examination The hearts were collected and embedded in paraffin. The heart was sliced into 5-μm thick section, stained with HE and observed under a light microscope (Olympus Corporation). Histological examination was carried out in a blinded trial by an experienced pathologist. The scoring standard was recorded as follows: 0 indicated no damage, 1 indicated less than 25% damage, 2 indicated 25-50% damage, 3 indicated 50-75% damage, and 4 indicated more than 75% damage [18,23]. Evaluation of FBG, OGTT, and IPGTT The FBG, OGTT, and IPGTT plasma concentrations were determined spectrophotometrically using a Spec-traMax M5 instrument (Molecular Devices, LLC) and commercially available kits, following the manufacturer's instructions. Determination of the levels of MDA, GSH-Px, SOD, and CAT The supernatant from tissues was collected, and the activities of SOD, GSH-Px, MDA, and CAT were determined using the colorimetric kits [24]. Measurement of reactive oxygen species The procedures of ROS measurement were according to manufacturer's instructions. The tissue slices were incubated with DCFH-DA (cat# S0033, Beyotime, China) for 10 min at 37 °C. Following three washes with phosphate buffer solution for 1 min, the tissue slices were immediately photographed under an inverted fluorescence microscope. The mean fluorescence intensity was analyzed using the ImageJ software. Myocardial ultrastructural observation Ultrastructural examination was performed as previously described [11]. Fresh rat myocardium was obtained and cut into 1-mm 3 pieces, immersed in 2.5% glutaraldehyde at 4 °C and subsequently fixed. Sections were stained using uranyl acetate and lead bismuth citrate, and sections were detected using a transmission electron microscopy (Carl Zeiss AG). TUNEL staining The levels of apoptosis were assessed using a commercial kit (cat. S7110, Sigma Aldrich). Briefly, tissues were excised and fixed in 4% paraformaldehyde in PBS at room temperature for 24 h. The fixed tissues were embedded in paraffin and stained. The numbers of apoptotic cells and total myocardial cells were counted in three randomly selected fields (magnification, × 200) under an immunofluorescence microscope. The apoptosis rate was defined as the mean percentage of apoptotic cells. Reverse transcription-quantitative (RT-q)PCR Total RNA was extracted using a RNeasy Mini kit and purified using 75% ethanol. The purified total RNA (200 ng/sample) was reverse transcribed into cDNA using a transcription kit. qPCR reactions were performed in triplicate using a SYBR ® Green Master Mix and run on a LightCycler 480 system. 
The following thermocycling conditions were used: pre-denaturation at 95 °C for 5 min; followed by 30 cycles of denaturation at 95 °C for 50 sec, annealing at 56.1 °C (Nrf2 and HO-1) or 59.4 °C (β-actin) for 50 sec, and extension at 72 °C for 50 sec; and a final extension at 72 °C for 10 min. The following primers were used: Nrf2 forward, 5′-TTC CTC TGC TGC CAT TAG TCA GTC-3′ and reverse, 5′-GCT CTT CCA TTT CCG AGT CAC TG-3′ (amplified product length, 441 bp); HO-1 forward, 5′-ATC GTG CTC GCA TGA ACA CT-3′ and reverse, 5′-CCA ACA CTG CAT TTA CAT GGC-3′ (amplified product length, 441 bp); and GAPDH forward, 5′-TCC CAT CAC CAT CTT CCA-3′ and reverse, 5′-CAT CAC GCC ACA GTT TCC-3′ (amplified product length, 632 bp). The primers were synthesized by Guangzhou RiboBio. The relative gene expression was quantified using the 2^-ΔΔCq method [25].

Western blot
Protein was extracted from the myocardial tissue in RIPA buffer together with phosphatase and protease inhibitors (PMSF). The protein concentration was evaluated by the BCA method. Tissue lysates were mixed with 4× Laemmli sample buffer, boiled and separated by SDS-PAGE. Proteins were transferred to a PVDF membrane (Bio-Rad, Hercules, CA). After nonspecific blocking with skim milk or BSA for 1 h, the membranes were probed with primary antibodies, including pSTAT3 (Abcam, 1:5000), STAT3 (Bioss, 1:1000), JAK2 (Cell Signaling Technology, 1:1000) and pJAK2 (Cell Signaling Technology, 1:1000), in 5 ml blocking buffer overnight at 4 °C. Next, the membranes were washed 4 times with Tris-buffered saline supplemented with 0.1% Tween 20 (TBST) and incubated with an appropriate HRP-conjugated secondary antibody. Membranes were washed 4 times with TBST, incubated with an ECL solution (Millipore), and imaging was conducted using a Bio-Rad imaging system.

Statistical analysis
Statistical analysis was performed using GraphPad Prism 6.0 (GraphPad Software, Inc.), and data were expressed as the mean ± standard deviation. Statistical differences were assessed using Student's t test between two groups, or using one-way ANOVA followed by Tukey's post-hoc tests between more than two groups. The level of statistical significance was set at P < 0.05.
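Before moving to the results, a minimal sketch of the 2^-ΔΔCq calculation used for the relative expression values is given here; the triplicate Cq values are hypothetical and serve only to show the arithmetic:

```python
import numpy as np

def fold_change_ddcq(cq_target_sample, cq_ref_sample, cq_target_control, cq_ref_control):
    """Relative expression by the 2^-ddCq method:
    dCq  = mean Cq(target) - mean Cq(reference gene)
    ddCq = dCq(sample) - dCq(control);  fold change = 2 ** -ddCq."""
    d_cq_sample = np.mean(cq_target_sample) - np.mean(cq_ref_sample)
    d_cq_control = np.mean(cq_target_control) - np.mean(cq_ref_control)
    return 2.0 ** -(d_cq_sample - d_cq_control)

# Hypothetical triplicates: target gene vs. reference gene in a treated rat and a diabetic control
fc = fold_change_ddcq([24.1, 24.3, 24.0], [18.2, 18.1, 18.3],
                      [26.0, 25.8, 26.1], [18.0, 18.2, 18.1])
print(f"fold change = {fc:.2f}")   # values > 1 indicate higher expression than the control condition
```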
Diabetic rats exhibit exacerbated dysglycemia and cardiac dysfunction
Compared with the control group, diabetic rats showed significantly increased levels of non-fasting and fasting serum glucagon, accompanied by markedly impaired non-fasting PG, fasting PG, IPGTT, and OGTT, indicating that the diabetic rat model was constructed successfully (Fig. 1A, P < 0.01). Compared with the control group, cardiac functions were markedly deteriorated, as confirmed by the decreases in LVSP (Fig. 1B, P < 0.05 or P < 0.01), +dp/dt max (Fig. 1D, P < 0.05 or P < 0.01) and -dp/dt max (Fig. 1E, P < 0.05 or P < 0.01), and the increases in LVEDP (Fig. 1C) and FBG (Fig. 1F, P < 0.05 or P < 0.01). These findings indicated a marked increase in blood glucose and cardiac dysfunction in diabetic rats, suggesting that diabetes exacerbated myocardial injury, which was attenuated following OMT treatment.

Fig. 1 Confirmation of diabetic rat model establishment. OMT attenuates ventricular hemodynamic parameters and FBG level in diabetic rats. A Non-fasting plasma glucose, fasting plasma glucose, IPGTT, and OGTT following 7 days of streptozotocin injection. B LVSP; C LVEDP; D +dp/dt max; E -dp/dt max; F FBG. Data are expressed as the mean ± standard deviation (n = 8). *P < 0.05 compared with the control group. #P < 0.05 compared with the diabetic group.

Effects of OMT on histopathology in myocardial tissues
The ultrastructure of myocardial tissues was assessed by electron microscopy. As demonstrated in Fig. 2A, in the control group the myocardial musculature was arranged regularly and neatly, the Z-line was clear, and abundant mitochondria were observed in the cytoplasm. In the diabetic rats, ultrastructural changes demonstrated notable heterogeneous subcellular and extracellular space abnormalities in the myocardial tissues, which were attenuated by OMT. As demonstrated in Fig. 2B, compared with the control group, diabetes caused focal confluent necrosis of the muscle fibers, with inflammatory cell infiltration, edema, and myophagocytosis, along with extravasation of red blood cells. OMT treatment significantly attenuated the aforementioned effects in myocardial tissues (Fig. 2C, P < 0.05, P < 0.01 or P < 0.001). Moreover, mild edema with a significant reduction in myocardial necrosis was observed.

Fig. 2 Effects of OMT on the histopathological changes and apoptosis in myocardial tissues in diabetic rats. A Ultrastructural changes determined using transmission electron microscopy; representative micrographs are magnified at 5000×. B Histopathological changes in myocardial tissues using H&E staining; representative micrographs are magnified at 200×. C Myocardial injury score was quantified. *P < 0.05 compared with the control group. #P < 0.05 compared with the diabetic group.

Effects of OMT on lipid peroxidation and inflammation in diabetic rats
Colorimetric kits were used to detect the expression of the inflammatory cytokines IL-6 and NF-κB in myocardial tissue. The results showed that the expression levels of IL-6 and NF-κB in diabetic rats were obviously increased compared with those in the control group (Fig. 3A and B, P < 0.01). Compared with diabetic rats, the expression levels of IL-6 and NF-κB in the OMT (60 mg/kg) (P < 0.05) and OMT (120 mg/kg) (P < 0.01) groups decreased in a dose-dependent manner. Next, the expression of the oxidative stress-related indicators MDA, ROS, GSH, and SOD was detected by the corresponding kits. We found that the level of MDA in the myocardium of the diabetic rats was significantly increased, while the activity levels of GSH-Px, SOD, and CAT were significantly decreased (Fig. 3C-F, P < 0.05 or P < 0.01). Compared with the diabetic rats, OMT treatment significantly reduced the levels of MDA in the myocardium and significantly increased the activity levels of GSH-Px, SOD, and CAT. These experimental results suggest that OMT reduces inflammation and lipid peroxidation.

Effects of OMT on oxidative stress in myocardial tissues
Oxidative stress is one of the critical causes of myocardial injury in diabetes [26]. Increased ROS and oxidative stress have been regarded as one of the important mechanisms contributing to diabetic cardiomyopathy. As shown in Fig. 4A and B, we found markedly increased ROS production in diabetic rats, suggesting that diabetes exacerbated myocardial oxidative stress, which was attenuated following OMT treatment (P < 0.05 or P < 0.01).

Effects of OMT on apoptosis in myocardial tissues
To investigate the effects of OMT on the levels of apoptosis in myocardial tissues, a TUNEL assay was performed.
5A and B, compared with the control group, a significant increase in the number of apoptotic cells was observed in diabetic rats (P < 0.05 or P < 0.01). However, OMT treatment significantly reduced the number of apoptotic cells in myocardial tissues. These results indicate that OMT may ameliorate myocardial injury partly through the inhibition of apoptosis. Effects of OMT on the expression levels of Nrf2 and HO-1 in myocardial tissues To determine whether OMT mediates the protection against oxidative stress, which is associated with the Nrf2/HO-1 signaling pathway, the levels of Nrf2 and HO-1 were determined by RT-PCR and western blot. As demonstrated in Fig. 6, the Nrf2 and HO-1 mRNA levels in the diabetic rats were significantly downregulated (P < 0.05). OMT treatment significantly increased the mRNA expression of Nrf2 and HO-1. Moreover, western blot analysis demonstrated that Nrf2 and HO-1 protein levels were notably upregulated, and OMT treatment decreased the levels of these oxidant-related proteins. Additionally, OMT promoted Nrf2 translocation from the cytoplasm to the nucleus. Effects of OMT on the phosphorylation of JAK2 and STAT3 The JAK-STAT signaling pathway has been shown to play a role in a variety of biological processes, including cell activation, proliferation, differentiation, autophagy, and apoptosis [27], and the activation of JAK2-STAT3 is involved in the protection of myocardial tissue [28]. However, whether the protective effect of JAK2-STAT3 is associated with amelioration of oxidative stress, and the relationship of OMT with JAK2-STAT3 signaling, are unclear. It has previously been reported that STAT3 is a critical downstream element of JAK2 [29]. Therefore, the effect of OMT on the JAK2/STAT3 signaling pathway was assessed. As shown in Fig. 7, there was a significant increase in the levels of phosphorylated JAK2 and STAT3 in diabetic rats (P < 0.05 or P < 0.01). However, OMT treatment significantly downregulated the levels of JAK2 and STAT3 phosphorylation. These results suggest that OMT exerts ameliorative effects on cardiac injury via suppressing JAK2 and STAT3 phosphorylation. Discussion Diabetic cardiomyopathy is characterized by myocardial metabolic disorders and microvascular disease caused by persistent hyperglycemia, and it ultimately leads to myocardial structural damage and cardiac dysfunction [29]. The present study demonstrated that the heart function of diabetic rats was significantly decreased and the myocardial ultrastructural damage was significantly aggravated, which further confirmed that the diabetic myocardial injury model was successfully established. OMT is a main active ingredient extracted from Sophora flavescens Ait. A number of studies have demonstrated that OMT exerts numerous pharmacological effects, including anti-viral, anti-inflammatory, antioxidant, antifibrotic [13-17], and antiapoptotic properties [5,13], which has attracted a high level of attention from investigators [6,13-16]. In the present study, diabetic rats with myocardial injury received OMT intragastrically for 8 weeks. The results demonstrated that OMT treatment significantly decreased FBG and improved the indicators of heart function and myocardial ultrastructure. In addition, OMT treatment decreased the level of blood glucose. These results suggest that OMT exhibits protective effects on cardiac function and ultrastructural damage in a dose-dependent manner in diabetic rats.
Previous studies have demonstrated that oxidative stress and changes in the antioxidant defense systems deteriorate myocardial tissues in diabetic rats [26,30]. Further studies have also shown that a decreased antioxidant status is an important contributor to myocardial impairment in diabetic rats [31]. Therefore, deterioration of antioxidant systems may be an important mechanism underlying increased oxidative stress in diabetes. Moreover, the content of MDA and the activities of GSH-Px, SOD, and CAT, which are important antioxidant enzymes in the body, reflect the degree of oxidative injury. The results of the present study demonstrated that hyperglycemia notably increased the level of MDA in myocardial tissue and markedly decreased the activities of GSH-Px, SOD, and CAT, suggesting that hyperglycemia may cause severe oxidative stress injury in diabetic rats. OMT intervention reversed the aforementioned dysfunctions, suggesting that OMT may increase the activity of antioxidant enzymes, decrease the production of lipid peroxides, and reduce oxidative stress injury in diabetic rats. HO-1, the downstream target gene of Nrf2, is an important antioxidant protein [32]. HO-1, also known as heat shock protein 32, is a stress protein induced by multiple factors. In the presence of oxygen and NADPH, HO-1 degrades heme to produce biliverdin, free iron, and CO. Bilirubin, a derivative of biliverdin in the body, is a strong endogenous antioxidant that inhibits lipid peroxidation [33]. Therefore, HO-1 and its products exert antioxidant actions in diabetic dysfunctions [34]. Results of previous studies have confirmed that downregulation of the Nrf2/HO-1 antioxidant signaling pathway aggravated oxidative damage in cerebral ischemia. Moreover, a previous study revealed that OMT activated the Nrf2/HO-1 pathway and attenuated the oxidative stress injury caused by renal I/R [24]. These studies highlighted the involvement of the Nrf2/HO-1 pathway in diabetes-associated myocardial disturbances. The present study further demonstrated that oxidative stress accelerated the pathogenesis of myocardial dysfunction in diabetes. The expression levels of Nrf2 and HO-1 in the myocardial tissue were notably increased, and these expression levels decreased following treatment with OMT in a dose-dependent manner. This finding suggests that OMT may enhance the antioxidant capacities via upregulation of the Nrf2/HO-1 signaling pathway in diabetic rats. Inflammation has been reported as one of the early events that accelerates the progression of diabetic myocardial injury. Our findings showed that OMT significantly lowered proinflammatory cytokines, demonstrating significant anti-inflammatory activity. This result is consistent with the pronounced antioxidant effect of OMT, since oxidative stress is the main trigger for the release of proinflammatory cytokines [35,36]. In fact, oxidative stress and inflammation are inextricably linked, generating and amplifying each other [35]. Numerous studies have demonstrated that Nrf2 downregulates NF-κB, a transcription factor that affects multiple cytokines, including IL-6 [37]. Therefore, the decreased IL-6 level in the OMT-treated group may be related to the enhanced expression of Nrf2 protein and the subsequent decrease of the NF-κB level. An increasing amount of evidence indicates that multiple mechanisms are involved in cell apoptosis in myocardial injury, including activation of SIRT1-Nrf2 [38] and the LKB-1/AMPK/Akt pathway, and suppression of the GSK-3β and p38α/MAPK pathways [39].
Our understanding of apoptosis-related events in diabetes-associated myocardial injury has advanced substantially. The JAK/STAT pathway is involved in many important biological processes such as cell proliferation, differentiation, apoptosis and immune regulation [40], and several reports have proposed that JAK/STAT signaling is associated with cardiac dysfunction in diabetes [41,42]. In diabetes, IL-6 modulates the viability and apoptosis of pancreatic beta-cells via microRNA-22 in the JAK/STAT signaling pathway [43]. Previous studies have demonstrated that high glucose induces the activity of the JAK/STAT signaling pathway, leading to TGF-β1 activation and a subsequent increase in cardiac fibroblasts [44]. Our results suggest that diabetes caused notable increases in the phosphorylation of JAK2 and STAT3 and the levels of TGF-β1. Previous studies have demonstrated that IL-6 activates the JAK/STAT signaling pathway; thus, the pronounced increases in p-JAK2 and p-STAT3 may be attributed to the elevated level of proinflammatory cytokines in diabetic rats [45]. Accordingly, the overt anti-inflammatory activity of OMT was accompanied by an inhibitory effect on p-JAK2, p-STAT3, and TGF-β1 [46]. OMT was found to downregulate JAK2 and STAT3 phosphorylation in multiple pathophysiological states [47,48]. Thus, the JAK2/STAT3 pathway may be of vital importance in mediating the ameliorative effects of OMT against diabetic cardiomyopathy. However, this study has some limitations regarding the application of OMT for myocardial injury in diabetic rats. For example, in vitro investigations into the effects of OMT are required. In addition, to determine whether the JAK2/STAT3 pathway mediates the cardioprotective activity of OMT, the use of an inhibitor or short hairpin RNA is required, and further studies must be carried out. Collectively, this study provides evidence that the cardioprotective effects of OMT may be associated with activation of the Nrf2/HO-1 pathway, inhibition of the JAK2/STAT3 pathway, and an increase in the activity of antioxidant enzymes in diabetic rats.
v3-fos-license
2018-04-03T05:10:58.563Z
2016-05-17T00:00:00.000
23733996
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://nottingham-repository.worktribe.com/preview/4782535/4442726-Two-Photon%20Excitation%20of-AAM%20(2).pdf", "pdf_hash": "5c52d9555ddaa9521d586c98a9c40a4aeb10427b", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:276", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "e2f3c99a196e51595c62ea39ab9fea36f7ed8c0e", "year": 2016 }
pes2o/s2orc
Two-Photon Excitation of a Plasmonic Nanoswitch Monitored by Single Molecule Fluorescence Microscopy: Visible light excitation of the surface plasmon band of silver nanoplates can effectively localize and concentrate the incident electromagnetic field, enhancing the photochemical performance of organic molecules. Herein, the first single-molecule study of the plasmon-assisted isomerization of a photochrome-fluorophore dyad, designed to switch between a nonfluorescent and a fluorescent state in response to the photochromic transformation, is reported. The photochemistry of the switchable assembly, consisting of a photochromic benzooxazine chemically conjugated to a coumarin moiety, is examined in real time with Total Internal Reflection Fluorescence Microscopy in the presence of silver nanoplates excited with a 633 nm laser. The metallic nanostructures significantly enhance the visible light-induced performance of the photoconversion, which normally requires ultraviolet excitation. The resulting ring-open isomer is strongly fluorescent and can also be excited at 633 nm. These stochastic emission events are used to monitor photochromic activation and show quadratic dependence on incident power. The utilization of a single laser wavelength for both photochromic activation and excitation effectively mimics a pseudo-two-colour system. Acknowledgements: Hodgson acknowledges the award of an Ontario Graduate Scholarship. P. F. Aramendia is a staff member of CONICET (Carrera del Investigador Científico, Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina). Introduction The utilization of surface plasmon resonance (SPR) of noble metal nanoparticles (MNP) to manipulate light in the subwavelength regime with spatial and temporal control is of great interest due to potential applications in nanoscaled lithography, data storage and microscopy. [1] The SPR band is generated by the collective oscillation of free electrons upon interaction of the MNP with visible light. Conveniently, the shape and the spectral profile of the SPR band can be regulated by changing the size, morphology and surface coverage of the MNP. [2] When excited in the plasmon region, silver nanoparticles dramatically intensify the electromagnetic field and thus the photon flux experienced by molecules in the direct vicinity of the particles. [2b, 3] This consequently amplifies the inherent spectroscopic and photochemical behavior of molecules in close proximity to the metal surface, improving the efficiency and the performance of photochemical reactions in the visible and near-infrared wavelength regime. Interestingly, photochromic transformations have been successfully exploited for tailoring, probing and monitoring SPR effects with convenient spectroscopic measurements. [4] Photochromic compounds switch reversibly between two isomeric states with distinct absorption spectra under optical control. [5] While photochromic transformations normally involve single-photon photoexcitation of the organic chromophore, the nanoparticle-mediated plasmonic enhancement of the electromagnetic field allows for the stimulation of multi-photon photochromic processes even at moderate illumination intensities that would otherwise be inadequate for two-photon excitation. [6] In this context, nanoparticles can act as antennae, delivering excitation energy to molecules in close proximity to their surfaces. A number of examples have been reported, including a successful two-photon ring opening of diarylethenes.
[4c, 4d] Since however, these systems can only be monitored by absorption techniques, they are not amenable for single-molecule fluorescence microscopy studies. Photochromic transformations can instead be designed to activate fluorescence under optical control, owing to the chemical engineering of the conjugation between a photochromic unit and an organic fluorophore. [7] As a consequence of the pronounced structural and electronic modifications associated with a photochromic transformation, the light-induced interconversion of the photochromic unit can be exploited to switch on the emission of the complementary fluorophore. The activation of fluorescence upon photochromic conversion offers the opportunity to apply single-molecule techniques, where the photogenerated emission translates into a detectable signal for stochastic events at the single-molecule level. [8] For this study, we have selected the photochromic system shown in Figure 1. Upon ultraviolet illumination in acetonitrile, the 2H, 4H- [1,3] benzooxazine ring in 1a opens to generate the zwitterionic isomer 1b on a subnanosecond time scale, where 1b spontaneously reverts to the original isomer 1a. [9] [10] The structural transformation of the photochrome brings the coumarin moiety into conjugation with the cationic fragment of the light-generated isomer and bathochromically shifts its absorption band by ca. 160 nm. As a result, the selective excitation of the newly generated absorption band centered at 580 nm allows the detection of the fluorescence of this species at 685 nm, which is readily observable following the photochromic transformation. The generation of an emissive isomer allows for monitoring of the photochromic conversion in real time, using Total Internal Reflection Fluorescence Microscopy (TIRFM). The conversion of 1a into 1b normally requires excitation at wavelengths shorter than 450 nm, and is typically achieved using ultraviolet light. Herein, we used silver nanoplates (AgNP) in an attempt to establish whether the transformation shown in Figure 1 could instead be stimulated by excitation at 633 nm. Whereas compound 1a is transparent at this wavelength, AgNP show strong plasmon absorption ( Figure S1, Supporting Information). Furthermore, the transient absorption spectrum of 1b ( Figure S1) shows that 633 nm is also a suitable excitation wavelength to observe fluorescence. This design allows both activation (the transformation of 1a into 1b) and excitation (of 1b) to be conveniently achieved by the utilization of a single laser, simplifying the experimental setup and minimizing the associated cost. Our efforts were thus directed at the possibility to achieve fluorescence activation via the plasmon-mediated photochromic transformation of 1a, as a consequence of the visible excitation of the SPR band of silver nanoplates (rather than the organic chromophore), and to exploit this process to detect emissive molecules. Results show that this transformation is indeed possible, and suggest a non-linear process attributed to the plasmon-mediated enhancement of the performance of the photochromic transformation facilitated by the interaction of 1a with the strong plasmonic field. Results and Discussion TIRFM samples consisted of a thin polymer layer of poly(methyl methacrylate) (PMMA) doped with 1a and deposited over a monolayer of triangular shaped AgNPs coated on a glass coverslip. 
Scanning Electron Microscopy (SEM) of the synthesized AgNP ( Figure S2, Supporting Information) reveal triangular shaped nanocrystals with size distribution of 90 ± 9 nm. Relative to their spherical counterparts, triangular silver plates are known to generate stronger plasmonic fields, which are localized at the vertices of the prisms. [11] The formation of clusters of closely spaced nanoparticles can further enhance performance by constraining plasmon effects to the nanoscaled gaps between the vertices. [12] The AgNP monolayer was prepared by using APTES as a coupling agent. AFM images of the substrates show the presence of large AgNP agglomerates as well as complete surface coverage of the substrate ( Figure S3, Supporting Information). A solution of 1a in poly(methyl methacrylate) was then spin coated atop the functionalized slides in order to obtain polymer films approximately 100 nm thick. The construction and the preparation of the sample ensures that molecules dispersed atop the AgNP monolayer will be in close proximity to the enhanced field produced by excitation of the localized surface plasmon of AgNP, with the gaps between the densely packed nanostructures filled by polymer. [13] With full coverage of the slide with AgNP and a polymer film 100 nm thick, the enhanced field is expected to cover roughly 20% of the sample. [14] This is sufficient to ensure that most molecules of 1a will be sufficiently close to AgNP and within the field enhancement region. A fraction of the small population of 1b in thermal equilibrium with 1a will experience field enhancement. TIRFM imaging and analysis of a PMMA solution of 1a spin coated atop a AgNP monolayer after 633 nm excitation at full laser power density (83 W/cm 2 ) reveal a significant increase in the mean fluorescence intensity as well as in the number of detected events with respect to 1a imaged in the absence of the AgNP (see Supporting Video 1). For an immediate visual comparison, Figure 2 displays the contrast between two regions of a coverslip, where only one of them is functionalized with AgNP. Small populations of fluorescent molecules are evident even in the absence of AgNP, as a consequence of spontaneous thermal activation due to equilibrium between the closed (1a) and the open form (1b). The emission spectrum of such bursting events confirmed that they indeed correspond to emission from 1b, as shown in Figure 2c. AgNP monolayers on a glass coverslip are highly scattering under the selected experimental conditions. The background scattering from AgNP, measured by imaging PMMA spin coated atop the nanostructures, was systematically analysed at different excitation powers in order to evaluate possible interference in identifying actual events of interest. The temporal intensity profiles for representative background trajectories are shown in Figure S4 (Supporting Information). Although the average base intensity of each scattering trajectory may vary according to the non-uniform distribution of the NP on the glass surface, on the time scale of the experiment (100 s, Figure S4, Supporting Information) it remains approximately constant for a given laser power, and in no cases were bursting signals or sharp jumps in intensity observed. Indeed, analysis of residuals of the scattering profiles ( Figure S5, Supporting Information) show that the variance is normally distributed, which confirms the normality of the background signals over time. 
In contrast, analysis of the variance of the residuals for intensity trajectories extracted from TIRFM image sequences corresponding to PMMA films containing 1a (10 -6 mol/kg pol) spin coated atop AgNP shows numerous instances of substantial drift from linear behavior. This phenomenon is indicative of the presence of bursting events in an intensity trajectory. Since this method considers only the absolute drift in the variance, real bursting events are identified independent of their intensities, and can thus be detected even in cases where scattering from dense clusters of nanoparticles (NP) precludes the intensity-based identification of the emission of a few single molecules directly atop the NP. Being completely independent of relative profile intensities, this method relies upon the absolute drift in signal variance in order to identify trajectories containing valid bursting events, which are immediately detected, while scattering profiles are excluded from further analysis. To better illustrate the validity of the analysis, an illustrative example is represented in Figure S6 (for scattering) and Figure S7 (for a bursting event) of the Supporting Information. Indeed, the protocol described allows for rapid, streamlined identification of ROIs that contain potential bursting events by permitting a preliminary screening of thousands of trajectories in a fraction of the time that would be required for manual burst identification, and with significantly greater accuracy. However, manual visual examination of the individual traces making up the pool of trajectories remaining after the screening process was still necessary in order to identify trajectories containing multiple bursting events ( Figure S8a) and to confirm that each and every bursting event contained within these trajectories was valid. That is, trajectories affected by camera artefacts or the presence of dust particles on the optics or sample surface could potentially pass the automated screening test if any of these effects resulted in a substantial drift to the variance of the residual. Such trajectories were excluded from further analysis, and any apparent bursting events in which the entire duration of the burst was not observed from start to finish were also discarded ( Figure S8b and S8c, Supporting Information). Once the procedure described above had been applied, the actual bursting events were manually counted. The number of counted events as a function of the power of the exciting laser for compounds 1a and 2 in the presence of AgNP, as well as for compound 1a in the absence of the MNP, is illustrated in Figure 3 and Table S1 From the results illustrated in Figure 3 and Table S1 it is clear that the number of detected fluorescent bursts attributed to 1b in the presence of AgNP is significantly higher than the number observed for the compound imaged in the absence of AgNP. In principle, this remarkable behavior could be the result of: 1) nanoparticle-induced enhancement of the emission intensity of the small population of 1b present in equilibrium with 1a (i.e., events are detected more easily due to enhancement of the fluorescence); 2) thermally activated transformation of 1a to 1b via plasmonic heating, or 3) the ability of the AgNP to promote the formation of 1b upon excitation of the SP band through a two-photon process. In order to rule out the first hypothesis, we conducted control experiments by imaging a solution of compound 2 in PMMA spin coated atop AgNP. 
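A minimal sketch of the variance-based screening just described is given below, assuming that each 3 × 3 px region of interest has already been reduced to a background-subtracted mean-intensity trajectory and that the variance limit has been estimated from AgNP-only scattering controls. The function and variable names are illustrative, not the authors' actual implementation.

```python
import numpy as np

def residual_variance(trajectory):
    """Variance of the residuals after removing a linear baseline.

    trajectory: 1-D array of mean ROI intensities, one value per frame.
    Slow, roughly linear drift is tolerated; bursting events inflate the
    residual variance well beyond that of pure AgNP scattering.
    """
    frames = np.arange(len(trajectory))
    slope, intercept = np.polyfit(frames, trajectory, 1)   # linear baseline
    residuals = trajectory - (slope * frames + intercept)
    return float(np.var(residuals))

def screen_trajectories(trajectories, scattering_variance_limit):
    """Return the ROI ids whose residual variance exceeds the upper limit
    observed for scattering-only control films (potential bursting events)."""
    flagged = []
    for roi_id, trace in trajectories.items():
        if residual_variance(np.asarray(trace, dtype=float)) > scattering_variance_limit:
            flagged.append(roi_id)
    return flagged

# Usage sketch: 'all_rois' maps ROI ids to 100-frame intensity traces and
# 'limit' comes from the residual-variance distribution of blank AgNP films.
# candidates = screen_trajectories(all_rois, limit)
```

Trajectories flagged in this way would still require the manual inspection described above before bursts are counted, which keeps the comparison with the compound 2 control films discussed next on an equal footing.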
Compounds 2 and 1b share the same chromophore and thus the analysis of 2 atop AgNP provides reliable evidence of the expected enhancement for 1b atop AgNP at the same laser power (Figure S9, Supporting Information). However, the dependence of the number of events on the power of the exciting laser differs significantly from a sample comprising 1a atop AgNP in PMMA. Double logarithmic plots (Figure 4) show that the dependence of the number of bursting events on the excitation power for films of 1a in the presence of AgNP has a slope m of 2.0 (Figure 4a). In contrast, compound 2 spin coated atop AgNP (Figure 4c) shows a dependence closer to a slope m of 1, albeit with some curvature in the plot, similar also to the film doped with 1a in the absence of the nanostructures (Figure 4b). This behaviour indicates that only in the case of 1a@AgNP is there a clear non-linear dependence corresponding to a two-photon process, also justifying the use of a parabola for this specific system in Figure 3. These data demonstrate that the nanoplates promote the transformation of a significant population of 1a into 1b upon excitation of the SP band at 633 nm. The value of the slope together with the lack of absorbance of 1a at 633 nm (Figure S1, Supporting Information) shows that two-photon excitation at 633 nm, promoted by the stimulation of the surface plasmon band of AgNP at the same wavelength, is responsible for improving the performance of the photochromic transformation of 1a into 1b. Our second hypothesis above involves the possibility of thermal activation of 1a. The area in close proximity to the surface of MNP can reach high temperatures upon excitation of the plasmon band. [15] In order to evaluate whether the ring-opening reaction (1a → 1b) could be due to thermal stimulation, we prepared thin PMMA films of 1a deposited over coverslips functionalized with spherical gold nanoparticles (AuNP, 80 nm in diameter) and recorded TIRFM image sequences upon excitation of the AuNP plasmon band at 543 nm. Compound 1b also absorbs at 543 nm (Figure S1, Supporting Information) and thus can also be excited at this wavelength in order to detect its fluorescence. Due to the impracticality of physically heating the microscope objective at high temperatures without seriously damaging the equipment, the AuNP-based system was chosen as a reliable model to test whether the isomerization was photochemically or thermally driven. Indeed, the local field enhancement produced by spherical AuNP is significantly smaller than for AgNP with triangular geometry [11,12,16] and therefore a plasmon-assisted two-photon isomerization process would be unlikely. However, light-to-heat conversion remains highly effective. [17] It should be noted that the parameters for this study require careful selection of experimental conditions. The use of a different material (gold vs. silver), a different laser excitation wavelength, and different plasmonic properties are all important considerations in the quantification of the heating by gold and silver nanostructures. However, most of these parameters are readily available or easy to estimate (see Supporting Information). In this case we have concluded that 80 nm AuNP excited at 543 nm provide a reasonable comparison through which to evaluate the possible contribution of plasmonic heating to the 1a → 1b interconversion.
According to our calculations, the heat generated via excitation of spherical AuNP at 543 nm using 100% of the laser power is comparable to that generated by excitation of AgNP at 633 nm at 15.7% of excitation capacity (see Supporting Information). Using the power settings available at 633 nm, we detected 63 events for polymer films doped with 1a and spin coated atop AgNP at 14% of the total excitation power at λEx = 633 nm, and 160 events at 20% power (Table S1). If the experimental observations in Figure 4a resulted from significant thermal activation, then imaging of samples of 1a in PMMA spin coated atop AuNP should provide a number between 63 and 160 events. However, following the same protocol for counting bursting events described above, we detected only 9 events attributable to 1b atop AuNP. Therefore, the heat generated upon excitation of the SP of AgNP is insufficient for promoting the conversion of 1a into 1b. Having eliminated nanoparticle-induced enhancement of emission and thermal activation as possible causes for the light-induced 1a → 1b conversion, the only reasonable explanation remains that AgNP are able to promote the formation of 1b through a two-photon process mediated by laser excitation of the surface plasmon band of AgNP in the visible region where 1a has no absorption. Conclusions We have demonstrated that visible light excitation of the surface plasmon band of AgNP improves the performance of an activatable photochromic dyad that is transparent in the visible region. In the presence of AgNP, a photoinduced photochromic isomerization that normally requires ultraviolet irradiation (1a → 1b) can be performed with moderate excitation powers using a simple experimental setup. Results indicate that AgNP effectively promote two-photon excitation, and consequently enhance the fluorescence activation of the switchable probe. Our approach demonstrates that even with photochromic systems that do not display any photochemical activity under visible light illumination, the use of plasmon-enhanced fields can improve performance and allow real-time monitoring of the dynamic process using single-molecule fluorescence microscopy. Materials Silver nanoplates (AgNP) were prepared according to literature protocols. [4] Specifically, silver seeds were obtained by 5 min UVA illumination of an aerated solution of 0.2 mM Irgacure 2959, 0.2 mM AgNO3 and 1 mM trisodium citrate in a Luzchem photoreactor. The growth from nanoparticles to nanoplates was performed by illuminating an oxygenated solution of the seeds with 590 nm air-cooled LEDs for 24 h. SEM images were acquired with a JEOL JSM-7500F field emission scanning electron microscope. Gold nanoparticles (AuNP) were prepared with a seed-growth method according to a literature protocol. [31] Compounds 1a and 2 were synthesized following literature procedures. [16] Methods Solvents were purified with a LC Technology Solutions Inc. SPBT-1 Bench Top Solvent Purification System. Chemicals were purchased from Sigma-Aldrich or Fisher Scientific. All the reactions were monitored by thin-layer chromatography, using aluminum sheets coated with silica (60, F254). NMR spectra were recorded at room temperature with a Bruker Avance 300. Mass spectral analysis was performed with a 6890N Network GC System equipped with a 5973 Mass Selective Detector from Agilent Technologies. ESI mass spectra in positive mode were acquired with a Micromass Q-TOF I.
High-resolution EI mass spectra were acquired with a HRes, Concept S1, Magnetic Sector mass spectrometer and were conducted in the John L. Holmes Mass Spectrometry Facility at the Department of Chemistry and Biomolecular Sciences, University of Ottawa. Absorbance and emission spectra were recorded using a Cary 50 UV−Vis spectrophotometer and a PTI spectrofluorometer, respectively, using a quartz cuvette with a path length of 1 cm. Laser flash photolysis was performed with a Q-switched Nd:YAG-laser (355 nm, 10 mJ/pulse) in a LFP-111 system (Luzchem Research Inc., Ottawa, Canada), in 1×1 cm LFP-Luzchem cuvettes or glass slides (Fisher Scientific). The absorbance of the samples at 355 nm was ~0.3. Single-Molecule Imaging and Analysis Single-molecule fluorescence experiments were performed within thin poly(methyl methacrylate) (PMMA) films containing the photochromefluorophore assembly 1a deposited over AgNP-functionalized glass coverslips. All glassware (vials, pipets and coverslips) for single-molecule experiments were cleaned with piranha solution (H2O2:H2SO4 1:3) for 1 h and then rinsed thoroughly with MilliQ water. The slide surface was functionalized with aminopropyltriethoxysilane (APTES) by immersing the coverslips in a 2% v/v aqueous APTES solution for 2 h, then washing in an ultrasound bath with MilliQ water, and drying under a nitrogen flow. The nanoparticle monolayer was obtained by adding a drop of concentrated AgNP solution to the coverslip, rinsing after 1 h with water and then drying with nitrogen. Polymer films were spin coated (2500 rpm, 45 sec.) from a 1% w/w PMMA solution in acetonitrile containing 1a or 2 (10-6 mol kg -1 [pol]) onto the functionalized coverslips. Atomic Force Microscopy (AFM) imaging was performed under air using a Molecular Imaging PicoPlus AFM working in non-contact mode, using AFM probes from Budget Sensors (Tap150-G) with a nominal resonance frequency of 150 kHz (force constant of 5 N m -1 ). Fluorescence imaging was performed with an Olympus FV1000 TIRF microscope (Olympus, Japan) equipped with He-Ne CW lasers (633 nm, 05-LHP-991 and 543 nm, 05-LGP-193) and an EM-CCD (Rolera EM-C2, Q-Capture). The Olympus FV1000 is also coupled to a Fluorescence Lifetime Imaging (FLIM) system (MicroTime 200, PicoQuant, Germany). A beam splitter cube was used to reflect the excitation light into the oil immersion TIR (Total Internal Reflection) objective (100X, N.A. 1.45, Olympus, PLAPO). The fluorescence emission collected was in the spectral range of 655-725 nm. The laser power density at the sample was measured at 83 W/cm 2 at 100% of the laser power at lEx = 633 nm, and 26 W cm -2 at lEx = 543 nm. Each frame of the TIRFM image sequences recorded consists of a 501 × 502 pixel (px), 80 × 80 µm image with a pixel size of 159 nm. Fluorescence spectra of stochastic emission (bursting events) were recorded with the coupled FLIM system, equipped with a frequency doubled picosecond pulse diode laser (637 nm, 100 ps, 40 MHz, LDH-P-FA-530L, PicoQuant). The laser beam was collimated and focused through a fiber-coupling unit. A beam splitter Z638rdc (Chroma) was used to reflect the excitation light into the oil immersion TIR objective. The epi-fluorescent signal was passed through a 560 nm long pass filter and collected by a Shemrock SR-163 spectrograph (Andor Technology, South Windsor, USA). 
Analysis of TIRFM image sequences (100 frames/image sequence, integration time per frame = 999 ms) was carried out using a combination of ImageJ (NIH), MATLAB (MathWorks) and OriginLab software. In brief, 3 × 3 px regions of interest (ROIs) were selected based on the automated identification of stochastic emission representing the formation of 1b. After background subtraction was performed with ImageJ (rolling ball algorithm) bursting events were examined graphically. This was done by first using ImageJ to measure the mean fluorescence intensity inside each ROI for every frame in a 100 s image sequence. The data were tabulated and imported into OriginLab, where frame numbers were converted to units of time. Mean intensity was then plotted as a function of time to generate unique intensity profiles for every ROI. Graphical residual analysis was then performed in order to identify fluorescence bursting events, which are characterized by a variance of the signal residual exceeding the positive limit of the variance exhibited by a distribution of residuals representing scattering by AgNP only ( Figure S6; compare with S7, Supporting Information). The latter was obtained from TIRFM imaging of PMMA spin coated atop AgNP in the absence of 1a, 1b or 2. Further details on this protocol are available in the Supporting Information.
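Complementing the screening protocol just described, the power dependence of the counted bursts (Figure 4) can be quantified by a simple log-log regression, whose slope approximates the number of photons involved in the activation step. The sketch below uses synthetic placeholder data, not the counts reported in Table S1.

```python
import numpy as np

def photon_order(powers, counts):
    """Fit log10(counts) = m * log10(power) + b and return (m, b).

    A slope near 1 indicates a one-photon (linear) dependence, while a
    slope near 2 is the signature of a two-photon process, as observed
    for films of 1a atop AgNP."""
    log_p = np.log10(np.asarray(powers, dtype=float))
    log_c = np.log10(np.asarray(counts, dtype=float))
    m, b = np.polyfit(log_p, log_c, 1)
    return m, b

# Synthetic data consistent with a quadratic dependence, for illustration only
powers = [0.25, 0.50, 1.00]          # fraction of full laser power
counts = [250, 1000, 4000]           # number of counted bursting events
slope, intercept = photon_order(powers, counts)
print(f"log-log slope m = {slope:.2f}")   # ~2 for a two-photon process
```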
v3-fos-license
2022-11-04T19:23:48.063Z
2022-10-28T00:00:00.000
253274900
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1996-1944/15/21/7584/pdf?version=1667877862", "pdf_hash": "2dd84a5652aa7e521d6c829f7be0b31790bae420", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:279", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "d42c2495bda10eb633adaf93d746e965cc9a4fd0", "year": 2022 }
pes2o/s2orc
The First Examples of [3+2] Cycloadditions with the Participation of (E)-3,3,3-Tribromo-1-Nitroprop-1-Ene The first examples of [3+2] cycloaddition reactions involving 3,3,3-tribromo-1-nitroprop-1-ene (TBMN) were explored on the basis of experimental and theoretical approaches. It was found that reactions involving TBMN and diarylnitrones proceed with full regio- and stereoselectivity and lead to the respective 3,4-cis-4,5-trans-4-nitroisoxazolidines. The regioselectivity and the molecular mechanism of the title processes were analyzed on the basis of an advanced DFT computational study. Introduction It is generally known that the trihalomethyl CX3 (THM) functional group is an important element of many bioactive structures [1-3]. The possibility of the easy and selective introduction of this type of group into heterocyclic systems is offered by [3+2] cycloaddition reactions involving THM-functionalized alkenes [4-6]. The presence of both the THM and the nitro group on the same alkene system gives an attractive channel for the further functionalization of the target molecules [7]. Thus, THM-substituted conjugated nitroalkenes attract the attention of various research centers. In particular, 3,3,3-trichloro-1-nitroprop-1-ene was recently intensively tested as a component of different types of cycloadditions involving diazo compounds [8,9], nitrile N-oxides [10], and especially nitrones [11-14]. Regarding 3,3,3-trifluoro-1-nitroprop-1-ene, many interesting scientific works are also available at the present time [4-6,15,16]. In contrast, only a few incidental contributions regarding the physicochemistry of 3,3,3-tribromo-1-nitroprop-1-ene (TBMN) are available [17,18]. In particular, only one simple example of [4+2] cycloaddition with the participation of TBMN and simple aliphatic dienes is known [17]. No other examples of 32CAs of TBMN have been described to date. Due to the issues mentioned above, within this work we initiated systematic studies in the area of cycloaddition reactions with the participation of TBMN. In particular, we decided to explain the reaction course of model 32CAs between a series of Z-C-aryl-N-phenylnitrones (1a-g) and TBMN (2). In the framework of our research, we analyzed (i) regio- and stereochemical aspects of these reactions under experimental conditions and (ii) the mechanistic aspect of the observed reaction channels based on the results of a DFT computational study. We hope these studies will be helpful in understanding the role of the CBr3 group in the cycloaddition process and in enriching the knowledge of TBMN reactivity. Our research started from the reaction involving nitrone 1c. During the search for acceptable conditions of the cycloaddition process, we performed tests under different temperatures, molar ratios of reagents, and solvents, as well as reaction times. The reaction progress was checked using the HPLC system. It was found that the [3+2] cycloaddition proceeded easily at r.t. in benzene with a reagent molar ratio of 1:2. After 17 h the conversion of the addents was completed. HPLC analysis confirmed the presence of only one product in the postreaction mixture. This compound was isolated using crystallization from ethanol and identified on the basis of spectral techniques. In particular, absorption bands typical of the nitro group [20], C-Br bonds [21] and the isoxazolidine ring [13] were identified in the IR spectrum. Then, a high-resolution mass spectrum was obtained by the atmospheric pressure chemical ionization technique.
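The assignment of the protonated molecular ion discussed next can be cross-checked by a short monoisotopic mass calculation; the sketch below assumes the molecular formula reported for [M+H]+ and standard isotope masses, and neglects the electron mass.

```python
# Monoisotopic masses of the most abundant isotopes (u)
MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
        "O": 15.9949146221, "Br": 78.9183376}

def monoisotopic_mass(formula):
    """formula: dict mapping element symbol -> number of atoms."""
    return sum(MONO[element] * count for element, count in formula.items())

# [M+H]+ of the cycloadduct: C16H14N2O3Br3 (all-79Br isotopologue)
mz_calc = monoisotopic_mass({"C": 16, "H": 14, "N": 2, "O": 3, "Br": 3})
print(f"calculated m/z = {mz_calc:.4f}")   # ~518.855, close to the observed 518.8565
```

The three bromine atoms additionally give a characteristic, roughly 1:3:3:1 isotope cluster spaced by about 2 u, which provides a further qualitative check of the assignment.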
The APCI mass spectrum is characterized by protonated molecular ion and a few prominent fragment ions. The elemental composition of protonated molecular ion with m/z 518.8565 confirmed the molecular formula C16H14N2O3Br3. The major fragment ions (m/z values and molecular formulas) are summarized. One of the fragmentation peaks which corresponds to the ion formed by elimination of Our research started from the reaction involving nitrone 1c. During the search for acceptable conditions of the cycloaddition process, we performed tests under different temperatures, molar ratio of reagents, and solvents as well as reaction times. The reaction progress was checked using the HPLC system. It was found that the cycloaddition [3+2] proceeded easily at r.t. in benzene with the reagent molar ratio 1:2. After 17 h the conversion of addents was completed. HPLC analysis confirmed the presence of only one product in the postreaction mixture. This compound was isolated using crystallization from ethanol and identified on the basis of spectral techniques. In particular, absorption bands typical of the nitro group [20], C-Br bonds [21] and isoxazolidine ring [13] were identified in the IR spectrum. Then, we did a high resolution mass spectrum obtained by the atmospheric pressure chemical ionization technique. The APCI mass spectrum is characterized by protonated molecular ion and a few prominent fragment ions. The elemental composition of protonated molecular ion with m/z 518.8565 confirmed the molecular formula C 16 H 14 N 2 O 3 Br 3 . The major fragment ions (m/z values and molecular formulas) are summarized. One of the fragmentation peaks which corresponds to the ion formed by elimination of C 2 HOBr 3 allowed us to assume that the cycloaddition reaction may lead to isomers 3c or 4c (nitro group positioned in 4). Unfortunately, assignment of the stereochemistry is not possible based only on mass spectrometry analysis. Further data was obtained from the NMR spectra. In particular, the product obtained via reaction 1c+2 was selected as a model to determine the configuration of the obtained products. The HSQC spectrum shows a correlation between doublet of doublets signal of the proton with the carbon C(4) located at 95.8 ppm, while carbon C(3) is located at 74.8 ppm, and C(5) is at 89.5 ppm. The HMBC 1 H-13 C spectrum provides more information about the localization of the CBr 3 and phenyl groups. Both carbons C(3) and C(5) show the correlation with a proton from C(4), but only carbon C(3) shows the correlation with protons from the phenyl group. It was confirmed that the phenyl moiety is attached directly to C(3). On the other hand, the same spectrum shows only two correlations of carbon CBr 3 located at 37.5 ppm, with protons bonded with C(4) and C(5). Due to non existence of the third correlation, the CBr 3 moiety is located directly with carbon C(5). Therefore, the nitro group must be in the C(4) position. On the 1 HNMR spectrum of the considered structure, two doublets and one doublet of doublets are observed in the area characteristic for isoxazolidine ring protons. The HSQC spectrum shows very clearly, that doublet of doublets corresponding to one proton at 5.60 ppm with JH-H = 8.4 Hz and JH-H = 4.7 Hz belong to carbon C(4). The first doublet corresponds to one proton at 5.80 ppm correlated with C(5). 
Its J-coupling is equal to JH-H = 4.7 Hz, which is very characteristic of a trans relationship between two substituents in similar isoxazolidines, in this case between the nitro moiety at C(4) and the CBr3 group at C(5). The second doublet corresponds to one proton at 5.05 ppm correlated with C(3). Its J-coupling is equal to JH-H = 8.3 Hz and is characteristic of a cis relationship between substituents on the isoxazolidine ring. To summarize, the configuration of 3,4-cis-4,5-trans-2,3-diphenyl-4-nitro-5-tribromomethylisoxazolidine 4c can be assigned. In a similar way, we analyzed all reactions with nitrones 1a-g. In all cases, only the respective 3,4-cis-4,5-trans-2-phenyl-3-aryl-4-nitro-5-tribromomethylisoxazolidines were isolated as single cycloaddition products. It was established that, independently of the nature of the substituents in the nitrone molecule, these 32CAs with the participation of TBMN are realized with full regio- and stereoselectivity. It is interesting that similar 32CAs involving 1-nitroprop-1-ene proceed without full stereoselectivity and lead to a mixture of 3,4-cis and 3,4-trans isomers in a 5.7:1 ratio [22]. This suggests an important influence of the volume of the substituent at the β-position of the nitrovinyl moiety on the reaction stereoselectivity. The regioselectivity of the reactions can easily be explained in the framework of Molecular Electron Density Theory [23]. This approach has been successfully applied for the explanation of a number of bimolecular reactions [24-27], including [3+2] cycloadditions. It was found that the global electrophilicity of the nitroalkene 2 is equal to 3.23 eV. Within the unique electrophilicity scale, this component should be treated as an evidently electrophilic component. On the other hand, the global electrophilicity of benzonitrile N-oxide is equal to only 1.67 eV. This value is typical for moderate electrophiles. It should be noted here that the replacement of a hydrogen atom in the benzene moiety of the N-oxide with electron-donating groups stimulates a further decrease of the electrophilic properties. On the other hand, the introduction of an electron-withdrawing group stimulates an increase of the ω values. However, in all cases, 2 is a more electrophilic agent than the second component. So, the global properties of N-oxides 1a-g should be fully characterized by using global nucleophilicity indices. All compounds from this group exhibit global nucleophilicity in the range of 3.39-3.98 eV. In conclusion, all considered processes should be treated as evidently polar, and for the interpretation of their courses, the analysis of local nucleophile-electrophile interactions can be applied. In particular, it was found that the most nucleophilic reaction center within the >C=N(O)- moiety of all nitrones is always located on the oxygen atom (1.47-1.64 eV) (Table 1). On the other hand, the most electrophilic center at the nitrovinyl moiety of the nitroalkene is the β-carbon atom (0.71 eV). The interactions of the mentioned reaction centers must lead to the formation of the respective 4-nitroisoxazolidines. Finally, we explored the reaction profiles for a better understanding of the nature of the cycloaddition. For this purpose, the data obtained from the DFT wb97xd/6-311+G(d) (PCM) computational study was applied (Tables 2 and 3). A similar level of theory has been used for resolving the mechanistic aspects of different types of pseudocyclic organic reactions [28-30].
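Before turning to the reaction profiles, note that the global and local reactivity indices quoted above follow from simple arithmetic on quantities taken from the DFT output (frontier orbital energies, Parr functions, and atomic charges). The sketch below is illustrative only; the numerical inputs are hypothetical and are not the values computed in this work.

```python
def global_indices(e_homo, e_lumo, e_homo_tce):
    """Global electrophilicity (omega) and nucleophilicity (N), in eV, from
    frontier orbital energies given in eV; e_homo_tce is the HOMO energy of
    tetracyanoethene, the reference of the nucleophilicity scale."""
    mu = (e_homo + e_lumo) / 2.0       # electronic chemical potential
    eta = e_lumo - e_homo              # chemical hardness
    omega = mu ** 2 / (2.0 * eta)      # omega = mu^2 / (2 * eta)
    n_index = e_homo - e_homo_tce      # N = E_HOMO - E_HOMO(TCE)
    return omega, n_index

def local_indices(omega, n_index, parr_plus, parr_minus):
    """Local electrophilicity omega_k = P+_k * omega and local
    nucleophilicity N_k = P-_k * N from the Parr functions."""
    omega_k = {atom: p * omega for atom, p in parr_plus.items()}
    n_k = {atom: p * n_index for atom, p in parr_minus.items()}
    return omega_k, n_k

def gedt(charges_on_nitroalkene):
    """Global electron density transfer: sum of the net atomic charges (e)
    over the nitroalkene framework at the transition structure."""
    return sum(charges_on_nitroalkene)

# Hypothetical inputs, for illustration only
omega, N = global_indices(e_homo=-8.5, e_lumo=-2.6, e_homo_tce=-11.9)
omega_k, n_k = local_indices(omega, N,
                             parr_plus={"C-beta": 0.35, "C-alpha": 0.10},
                             parr_minus={"O": 0.45, "C": 0.20})
print(omega, N, omega_k, n_k)
```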
The DFT calculations for the model cycloaddition 1c+2 show that the enthalpy and Gibbs free energy factors favor the formation of the 3,4-cis-4,5-trans cycloadduct, which was isolated from the postreaction mixture. Next, we decided to examine the molecular mechanism of the formation of the heterocyclic ring on the experimentally observed reaction path. It was found that the conversion of the addents into the target molecule 4c is realized via two critical points. Firstly, the pre-reaction molecular complex (MC) is formed. This is accompanied by a reduction of the enthalpy of the reaction system by 9 kcal/mol. Within MC, both substructures adopt the orientation that determines the further direction of the new bond formation. However, no new sigma bonds are formed at this stage. Additionally, MC should not be considered as a charge-transfer complex because the GEDT index is equal to 0.00e. The subsequent transformation of the MC leads to the area of the transition state (TS). This stage requires an enthalpy of activation equal to 4.9 kcal/mol. Within TS, the key interatomic distances are essentially decreased, to 2.574 Å and 1.656 Å (C3-C4 and O1-C5, respectively) (Figure 1). Therefore, TS should be considered as evidently asynchronous. Additionally, it exhibits a clearly polar nature, which is confirmed by the great GEDT value (0.66e). The detailed analysis of the IRC trajectories shows, according to the Domingo and Ríos-Gutiérrez terminology [19], that the analyzed process should be interpreted as a "one step cycloaddition". Methods and Procedures 3.1. Experimental 3.1.1. Analytical Techniques HPLC analyses were done using a Knauer UV VIS detector (LiChrospher 18-RP 10 µm column, eluent: 80% methanol). Melting points were determined on a Boetius apparatus and are uncorrected. IR spectra were derived from the FTS Nicolet IS 10 spectrophotometer. 1H NMR spectra were recorded on an AV 400 Neo spectrometer or on a Bruker Avance III 600 spectrometer and are reported in ppm using deuterated solvent as an internal standard (CDCl3 at 7.26 ppm). Data were reported as s = singlet, d = doublet, dd = doublet of doublets, m = multiplet. 13C NMR spectra were recorded on an AV 400 Neo 101 MHz spectrometer or on a Bruker Avance III 151 MHz spectrometer and were reported in ppm using deuterated solvent as an internal standard (CDCl3 at 77.2 ppm). The 19F NMR spectrum was recorded on an AV 400 Neo spectrometer and reported in ppm using deuterated solvent (CDCl3) as an internal standard. High-resolution mass spectrometry (HRMS) measurements were performed using a Synapt G2-Si mass spectrometer (Waters, Milford, MA, USA) equipped with an atmospheric pressure chemical ionization (APCI) source and a quadrupole time-of-flight mass analyzer. The mass spectrometer was operated in the positive ion detection mode with a discharge current set at 4.0 µA. The heated capillary temperature was 350 °C. To ensure accurate mass measurements, data were collected in centroid mode and the mass was corrected during acquisition using leucine enkephalin solution as an external reference (LockSpray), which generated a reference ion at m/z 556.2771 Da ([M+H]+) in positive APCI mode. The results of the measurements were processed using MassLynx 4.1 software (Waters) incorporated with the instrument. Materials The components of the 32CA were synthesized in accordance with procedures described in the literature.
In particular, Z-C-aryl-N-phenylnitrones (1a-g) were prepared via condensation between N-phenylhydroxylamine and the respective arylaldehydes [31]. The 3,3,3-tribromo-1-nitroprop-1-ene (2) was obtained via a three-step method, starting from nitromethane and bromal (see Supplementary Material for the details) [17,18]. Commercially available (Sigma Aldrich, St. Louis, MO, USA) chemicals were used as solvents and components for the further synthesis of the addends for the considered 32CA. Cycloaddition between Z-C-aryl-N-phenylnitrones (1a-g) and TBMN (2) General Procedure A solution of 3,3,3-tribromo-1-nitroprop-1-ene (0.02 mol) and the appropriate nitrone (0.01 mol) in dry benzene (25 mL) was mixed at room temperature for 24 h. The post-reaction mixture was filtered, and the solvent was evaporated in vacuo. The isolation of the reaction products from the post-reaction mixtures was performed via crystallization from ethanol. Pure products were identified on the basis of HR-MS, IR and NMR spectral data. Computational Details The quantum-chemical calculations reported in this paper were performed using the wb97xd functional with the 6-311+G(d) basis set included in the GAUSSIAN 09 package [32]. Optimizations of the critical structures were performed with the Berny algorithm, whereas the transition states (TSs) were calculated using the QST2 procedure. Subsequently, TSs on the considered paths were localized via an alternative methodology by gradually changing the distance between the reaction centers (with optimization after each step). It should be underlined that in this way TSs identical to the previous ones were obtained. The localized critical points were successfully verified by frequency calculations. It was found that all reactants and products were characterized by positive Hessian matrices. Subsequently, all TSs showed only one negative eigenvalue in their diagonalized Hessian matrices. Next, their associated eigenvectors were confirmed to correspond to the motion along the reaction coordinate under consideration. For further verification of the TSs, IRC calculations were performed. The solvent effect on the reaction paths was included using the polarizable continuum model (PCM) [33]. Global electron density transfer between substructures (GEDT) [34] was calculated according to the equation GEDT = Σq_A, where q_A is the net charge and the sum is taken over all the atoms of the nitroalkene. The development of the new σ-bonds (l) was expressed as a function of the distance between the reaction centers in the transition structure (rTS_X-Y) and the same distance in the corresponding product (rP_X-Y) [35]: l_X-Y = 1 − (rTS_X-Y − rP_X-Y)/rP_X-Y. Electronic properties of the reactants were estimated according to the relationships recommended earlier [36,37]: ω = µ²/2η, µ ≈ (E_HOMO + E_LUMO)/2, η ≈ E_LUMO − E_HOMO. Global nucleophilicities (N) [38] were calculated using the equation N = E_HOMO − E_HOMO(tetracyanoethene). The local electrophilicity (ω_k) on the atom k was calculated using the index ω and the respective Parr function P+_k [39]: ω_k = P+_k·ω. The local nucleophilicity (N_k) on the atom k was calculated using the index N and the respective Parr function P−_k [39]: N_k = P−_k·N. Conclusions This experimental and theoretical study sheds light on the course of unique examples of [3+2] cycloaddition reactions with the participation of the very poorly known 3,3,3-tribromo-1-nitroprop-1-ene. In particular, the [3+2] cycloadditions between the mentioned nitroalkene and diarylnitrones are realized with full regio- and stereoselectivity and lead to the respective 3,4-cis-4,5-trans-4-nitroisoxazolidines.
So, the 32CA selectivity is fundamentally different from that of other similar processes with the participation of diarylnitrones and conjugated nitroalkenes. Additionally, the DFT wb97xd/6-311+G(d) (PCM) calculations show without any doubt that all explored processes should be considered as polar and are realized via a one-step mechanism. Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/ma15217584/s1, Table S1: The m/z values and molecular formulas of the major fragment ions in the APCI mass spectra of 4a-4g. Funding: This research received no external funding. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
v3-fos-license
2023-10-22T15:07:01.413Z
2023-10-19T00:00:00.000
264386519
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1021/acs.energyfuels.3c02652", "pdf_hash": "020a4156f1f10b856c43edc55cb38c46d8f393b5", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:280", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Materials Science" ], "sha1": "737ce33c520774884e5144750eddaeea7d32972a", "year": 2023 }
pes2o/s2orc
Influence of Long-Term CaO Storage Conditions on the Calcium Looping Thermochemical Reactivity Long-term storage capability is often claimed as one of the distinct advantages of the calcium looping process as a potential thermochemical energy storage system for integration into solar power plants. However, the influence of storage conditions on the looping performance has seldom been evaluated experimentally. The storage conditions must be carefully considered as any potential carbonation at the CaO storage tank would reduce the energy released during the subsequent carbonation, thereby penalizing the round-trip efficiency. From lab-scale to conceptual process engineering, this work considers the effects of storing solids at low temperatures (50–200 °C) in a CO2 atmosphere or at high temperatures (800 °C) in N2. Experimental results show that carbonation at temperatures below 200 °C is limited; thus, the solids could be stored during long times even in CO2. It is also demonstrated at the lab scale that the multicycle performance is not substantially altered by storing the solids at low temperatures (under CO2) or high temperatures (N2 atmosphere). From an overall process perspective, keeping solids at high temperatures leads to easier heat integration, a better plant efficiency (+2–4%), and a significantly higher energy density (+40–62%) than considering low-temperature storage. The smooth difference in the overall plant efficiency with the temperature suggests a proper long-term energy storage performance if adequate energy integration is carried out. ■ INTRODUCTION −10 TCES based on CaL has been typically considered to be integrated into concentrating solar power (CSP) due to the compatible charge and discharge temperatures. 11,12Solar radiation drives the endothermic decomposition of CaCO 3 into CaO and CO 2 in a solar reactor. 3,13The reaction products are separately conducted to storage reservoirs and then brought back together in the carbonator reactor on demand for energy production.The heat released in the reverse exothermic reaction is exploited in a power cycle (i.e., CO 2closed Brayton cycle) to produce electricity. 14,15The equilibrium temperature in a CO 2 atmosphere at 1 bar is ∼895 °C. 16Consequently, for achieving fast calcination in CO 2 , the reaction temperature has to be maintained above ∼950 °C. 17,18The carbonation reaction is normally proposed at ∼800−850 °C to ensure a rapid reaction and high thermoelectric efficiency. 14,19,20−30 Lately, there has been progress in the design of more efficient and potentially cost-effective plant schemes.−38 One of the key advantages of CaL as a TCES system is the possibility of long-term energy storage.Current state-of-the-art thermal storage (TES) technologies are mainly based on molten salts.In this case, the need to trace the system to maintain temperatures over 200 °C to avoid the solidification of the molten salts entails a substantial increase in costs. 27,39−46 Understanding the storage stage influence is crucial to optimize the whole process.An excessively high storage temperature of the reactants might deteriorate the CaO reactivity due to sintering. 18,47The atmosphere is also relevant because of the high reactivity of CaO with H 2 O and CO 2 .Thus, even small contents of moisture and CO 2 might partially convert CaO into Ca(OH) 2 or CaCO 3 during the storage period, thereby incurring undesired energy release during the storage step. 
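Whether CaO actually carbonates during storage at a given temperature and CO2 partial pressure is ultimately bounded by the CaCO3/CaO/CO2 equilibrium. A rough estimate can be obtained from a commonly used empirical correlation from the CaL literature; the pre-exponential and exponential constants below are literature values adopted as an assumption, not results of this work.

```python
import math

def p_eq_co2(temp_k):
    """Approximate equilibrium CO2 pressure (atm) over the CaO/CaCO3 system,
    using the widely quoted correlation P_eq = 4.137e7 * exp(-20474 / T)."""
    return 4.137e7 * math.exp(-20474.0 / temp_k)

def t_eq_celsius(pressure_atm):
    """Equilibrium temperature (deg C) at a given CO2 pressure, by inversion."""
    temp_k = -20474.0 / math.log(pressure_atm / 4.137e7)
    return temp_k - 273.15

print(f"P_eq at 850 C : {p_eq_co2(850.0 + 273.15):.2f} atm")
print(f"T_eq at 1 atm : {t_eq_celsius(1.0):.0f} C")   # ~895 C, as quoted above
```

At storage temperatures of 50-200 °C the equilibrium lies overwhelmingly on the carbonate side, so any protection against carbonation must come from kinetics rather than thermodynamics, which is precisely what the experiments below probe.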
The present work explores the impact of the storage conditions of the reactants on the multicycle activity of CaO for TCES. While previous research has evaluated the influence of using different storage temperatures for the CaO silos, there are no studies considering the influence on the CaO multicycle performance. Thus, the performance of limestone has been evaluated when a storage step is inserted between the calcination and carbonation cycles. Three different storage temperatures have been explored: 50 and 200 °C under CO 2 , and 800 °C under N 2 . This work also contemplates the influence of critical parameters such as temperature, atmosphere, particle size, and time. Finally, the impact of the storage temperature on the round-trip efficiency is also assessed from a process engineering perspective. ■ MATERIALS AND METHODS Materials. Limestone was provided by KSL Staubtechnik GmbH from the standard Eskal series. According to the supplier, the limestone has a CaCO 3 content of 99.1% but also contains 0.45% MgO, 0.25% SiO 2 , 0.1% Al 2 O 3 , and 0.04% Fe 2 O 3 as impurities. Samples with two well-defined particle sizes were used: 80 and 150 μm. The samples are referred to hereafter as C80 and C150, respectively. Particle size distribution (PSD) data are listed in Table 1, while the frequency distributions of the particle sizes for the samples are plotted in Figure 1. As can be seen, the samples present a narrow PSD. ■ EXPERIMENTAL METHODS The multicycle activity was studied in a Q5000 thermogravimetric analyzer developed by TA Instruments, equipped with a high-sensitivity balance (<0.1 μg). This instrument allows for high heating and cooling rates (∼300 °C/min) from room temperature to 1000 °C, using infrared halogen lamps that heat the silicon carbide furnace where the sample is placed. These high heating rates are necessary to simulate realistic Ca-looping conditions, in which the material undergoes calcination and carbonation under different atmospheres and temperatures. Two experimental schemes were used in this work. The first one comprises an initial heating step to a calcination temperature of 950 °C in a CO 2 atmosphere. The calcination temperature is maintained for 3 min. Then, the atmosphere is switched to N 2 and purged for 5 min to ensure the complete removal of CO 2 from the system. This is done to avoid carbonation while cooling to the storage step, in order to correctly assess the results and isolate the influence of CO 2 during the storage step. The system is cooled to the desired storage temperature (200 or 50 °C). Once the storage temperature is reached, the atmosphere is again switched to CO 2 and maintained for the planned storage time. To evaluate the influence of storage on subsequent carbonation stages, the sample is then heated in nitrogen to a carbonation temperature of 800 °C at 300 °C/min, at which point the atmosphere is changed to CO 2 . As an example, Figure 2 shows a scheme of the test involving storage in CO 2 at 50 °C. The second scheme represents a CSP CaL process scheme in which solids are stored at high temperatures to simplify the reactors' thermal integration.
15 The storage step is introduced at high temperatures in a N 2 atmosphere between the calcination and carbonation stages. The calcination is performed at 950 °C in CO 2 , and then the atmosphere is switched to N 2 . The system is then cooled to a storage temperature (800 °C), which is held for 1 h in this atmosphere. Then, to proceed with the multicycle tests, the atmosphere is changed to CO 2 and the carbonation is carried out at 850 °C for 5 min. CaO Conversion and Residual CaO Conversion. The conversion of CaO to CaCO 3 is the main parameter used in this work to evaluate the multicycle performance of the samples. The total CaO conversion (X T,N ) in each cycle is defined as the sum of the CaO conversion during storage (X sto ) (which is undesirable since the heat is not being released to the power cycle) and the conversion in the subsequent carbonation stage (X CaO,N ): X T,N = X sto,N + X CaO,N . Moreover, X T,N may be expressed as X T,N = [(m T,N − m N )/m N ]·(W CaO /W CO 2 ), where m T,N is the total sample mass after the storage step and the subsequent carbonation stage and m N is the sample mass after calcination (before the storage starts). W CaO and W CO 2 are the molar masses of CaO and CO 2 , respectively. The undesired conversion of CaO that would take place during the storage step is X sto,N = [(m sto − m N )/m N ]·(W CaO /W CO 2 ) (4), where m sto is the sample mass after the storage step at the Nth cycle. X sto considers the nondesired reaction of CaO with CO 2 during storage. The heat released during this step would be wasted. From the above equations, the CaO conversion in the carbonation stage of the cycles can be obtained once the values of X T,N and X sto,N are calculated: X CaO,N = X T,N − X sto,N (5). Energy Storage Density of the Calcined Material. The energy storage density (in GJ/m 3 ) of the calcined materials can be quantified from the energy density per mass, D m = m CO 2 ·ΔH R /m N (6). Here, m CO 2 is the CO 2 uptake during carbonation, computed from the CaO conversion data, ΔH R is the enthalpy of the reaction per kg of CO 2 (4045.5 kJ/kg CO 2 ), and m N is the sample mass after calcination. The volumetric energy storage density is then calculated from eq 7, D v = ρ·D m (7), where ρ is the density of the calcined material, assuming a porosity of 50% (for CaO, this results in a density of 1670 kg/m 3 ). Given this value, the maximum theoretical volumetric energy density for CaO would be 3.7 GJ/m 3 . ■ RESULTS AND DISCUSSION Effect of Storage in CO 2 at Different Temperatures. Optimum storage conditions for CaO are essential to ensure a proper overall plant performance. The storage temperature imposes constraints on the thermal energy integration of the reactors. Besides, the material's reactivity during the storage step should also be considered an essential parameter that could reduce the energy available to be released into the power cycle. Figure 3 shows the behavior during storage of CaO derived from the decomposition of C80 limestone particles at different temperatures under a CO 2 atmosphere. Thus, in these experiments, a freshly calcined sample is rapidly cooled down to a set storage temperature. Only when the desired temperature is reached is CO 2 injected into the system (as described in Figure 2). Almost immediately after CO 2 injection, the sample mass increases due to carbonation. Two carbonation stages are evident: a very fast reaction-controlled stage followed by a slower, diffusion-controlled carbonation stage.
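As a numerical illustration of the conversion and energy-density definitions above (eqs 4–7), the following sketch recomputes X T,N , X sto,N , X CaO,N and D v from a set of hypothetical TGA masses; the mass values are invented for illustration, and the 1670 kg/m 3 calcined-CaO density (50% porosity) is taken from the text:

W_CAO, W_CO2 = 56.08, 44.01        # molar masses, g/mol
DH_R = 4045.5e3                    # reaction enthalpy, J per kg of CO2 (from the text)
RHO_CAO = 1670.0                   # calcined CaO density at 50% porosity, kg/m3

def conversions(m_N, m_sto, m_T):
    # m_N: mass after calcination; m_sto: after storage; m_T: after carbonation (mg)
    X_T   = (m_T - m_N) / m_N * (W_CAO / W_CO2)
    X_sto = (m_sto - m_N) / m_N * (W_CAO / W_CO2)
    return X_T, X_sto, X_T - X_sto            # X_T, X_sto, X_CaO

def volumetric_energy_density(X_CaO):
    # eqs 6-7: CO2 uptake per unit calcined mass times reaction enthalpy, times density
    D_m = X_CaO * (W_CO2 / W_CAO) * DH_R      # J per kg of calcined solid
    return RHO_CAO * D_m / 1e9                # GJ/m3

# Hypothetical first-cycle masses (mg): 10.0 after calcination, 10.3 after storage,
# 15.2 after carbonation in the TGA
X_T, X_sto, X_CaO = conversions(10.0, 10.3, 15.2)
print(round(X_T, 2), round(X_sto, 2), round(X_CaO, 2))        # ~0.66, ~0.04, ~0.62
print(round(volumetric_energy_density(X_CaO), 2), 'GJ/m3')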
21,52,53The higher the temperature, the more significant the fraction of CaO that converts into CaCO 3 during the fast carbonation stage.Likewise, diffusioncontrolled carbonation is equally promoted at higher temperature.It is evident from Figure 3 that storing the solids under pure CO 2 at 600 and 400 °C is undoubtedly detrimental as the carbonation reaction is kinetically very favored at these temperatures.A storage temperature of 600 °C approaches the conditions used in CO 2 capture applications, where carbonation occurs at 650 °C under a less rich CO 2 atmosphere (∼10−15% vol.). 54,55On the other hand, it appears that at temperatures below 200 °C, the carbonation results in essentially similar values of CaO conversion (X sto = 0.04).Thus, cooling the CaO all the way down to room temperature might not be necessary to preserve the reactivity of the material for subsequent carbonation cycles.Consequently, storage at temperatures higher than 200 °C must be avoided if the material is stored under a pure CO 2 atmosphere.The storage temperature of 200 °C can be used as a proper comparison with the current molten salt storage temperature, 56 while the storage temperature at 50 °C is a reference for room temperature storage.Remarkably, in the case of CaO particles, there are no significant issues if particles are cooled from 200 °C, so there is no critical process limitation as in the case of molten salts. Effect of Storage Time in CO 2 .The storage of the reaction products is expected to last for at least some hours (if not days, as potentially occurred in TCES systems).Consequently, the time that CaO can be stored without activity loss in subsequent cycles is relevant to the overall process efficiency.Figure 4 shows the time evolution of the mass percentage gained during the storage step in CO 2 for C80 and C150 limestone particles.For this study, an experiment similar to that presented in Figure 2 was performed but with extended storage times.Two different storage temperatures were compared: 50 and 200 °C. As seen in Figure 4, the main part of the overall conversion occurs within the first 30−50 min.Moreover, about half of the total mass gain occurs during the first 6 s after CO 2 is injected, regardless of particle size.Noticeably, the observed behavior is similar to the carbonation profiles at higher temperatures: a fast controlled reaction phase lasts only a few seconds, followed by a slower diffusion-controlled reaction phase. 21,52,53It can be inferred from the experiments in Figure 4 that long-term storage is feasible as the loss of active material is very small due to the limited fast controlled reaction phase at 50 and 200 °C and the very slow diffusion-controlled carbonation kinetics.Note that after 1 week, the mass percentage gained is still lower than 2.2%, indicating that long-term storage at 50 and 200 °C is feasible even in a CO 2 atmosphere.Considering these results, a storage time of 60 min was used in the subsequent experiments since higher solids storage times would not substantially alter the obtained results. 
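The mass-gain percentages in Figure 4 map onto storage conversion through the same molar-mass ratio used above; as a quick worked check (illustrative numbers only, matching the ~2.2% one-week gain quoted in the text):

W_CAO, W_CO2 = 56.08, 44.01

def x_sto_from_mass_gain(gain_percent):
    # A mass gain of g% over the calcined mass corresponds to
    # X_sto = (g/100) * (W_CaO / W_CO2)
    return gain_percent / 100.0 * W_CAO / W_CO2

print(round(x_sto_from_mass_gain(2.2), 3))   # ~0.028 after one week at 50-200 C
print(round(x_sto_from_mass_gain(3.1), 3))   # ~0.04, the X_sto level cited for storage below 200 C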
Effect of the Storage Step on the Multicycle Performance. Figure 5a displays the time evolution of the mass along multiple calcination−carbonation cycles recorded for C80 particles; the multicycle scheme includes a 60 min storage step at 50 °C after each calcination stage. Figure 5b corresponds to a close view of the first cycle; it shows that the calcination temperature of 950 °C ensures a rapid conversion of CaCO 3 into CaO. A progressive deactivation of CaO toward carbonation as the number of cycles increases is evident in Figure 5a due to sintering, which is further promoted in the CO 2 -rich atmosphere because of the higher temperature required for calcination. 18,23 Figure 6 compares the multicycle performance in terms of CaO conversion (X CaO ) for C80 and C150. CaO conversions are calculated from the thermogravimetry experiments according to eq 5. Three different storage conditions were explored: (i) 50 °C in CO 2 , (ii) 200 °C in CO 2 , and (iii) 800 °C in N 2 . It can be readily concluded from the plots that the overall behavior of the materials does not depend on particle size in the range 80−150 μm. In all three operating conditions studied, the CaO conversions dropped from about 0.65−0.70 in the first cycle down to about 0.17 after 20 calcination and carbonation cycles. Table 3 lists the CaO conversion attained at the 1st and 20th cycles for C80 and C150 under the different experimental conditions. The conversion values obtained agree with previous reports for similar particle sizes. 57 These results demonstrate that the samples present a similar behavior regardless of the storage conditions implemented. Figure 7 compares the results obtained in terms of volumetric energy density, calculated as described in eqs 6 and 7. Red bars correspond to the values calculated using the CaO conversion during the carbonation stage, X CaO , that is, the energy that can be recovered. On the other hand, blue bars account for the fraction of the energy density wasted during the storage step. Both storage conditions in CO 2 (200 and 50 °C) yield good results in terms of the capability to preserve the CaO reactivity. As the reactivity of the material decreases over subsequent cycles, so does the fraction of material that reacts during the storage step. Thus, while non-negligible in the first few cycles, after 5 cycles the CaO becomes essentially unreactive at low temperatures and the energy lost during storage becomes negligible. Obviously, in the experiments testing storage in N 2 , there is no energy wasted during storage, as this step is implemented in an inert atmosphere. Table 4 includes the accumulated energy density, calculated as the sum of the energy density (D v ) of each of the 20 calcination/carbonation cycles performed. The energy density values obtained for C80 particles are roughly similar. On the other hand, C150 particles exhibit a worse long-term performance when stored in CO 2 at 200 °C.
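The accumulated values in Table 4 are simply the cycle-by-cycle sum of D v . The sketch below reproduces that bookkeeping, generating the conversion series with a commonly used empirical deactivation law of the Grasa-Abanades type; the decay parameters are standard literature values assumed for illustration, not constants fitted in this work, although they happen to reproduce the drop from ~0.70 to ~0.17 reported above:

W_CAO, W_CO2 = 56.08, 44.01
DH_R, RHO = 4045.5e3, 1670.0                 # J/kg CO2; kg/m3 (50% porosity)

def x_cao(N, k=0.52, X_r=0.075):
    # Grasa-Abanades-type multicycle decay (assumed parameters)
    return 1.0 / (1.0 / (1.0 - X_r) + k * N) + X_r

def d_v(X):
    # eqs 6-7: volumetric energy density of one cycle, GJ/m3
    return RHO * X * (W_CO2 / W_CAO) * DH_R / 1e9

series = [x_cao(N) for N in range(1, 21)]
print(round(series[0], 2), round(series[-1], 2))     # ~0.70 in cycle 1, ~0.16 in cycle 20
print(round(sum(d_v(X) for X in series), 1), 'GJ/m3 accumulated over 20 cycles')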
Considerations on the Industrial-Scale TCES Process Integration. This section broadens the focus of the study to assess how a given storage condition can affect the overall performance (net solar-to-electricity efficiency and energy density) of the thermochemical storage system on an industrial scale. Different process flow diagrams (PFDs) were compared, involving low (CaO storage at 50 °C) and high (800 °C) temperatures. The analysis is constructed upon the multicycle CaO conversion data reported in the previous section (Table 3). The multicycle CaO conversion was assumed to remain constant at its cycle-20 value. Considering the operation of the plant, if the conversion eventually dropped below that level, it could be compensated by introducing a fraction of fresh material (makeup). Furthermore, it was assumed that there is no significant change in multicycle CaO conversion when increasing or decreasing the storage time, as indicated in Section 3.2. The temperature in the solid storage vessels constrains the configuration of the process scheme and the efficiency, since a more complex heat exchanger network is required in cases with low-temperature solid storage. Table 5 summarizes the main assumptions for the CSP CaL schemes with low-temperature and high-temperature storage, which are taken from refs 13 and 15. Figure 8 shows a conceptual representation of these process schemes. Complete information about assumptions and process modeling can be found in the referenced papers since, for the comparison, the original configuration of each PFD is faithfully followed. Low-temperature storage allows energy storage without thermal losses (even for seasonal energy storage). Because of the high temperatures in both the calciner and the carbonator reactors, low-temperature storage involves a significant drop in the temperature of the materials throughout the cycle. It therefore requires an optimized heat integration to achieve adequate system efficiency. When high-temperature solid storage is considered, the process scheme is simplified by requiring fewer heat exchangers because of the lower temperature difference between the reactors and the storage. However, thermal losses increase at high temperatures, as do problems related to the material's high-temperature cohesion. 58 This is not a minor matter, as the increase in the cohesiveness of the material as the temperature increases 59 would negatively affect the fluidization of the material needed to extract it from the storage tank and complete the carbonation cycle. To improve this situation, some coatings, such as silica or titania, could be added to the particles to improve their flowability. 60 From a life cycle and environmental assessment perspective, there are no significant differences between the storage of solids at high or low temperatures. 43 Given the advantages and disadvantages of storing solids at high temperatures, the cycle performance analysis provides valuable information for the overall process design.
Both PFDs were designed and simulated under steady-state conditions. The different PFDs (Figure 8) were modeled from the process scheme and the assumptions indicated in each reference work. Solar-side losses were not considered for the comparison since all the schemes are based on the same particle receiver size and temperature. Thus, the net thermal-to-electric efficiency was compared for each case. As is typical in many previous studies, 2 this efficiency was calculated as a weighted average throughout the day, assuming 8 h of constant solar irradiation in the "sun mode" and 16 h without radiation in the "night mode". This involves a solar multiple (SM) of 3, the SM being the ratio of the receiver design thermal output to the power block design thermal input. Under this simplified approach, the efficiency of the plant was calculated according to eq 8: 19 η plant = (W ̇ net,sun ·Δt sun + W ̇ net,night ·Δt night )/(Q ̇ input ·Δt sun ) (8), where W ̇ net,sun and W ̇ net,night are the net power produced in "sun" and "night" modes, respectively, Δt sun = 8 h and Δt night = 16 h, and Q ̇ input is the net solar power entering the power plant. Equation 9 has been proposed to describe the overall energy storage capacity of the system and to provide a more realistic measure of the energy density of the overall storage system. 15 It is closely related to plant expenses and is a critical factor when accounting for the size of the vessels needed for both gas and solid storage. Reactors and heat exchangers are not included in the volumetric energy storage density since only the energy storage stage is considered. Sensible heat accounts for around 40% of the stored energy in the high-temperature storage scheme (Figure 8b). In eq 9, X is the conversion, ΔH R is the reaction enthalpy (GJ/kmol), c p,i is the specific heat of component i (MJ/kmol·K), T reactor is the decomposition reaction temperature (K), T i,vessel is the storage temperature of component i (K), υ i is the specific volume of component i at storage conditions (m 3 /kmol), ε i is the internal porosity of component i, and ϕ is the particle packing density, whose value is set to 0.6 as a standard value for the random loose packing fraction of irregularly shaped particles under gravity. The experimental results (Table 3) show that the multicycle CaO conversion does not vary significantly within the 80−150 μm particle size range. Regarding temperature, CaO conversion slightly increases when considering low-temperature (and prolonged) solids storage. Figure 9a illustrates the net thermal-to-electric efficiency for the two particle sizes and the two storage temperatures. Comparing the effect of temperature on the thermal-to-electric efficiency shows that higher efficiency is achieved at higher storage temperatures. High-temperature storage implies a more straightforward and efficient energy integration process. In any case, the difference in overall performance between the analyzed cases is slight (+2−4%), reinforcing the finding that the system suffers only a slight penalty when the material is stored at low temperature (which allows long-term energy storage), provided that adequate energy integration is carried out.
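The day-averaged efficiency of eq 8 is straightforward to evaluate once the net powers are known; the sketch below applies the 8 h sun / 16 h night split described above. The power values are placeholders for illustration, not results taken from the referenced flowsheets:

def daily_efficiency(w_net_sun, w_net_night, q_input, t_sun=8.0, t_night=16.0):
    # eq 8: electricity produced over the whole day divided by the solar heat
    # collected during the sun hours (all powers in consistent units, e.g. MW)
    return (w_net_sun * t_sun + w_net_night * t_night) / (q_input * t_sun)

# Placeholder figures: 100 MWth entering the plant during sun hours, 10 MWe net
# produced in sun mode and 12 MWe net at night while discharging the stored solids
print(round(daily_efficiency(10.0, 12.0, 100.0), 3))   # 0.34 here, illustrative only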
In addition to storing energy in the long term with a reduced energy penalty, storing solids at low temperatures presents a fundamental advantage regarding the flowability of the material. According to ref 58, when increasing the material storage temperature from 25 to 500 °C at a consolidation stress of 1500 Pa, the tensile strength increases from around 50 Pa to above 700 Pa, directly impacting the material flowability. Even for hourly-term storage, temperatures higher than 500 °C would turn the material into a very cohesive, non-flowing solid, 59 severely penalizing the process operation. Regarding the overall energy density (Figure 9b) estimated by eq 9, the value is greatly enhanced at high temperature owing to the contribution of sensible heat storage. Since the CaO conversions for high- and low-temperature storage are similar, there is little impact associated with the amount of solids to be stored (which would otherwise penalize the overall energy density). It is important to note that, for calculating the required solid storage volume, the packing factor has been considered independent of the temperature (a constant value of 0.6 61 ). However, lab-scale tests have shown that the packing density increases slightly with temperature. 59 ■ CONCLUSIONS This work assesses the influence of the implementation of a storage phase on the multicycle performance of CaCO 3 with different PSDs. The novelty of the study lies in its main objective: evaluating the influence of temperature, atmosphere, time, and particle size during the storage step. This allows for the assessment of the long-term energy storage performance, one of the key advantages of thermochemical energy storage systems. Limestone particles with sizes in the range of 80−150 μm were tested for multicycle performance using schemes implementing a storage phase. Three different storage conditions were tested: 200 °C (matching molten salt storage) and 50 °C (representing room-temperature storage) in a CO 2 atmosphere, and 800 °C in N 2 for industrial plant integration. Results show that storage at temperatures below 200 °C in CO 2 results in only very slight residual carbonation, mostly limited to the first few cycles. Thereafter, the decay in reactivity due to the progressive sintering of the material prevents the cycled CaO from reacting significantly during the storage phase. Storage time does not significantly impact the residual carbonation, as it mostly occurs within the first 5−10 min after CO 2 injection. For longer periods, the loss of active material becomes negligible. Thus, long-term storage at low temperatures appears to be viable even in a reactive atmosphere such as CO 2 . In terms of effective CaO conversion, the best performance was obtained for the C80 sample with a low-temperature storage step (at 50 °C, X CaO,20 = 0.126). However, the differences with respect to the other conditions tested are slight, leading to the conclusion that the conversion depends only weakly on the storage step.
From a process engineering perspective, storing solids at low temperatures can significantly improve their flowability, which could have a crucial effect on the overall process operation. Although a higher storage temperature facilitates energy integration and system efficiency, the analysis results show that the plant efficiency when storing solids at ambient temperature is only 2−4% lower than that obtained with high-temperature storage. These results confirm the potential of the CaL process as a long-term storage system. The lower energy density of the system when storing at low temperatures (losing the contribution of sensible heat storage) could be compensated by introducing a certain fraction of limestone makeup to enhance the average multicyclic conversion, which would considerably improve the energy storage density. The results obtained are of utmost importance because the CaL system's main advantage is the possibility of storing the products of the reaction in the long term. Understanding this stage is fundamental to a thorough knowledge of the CaL TCES system. Moreover, the results presented in this work could help in the development of more realistic engineering models.
Figure 2. Schematic diagram of the experimental procedure for evaluating storage tests. Blue and yellow indicate which experiment segments were carried out in N 2 or CO 2 atmospheres, respectively.
Figure 3. Time evolution of the storage conversion for calcined C80 samples, maintained in a CO 2 atmosphere at different temperatures.
Figure 4. Mass percentage gains as a function of time for samples tested at 50 and 200 °C in CO 2 . (a) C80 and (b) C150. The legend is shared by both graphs and indicates the temperature of the storage step. The red ellipse highlights the value for 1 week of storage.
Figure 5. (a) Time evolution of temperature and sample mass (C80 sample) recorded in the thermogravimetric analysis during multicycle calcination/carbonation tests using a 60 min storage step at 50 °C. (b) Close-up view of the first cycle. Calcination and carbonation were carried out in a CO 2 atmosphere for 5 min at 950 and 800 °C, respectively. Blue and yellow highlight the segments under N 2 or CO 2 atmospheres, respectively.
Figure 6. (a) Multicycle evolution of the CaO conversion, calculated from eqs 4 and 5, for C80 and C150. (b) Close-up of the last 10 cycles for C80 and C150. The legend is shared by both graphs. Unfilled symbols represent samples submitted to the 200 °C storage step, and cross-filled symbols represent samples submitted to the 50 °C storage step. Solid symbols represent CaO conversion when a storage step at a high temperature (800 °C) in N 2 is considered (5 min calcination and carbonation at 950 and 800 °C, respectively).
Figure 7. Volumetric energy density values as a function of the cycle number for C80 particles tested by including in the multicycle experiment a storage step at (a) 50 and (b) 200 °C in CO 2 and (c) 800 °C in N 2 . Values were calculated using eq 7.
Figure 8. PFDs evaluated: (a) storage at low temperature (based on ref 13) and (b) storage at high temperature (based on ref 15).
Figure 9. (a) Plant efficiency and (b) overall energy density as a function of the storage temperature and the particle size.
Table 2 summarizes the experimental conditions for the evaluation of the multicycle performance used in this work. The PSD was measured by laser diffraction using a Mastersizer 2000 from Malvern. The samples were sonicated for 30 min and dispersed in distilled water to avoid aggregation.
Table 1. PSD Parameters of the Two Limestone Samples. Dv(10), Dv(50), and Dv(90) indicate the percentiles, meaning that 10, 50, and 90% of the sample is smaller than the given size, respectively.
Table 2. Experimental Conditions for the Different Calcination/Carbonation Tests Carried Out in This Work.
Table 4. Accumulated Volumetric Energy Density of Limestone Samples with a Storage Step at 200 and 50 °C in CO 2 and 800 °C in an Inert Atmosphere (N 2 ).
Table 5. Main Assumptions in the CSP CaL Model.
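The Dv(10)/Dv(50)/Dv(90) percentiles quoted in Table 1 can be computed directly from a measured size distribution; a minimal sketch with a synthetic (hypothetical) distribution, noting that laser-diffraction Dv values are volume-weighted whereas this toy example is number-based:

import numpy as np

# Hypothetical particle diameters (um) standing in for a measured distribution
rng = np.random.default_rng(0)
diameters = rng.normal(loc=80.0, scale=8.0, size=5000)

dv10, dv50, dv90 = np.percentile(diameters, [10, 50, 90])
print(round(dv10, 1), round(dv50, 1), round(dv90, 1))
# A narrow PSD, as reported for the Eskal samples, shows Dv(10) and Dv(90)
# lying close to the median Dv(50).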
v3-fos-license
2016-11-01T19:18:48.349Z
2016-03-18T00:00:00.000
14579248
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1038/srep34705", "pdf_hash": "094745d409095618d44a5b1f7fb65001997787fd", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:284", "s2fieldsofstudy": [ "Medicine" ], "sha1": "094745d409095618d44a5b1f7fb65001997787fd", "year": 2016 }
pes2o/s2orc
Differential Canalograms Detect Outflow Changes from Trabecular Micro-Bypass Stents and Ab Interno Trabeculectomy Recently introduced microincisional glaucoma surgeries that enhance conventional outflow offer a favorable risk profile over traditional surgeries, but can be unpredictable. Two paramount challenges are the lack of an adequate training model for angle surgeries and the absence of an intraoperative quantification of surgical success. To address both, we developed an ex vivo training system and a differential, quantitative canalography method that uses slope-adjusted fluorescence intensities of two different chromophores to avoid quenching. We assessed outflow enhancement by trabecular micro-bypass (TMB) implantation or by ab interno trabeculectomy (AIT). In this porcine model, TMB resulted in an insignificant (p > 0.05) outflow increase of 13 ± 5%, 14 ± 8%, 9 ± 3%, and 24 ± 9% in the inferonasal, superonasal, superotemporal, and inferotemporal quadrant, respectively. AIT caused a 100 ± 50% (p = 0.002), 75 ± 28% (p = 0.002), 19 ± 8%, and 40 ± 21% increase in those quadrants. The direct gonioscopy and tactile feedback provided a surgical experience that was very similar to that in human patients. Despite the more narrow and discontinuous circumferential drainage elements in the pig, with potential for underperformance or partial stent obstruction, unequivocal patterns of focal outflow enhancement by TMB were seen in this training model. AIT achieved extensive access to outflow pathways beyond the surgical site itself. The trabecular meshwork micro-bypass stent (TMB, iStent ® G1, Glaukos Corporation, Laguna Hills, CA, USA) and trabectome-mediated ab interno trabeculectomy (AIT, trabectome, Neomedix, Tustin, California, USA) are the only MIGS approved by the Food and Drug Administration of the United States (USFDA). The TMB is a heparin-coated titanium stent measuring 1 × 0.3 mm that is inserted through the trabecular meshwork (TM) into Schlemm's canal 6 . AIT is a plasma surgery ablation technique that uses a 550 kHz bipolar electrode tip to remove the TM 1 . Ablation is performed over 90 to 180 degrees, allowing it to tap into several drainage segments in comparison to single-access MIGS procedures 7 . The main shortcoming of both procedures is that they are relatively difficult to master and that surgical success of bypassing or removing the TM depends on a functioning downstream collector channel system. There is currently no method to choose the surgical site based on where areas of reduced flow are located. Similarly, there is no way to precisely locate where the remaining outflow resistance resides in patients in whom outflow enhancement fails 3 despite an otherwise correct surgical technique that should have eliminated the primary resistance of the TM 8,9 . Fellman et al. described that, upon careful observation, a successful TM ablation correlates with a change of the episcleral fluid wave and with the outcomes of AIT 10 . Past studies have quantified the bulk outflow of aqueous humor 11 or modeled focal outflow mathematically 12 . Tracers, such as cationized ferritin 13 and fluorescent beads 14,15 , allow areas of high flow through the TM to be highlighted, but either have cytotoxic effects 16 or do not easily permit the examination of elements downstream of the TM. The purpose of this study was to develop a well calibrated, quantitative, differential canalography technique to assess conventional outflow enhancement. We hypothesized that this method could highlight outflow enhancement obtained by TMB and AIT.
We developed a MIGS training model using enucleated pig eyes. Results Canalograms could be obtained in 41 out of 42 eyes (Fig. 1). Collector channels of the outflow network could be readily visualized using a new two-dye reperfusion technique. The initial filling times for fluorescein (FU) and Texas red (TR) were measured in pilot eyes to determine the proper dye sequence. These times were not significantly different (p = 0.06, n = 12). A normalization coefficient corrected TR values to match FU at select time points, thereby allowing the comparison of flow rates before and after each intervention. FU demonstrated an average increase of 56 ± 8% fluorescence units compared to TR over 4 quadrants ( Supplementary Fig. S1). Eight eyes were used here to achieve > 80% power (α = 0.05, two-tailed). Significant differences existed in the inferonasal (IN; p = 0.028), superonasal (SN; p = 0.048), and superotemporal (ST) quadrants (p = 0.040). The chromophore fluorescence intensities in the perilimbal region graphed over 15 minutes for each dye showed relatively linear increases in intensity over time as the dyes crossed the TM and the downstream outflow tract ( Fig. 2A). The intensity slopes of TR and FU were characteristic for each dye. TR had a lower peak intensity than FU. Dyes did not exhibit chromophore quenching at the concentrations used in our experiments (Fig. 2B). The angle of porcine eyes could be readily visualized by gonioscopy as done in human patients (Fig. 3). TMB implantation proceeded under gonioscopic view using the standard inserter but without viscoelastic (Fig. 3A). AIT could be performed in similar fashion and with tactile feedback that matches human eyes (Fig. 3B). Histology of the angle showed the characteristic pectinate ligaments and the large, wedge-shaped TM of porcine eyes as well as multiple Schlemm's canal-like structures in variable locations (Fig. 4A). TMB implantation presented as a single lumen that bypassed and displaced the TM and created connections Schlemm's canal-like structures (Fig. 4B). In contrast, AIT caused a near complete removal of the trabecular meshwork and direct connection to Schlemm's canal-like structures (Fig. 4C). Thermal, coagulative damage to surrounding structures was mostly absent. Time lapses of differential canalograms revealed two different outflow patterns. TMB implantation (Fig. 5A) showed a more focal outflow pattern that extended primarily radially from the location of insertion. The stented quadrant was also usually the first quadrant to show dye filling. AIT eyes (Fig. 5D) had increased flow most commonly beginning in the IN or SN, which then extended circumferentially toward centrifugal collector channels. Discussion The trabecular meshwork, a complex sieve-like tissue that permits fluid passage by giant vacuoles, variable pores, and transcytosis 17 , has long been considered to be the principal cause of decreased outflow in primary open angle glaucoma 9 with most of the resistance thought to be residing in the juxtacanalicular tissue or the inner wall of Schlemm's canal 18,19 . However, more recent experimental 20 and clinical 21 evidence suggests that a large portion of this resistance is located further downstream. Disruption 22 or ablation of TM 4,21 would be expected to achieve an intraocular pressure close to that of episcleral venous pressure around 8 mmHg 23 but this is rarely the case 3 and failure rates vary considerably from study to study 1,3,21,24 . 
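The 56 ± 8% figure above comes from comparing FU and TR fluorescence quadrant by quadrant in control eyes at matched time points; a minimal sketch of that comparison with invented intensity values (not data from this study):

import numpy as np
from scipy import stats

# Hypothetical per-eye fluorescence units for one quadrant of 8 control eyes,
# FU and TR both read at the half-maximum FU frame
fu = np.array([410.0, 385.0, 430.0, 398.0, 442.0, 405.0, 390.0, 420.0])
tr = np.array([265.0, 248.0, 270.0, 255.0, 281.0, 262.0, 251.0, 268.0])

pct_increase = 100.0 * (fu - tr) / tr
t_stat, p_val = stats.ttest_rel(fu, tr)      # paired, two-tailed comparison
print(round(pct_increase.mean(), 1), round(p_val, 4))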
Although the procedures discussed here are considered minimally invasive, they are difficult to learn because they are performed on a scale that is approximately 200-fold smaller than that of traditional glaucoma surgery. Maintaining visualization during these procedures is difficult as they require concurrent movement of a surgical goniolens in one hand and MIGS applicators or ablation hand pieces in the other. Because the target tissue, the trabecular meshwork, is in very close proximity to highly vulnerable and well-vascularized structures such as the ciliary body band and iris root, novice surgeons can produce serious complications without adequate practice. To address the absence of a microincisional glaucoma surgery model, we created an ex vivo porcine eye system 25 that can also be used to quantify how much outflow improvement was achieved by the trainee using a dye infusion technique 25,26 . We found that pig eyes provide a highly realistic and inexpensive practice environment that can serve to hone skills before first surgeries in patients. In our experience, human donor eyes rejected for use in corneal transplantation do not provide sufficient corneal clarity for learning or mastering angle surgery, and are too expensive to allow large numbers. This training model is a powerful preparation tool and does not have to be limited to the procedures discussed here but can also help to master scaffold devices 27 , ab interno sub-Tenon stents 28 , or suprachoroidal shunts 29 , as we can confirm. Porcine eyes are well suited for this model because they share many features that are similar to human eyes but are lacking in other non-primate eyes 30 : overall size and shape are comparable; they have a large, wedge-shaped TM that allows TMB and AIT to be performed under the required, direct gonioscopic visualization 1 ; possess circumferential drainage segments within the angular aqueous plexus 31 that are considered analogous to Schlemm's canal 30,32,33 . A quantitative morphometric analysis of the human versus the porcine outflow tract highlighted the anatomic similarities: human eyes have a Schlemm's canal with depths varying from 10-25 microns and widths of 200 to 400 microns, while the angular aqueous plexus segments of porcine eyes measure 5 to 30 microns deep and 15-150 microns wide 30 . As we recently described, the Schlemm's canal like segments in the pig are typically functionally connected and allow circumferential flow 25 . The porcine outflow 34,35 tract displays biochemical glaucoma markers 32 and giant vacuoles 36 as seen in human eyes. In order to provide a technique to compare local outflow enhancement from TMB and AIT, we developed a two dye perfusion technique that allows to compare pre-and postsurgical function. The trabecular meshwork and the outer wall of Schlemm's canal are impermeable to many larger molecules or particles but can be easily passed by water-soluble fluorescent dyes. We selected the organic fluorophores TR and FU because they are readily available and have a very favorable toxicity profile 37 . Spectral domain optical coherence tomography has also recently been used to visualize the aqueous spaces of Schlemm's canal, the collector channels 38 , and the intrascleral venous network 15 yet this method does not allow to determine actual flow. Gold nanorods can be used as a Doppler contrast agent to estimate flow 16 but this method is limited by inflammation 39 . 
We had to use a second dye that is different from the first one in postsurgical canalograms because molecules that can flow through the TM may also eventually diffuse into the interstitial space after some time and wash out incompletely. To ensure a valid comparison of pre-and postprocedural canalograms, we thoroughly tested both dyes to account for their characteristic properties. A 19% delay in TR initial filling time in pilot experiments was sufficient evidence for using FU followed by TR for all of the eyes. Choosing this order avoided false positive flow enhancement after the surgical procedures. TR has a molecular weight of 625 g·mol −1 , almost twice of the molecular weight of FU (332 g·mol −1 ). The two dyes revealed different baseline fluorescence intensities and slope magnitudes, thus resulting in variations in fluorescent intensities at similar time points within both time lapses. Although the relationship between dye concentration and fluorescence intensity is initially linear 40 , very high concentrations of these dyes can result in a decrease of fluorescence intensity as a result of dynamic quenching, an effect described by the Stern-Volmer equation in which excited chromophore molecules will interact with each other and lose energy through processes other than fluorescent emission 41 . Our testing of logarithmic concentrations of these dyes and measuring their emitted fluorescence through ImageJ ensured that the concentrations used for FU and TR here did not exhibit dynamic quenching. A normalization coefficient corrected TR values to match FU thereby allowing a direct comparison of flow rates before and after each intervention 42,43 . The purpose of this study was to developed an ex vivo training system that allows new surgeons to practice in a safe and highly realistic environment. We also wanted to create a differential, quantitative canalography method that uses slope-adjusted fluorescence intensities (to avoid quenching seen at high intensities) of two different chromophores to better highlight outflow enhancements obtained by TMB and AIT. Although outflow patterns of the two MIGS modalities examined here are different, it is important to recall that circumferential flow is more restricted in this pig eye model due to segmentation of the perilimbal outflow tract and that these results do not represent the performance in human eyes. The TMB is designed for insertion into a single lumen human Schlemm's canal while in the pig placements may be more commonly imperfect and only partially into Schlemm's canal like segments. Another limitation concerns the size of the aqueous plexus segments themselves. These segments can have a smaller width than the lumen of a singular human Schlemm's canal 30 . It is possible that the intracanalicular portion of the TMB stents may at least be partially obstructed by tissue between segments of the aqueous plexus or the outer wall itself, a challenge that can be observed in histological sections of human eyes as well 1 . Similarly, although TMBs were implanted under direct visualization with confirmed placement, small, unintended variations in technique may impact placement and performance in this porcine model more than in human eyes. Therefore, conclusions about the clinical performance of either TMB or AIT in human eyes based on the results presented here cannot be extrapolated. The amount of circumferential flow likely correlates with the number of drainage segments that a procedure can access effectively. 
A single point of access to the outflow tract, as delivered by TMB, is thought to enable flow over approximately 60 degrees in human eyes 7 and might be considerably less here. In contrast, AIT can ablate TM over up to 180 degrees in experienced hands thereby providing access to 180 plus 60 degrees, totaling to 240 degrees of outflow segments 1 . We limited ablation in this study to 90 degrees of the nasal angle because this amount is achievable by most surgeons with ease. It is important to keep in mind that despite differences in amount of angle access and high versus low immediately achieved outflow enhancement, little is known about long-term changes of the outflow system or wound healing and foreign body reaction to microimplants. In conclusion, we present an ex vivo training model for microincisional glaucoma surgery in pig eyes. We introduce a new differential canalogram technique that controls for dynamic quenching of two different chromophores, and allows for quantification of outflow enhancement after trabectome-mediated ab interno trabeculectomy and trabecular micro-bypass in this species. Methods Preparation and Pre-Perfusion of the Eyes. Correctly paired, enucleated porcine eyes from a local abattoir were processed within two hours of sacrifice. Each eye was identified as left or right and copiously irrigated with phosphate buffered saline (PBS, Thermo Fisher Scientific, Waltham, MA). Extraocular tissues were trimmed to the length of the globe. Eyes were placed on cryogenic vial cups (CryoElite Cryogenic Vial #W985100, Wheaton Science Products, Millville, NJ) to encompass the optic nerve in a compression-free mount. Six eyes were first perfused with FU (AK-FLUOR 10%, Fluorescein injection, USP, 100 mg/ml, NDC 17478-253-10, Akorn, Lake Forest, IL) followed by TR (sulforhodamine 101 acid chloride, 10 mg, Alfa Aesar, Ward Hill, MA) to establish whether the order of the dyes would affect the perfusion rate or the intensity of fluorescence; another six eyes underwent the reverse order. Initial filling time, recorded as the time at which the dyes could first be observed entering the proximal outflow tract structures (the perilimbal regions), were recorded for each eye quadrant using ImageJ (ImageJ 1.50b, http://imagej.nih.gov/ij, Wayne Rasband, National Institutes of Health, Bethesda, MD). Eight further control eyes (4 left, 4 right) were run with FU and TR with no intervention in between. After FU and TR time lapse analyses were performed at each eye's respective half-maximum FU intensity frame, fluorescence units were compared in all four quadrants. Next, FU and TR were sequentially perfused in a single eye with time lapses (CellSens, Olympus Life Science, Tokyo, Japan) taken as described below. Raw fluorescent intensities of the perilimbal flow patterns were collected every minute for a total of 15 minutes for each dye. When the canalograms of the control eyes revealed consistency between the two chromophores, a normalization coefficient (c = 1.56) was computed to adjust TR to match FU at a single relative time point in each time lapse pair, allowing for a comparison of flow rates before and after each intervention. We then determined the fluorescent intensities for FU and TR in each eye at the same relative time points of half-maximum fluorescence of FU as measured in ImageJ. This kept the time factor constant and allowed to compare outflow rates per quadrant in microliters per minute. 
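A minimal sketch of the two analysis steps just described, selecting the half-maximum fluorescein frame and rescaling Texas red by the normalization coefficient c = 1.56; the intensity traces are invented for illustration:

import numpy as np

C_NORM = 1.56   # normalization coefficient reported above (TR rescaled onto the FU scale)

def half_max_frame(fu_trace):
    # Index of the frame whose perilimbal FU intensity is closest to half the maximum
    fu_trace = np.asarray(fu_trace, dtype=float)
    return int(np.argmin(np.abs(fu_trace - fu_trace.max() / 2.0)))

# Hypothetical perilimbal intensity traces (one value per time-lapse frame)
fu_trace = np.linspace(20.0, 800.0, 45)      # pre-surgical fluorescein
tr_trace = np.linspace(15.0, 520.0, 45)      # post-surgical Texas red, same eye

idx = half_max_frame(fu_trace)
ratio = C_NORM * tr_trace[idx] / fu_trace[idx]
print(idx, round(ratio, 2))   # a ratio above 1 would indicate enhanced outflow after surgery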
For this computation, the aqueous humor flow rate of three microliters per minute of porcine eyes was divided by the relative fluorescence of each quadrant measured in control eyes and used as the baseline to compare postsurgical flow. Differential Canalograms. The fluorescent tracer reperfusion technique was used in 41 eyes to quantify outflow changes from TMB implantation or AIT. Whole pig eyes were prepared, mounted, and pre-perfused with DMEM as described above. Fluorescein was then gravity-infused at a concentration of 0.0332 mg/ml in clear DMEM for 15 minutes. The chromophore flow pattern was recorded as a time lapse using a stereo dissecting microscope (Olympus SZX16 and DP80 Monochrome/Color Camera; Olympus Corp., Center Valley, PA) equipped for fluorescent imaging (Chroma 49002 GFP cube and Chroma 49004 DSRED cube, Chroma Technology Corp, Bellow Falls, VT). En face images were obtained every 20 seconds at 580 × 610 pixel resolution with 2 × 2 binning and 14 bit depth (CellSens, Olympus Life Science, Tokyo, Japan). FU infusion was then stopped and the needle removed. A surgeon experienced in microincisional glaucoma surgery (NAL) performed all AITs and TMB implantations as described below (Fig. 3). The temporal, clear corneal incision was sealed in a watertight fashion using cyanoacrylate. Clear DMEM containing TR at a concentration of 0.28 mg/ml was infused for 15 minutes with a time lapse recorded. At conclusion, the eyes were processed for histology. TMB eyes were marked at the site of implantation and the stent was removed. All eyes were rinsed in PBS, hemisected, and fixed with 4% paraformaldehyde at room temperature followed by PBS for 48 hours before being placed in 70% ethanol. A corneoscleral wedge from the surgical site was paraffin-embedded for histological processing, cut at 10 μ m thickness, and stained with hematoxylin and eosin (H&E). Trabecular Meshwork Bypass Implantation Technique. Sixteen pig eyes underwent TMB implantation (iStent, Glaukos Corporation, Laguna Hills, CA, USA). The surgical technique was analogous to that used in human eyes 45 . With the temporal side of the eye facing the surgeon, a clear corneal incision was created 2 mm anterior to the temporal limbus with a 1.8 mm keratome. The loaded TMB applicator was inserted into the anterior chamber and advanced toward the TM. The stent was driven through the TM using a gentle sweeping motion. Only the proximal end of the stent remained visible in the anterior chamber (Fig. 3A). The stent was ejected from the applicator and the applicator tip was removed. A drop of cyanoacrylate was used to seal the temporal, clear corneal incision. Ab Interno Trabeculectomy Technique. After infusion with FU, 17 eyes underwent AIT, performed analogous to AIT in human eyes 1 . Eyes were positioned under a surgical microscope with the temporal side directed toward the surgeon. A 1.8 mm keratome was used to create a clear corneal incision 2 mm anterior to the temporal limbus. The inner third was slightly flared to improve mobility and eliminate striae from torque. The eyes were then tilted by 30 degrees toward the nasal side and a goniolens (Goniolens ONT-L, #600010, NeoMedix Inc., Tustin, CA) was placed on the cornea to visualize the nasal chamber angle. The tip of the trabectome handpiece was inserted into the anterior chamber with constant irrigation, and gentle goniosynechiolysis with the smooth base plate was performed to disinsert pectinate ligaments (Fig. 3B). 
The TM was engaged and Schlemm's canal entered with a left and upward movement. TM ablation at 1.1 mW ensued toward the left for 45 degrees with appropriate rotation of the goniolens to maintain visualization. The tip was then disengaged from the TM, rotated 180 degrees within the eye, and positioned at the original starting location. A 45 degree ablation was performed towards the right. The handpiece was removed, and the temporal, clear corneal incision closed watertight with a drop of cyanoacrylate. Time Lapse Analysis. Individual time lapses with FU and TR were analyzed using ImageJ software 46,47 . For each fluorescein time lapse, the half-maximum perilimbal fluorescence was calculated, and the frame containing perilimbal fluorescence that best matched that value was selected for analysis. The same frame number was used in the corresponding Texas Red time lapse for each eye. A masked observer measured raw fluorescence intensities from these frames by outlining the quadrants containing fluorescent outflow channels. Each pig eye was divided into four quadrants: inferonasal (IN), superonasal (SN), superotemporal (ST), and inferotemporal (IT). Quadrant outlines began at the limbus of each quadrant to exclude quantification of fluorescence of the dye in the anterior chamber. Statistics. Student's paired two-sample t-test was used to compare outflow changes in the same eyes before and after each intervention. The unpaired t-test was utilized to detect any significant differences between right and left eyes. A sample size of at least seven eyes was calculated as the minimum number of control eyes needed to detect a significant difference with the two-dye reperfusion technique with 80% statistical power and an alpha error of 0.05 in a two-tailed matched-pair comparison. Similarly, pilot data in the experimental groups indicated a sample size of at least 16 eyes in each group to reliably detect significant changes before and after each intervention using the same test parameters for power and alpha error.
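The sample-size estimate and the per-quadrant flow conversion described above can be reproduced as follows. The effect size and the fluorescence values are placeholders (they are not reported directly in the text), and the proportional apportionment of the 3 μL/min bulk flow is one plausible reading of the Methods, so treat this only as a sketch:

import numpy as np
from statsmodels.stats.power import TTestPower

# Paired, two-tailed design: solve for the number of eyes at 80% power, alpha = 0.05,
# assuming a large standardized paired effect (Cohen's d = 1.2, an assumption here)
n_needed = TTestPower().solve_power(effect_size=1.2, alpha=0.05, power=0.8,
                                    alternative='two-sided')
print(int(np.ceil(n_needed)))    # on the order of 7-8 eyes for this assumed effect

# Converting quadrant fluorescence into an outflow estimate: the ~3 uL/min bulk
# aqueous flow of the pig eye is apportioned by relative quadrant fluorescence
quadrant_fluorescence = {'IN': 320.0, 'SN': 280.0, 'ST': 210.0, 'IT': 190.0}  # made-up values
total = sum(quadrant_fluorescence.values())
flow_ul_min = {q: round(3.0 * f / total, 2) for q, f in quadrant_fluorescence.items()}
print(flow_ul_min)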
v3-fos-license
2018-04-03T03:17:55.570Z
2017-03-27T00:00:00.000
2380875
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0174206&type=printable", "pdf_hash": "863dd690a1c67cea0a5a50b47b3603946fdac36b", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:285", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "863dd690a1c67cea0a5a50b47b3603946fdac36b", "year": 2017 }
pes2o/s2orc
A tissue-specific role for intraflagellar transport genes during craniofacial development Primary cilia are nearly ubiquitous cellular projections that function to transduce molecular signals during development. Loss of functional primary cilia has a particularly profound effect on the developing craniofacial complex, causing several anomalies including craniosynostosis, micrognathia, midfacial dysplasia, cleft lip/palate and oral/dental defects. Development of the craniofacial complex is an intricate process that requires interactions between several different tissues including neural crest cells, neuroectoderm and surface ectoderm. To understand the tissue-specific requirements for primary cilia during craniofacial development we conditionally deleted three separate intraflagellar transport genes, Kif3a, Ift88 and Ttc21b with three distinct drivers, Wnt1-Cre, Crect and AP2-Cre which drive recombination in neural crest, surface ectoderm alone, and neural crest, surface ectoderm and neuroectoderm, respectively. We found that tissue-specific conditional loss of ciliary genes with different functions produces profoundly different facial phenotypes. Furthermore, analysis of basic cellular behaviors in these mutants suggests that loss of primary cilia in a distinct tissue has unique effects on development of adjacent tissues. Together, these data suggest specific spatiotemporal roles for intraflagellar transport genes and the primary cilium during craniofacial development. Introduction Primary cilia are ubiquitous, microtubule-based extensions that protrude off a plethora of cell types throughout development. Interest in primary cilia biology has grown exponentially over the last decade, mostly due to the identification of ciliopathies, a growing class of human syndromes that occur as a result of aberrant cilia function [1]. Although there is no established phenotypic criterion for the diagnosis of a ciliopathy, these syndromes share aberrant ciliary function as their underlying cause. To dissect the tissue-specific requirements for primary cilia, we first utilized the Wnt1-Cre driver, which recombines within the dorsal neural tube giving rise to neural crest cells (NCCs) and a portion of dorsal neuroectoderm in the developing midbrain (Fig 1A and 1B) [19][20][21]. Neural crest cells make up the majority of the cranial mesenchyme and make numerous contributions to the craniofacial complex, most notably to the facial mesenchyme and skeletal elements. Second, we utilized the Crect driver [22] to target recombination to cells within the surface ectoderm (Fig 1C and 1D). The developing surface ectoderm houses many signaling centers that are important for directing craniofacial development such as the frontonasal ectodermal zone (FEZ) and the nasal pits [23][24][25][26][27]. The Crect driver recombines in the surface ectoderm of the developing face (n = 17; 53%); however, we also observed a less defined recombination pattern (n = 15; 47%; S1A,B). Finally, we implemented the AP2-Cre driver [28]. AP2-Cre-mediated recombination occurs in NCCs, surface ectoderm and neuroectoderm (Fig 1E and 1F). The neuroectoderm, particularly the forebrain, serves as the scaffold upon which the face develops [29]. In addition to physically supporting facial development, the neuroectoderm also serves as an important signaling center, supplying the face with essential molecular inputs that help to guide midfacial development [23,30]. Temporal onset and spatial domains of recombination for all three drivers used are summarized in Table 1.
To confirm the efficacy and specificity of all three drivers we carried out immunostaining at both e11.5 and e14.5 for the ciliary marker Arl13b on Kif3a f/f ;Wnt1-Cre, Kif3a f/f ;Crect and Kif3a f/f ;AP2-Cre embryos and found cilia were absent from neural crest alone, surface ectoderm alone and both neural crest and surface ectoderm, respectively (S2 Fig). To address the role of individual ciliary proteins and the cilia in each of these tissues, we next conditionally ablated three distinct IFT ciliary genes (Kif3a, Ift88 and Ttc21b) and analyzed the resulting facial phenotype. Loss of Kif3a in tissue-specific domains of the craniofacial complex generates a range of phenotypes KIF3A is a member of the kinesin superfamily and functions as an anterograde IFT protein [31]. To examine the role of Kif3a in tissues contributing to the craniofacial complex we conditionally excised Kif3a using each of the three drivers detailed above and examined craniofacial domains frequently affected in ciliopathies. Our previous work identified a widened midline, as determined by an increased distance between the nasal pits (internasal distance), as the distinguishing feature of Kif3a f/f ;Wnt1-Cre embryos [6,17]. As expected, the first obvious phenotype in Kif3a f/f ;Wnt1-Cre embryos at e11.5 was a significant increase in the internasal distance (n = 5), relative to wild-type embryos (n = 28) (Fig 2A, 2B and 2Y). In contrast, Kif3a f/f ;Crect embryos, in which Kif3a was lost in the surface ectoderm, did not have a significant difference in internasal distance when compared to wild-type embryos (Fig 2C and 2Y, n = 10). In addition to the Crect recombination pattern reported in Fig 1, we also observed Kif3a f/f ;Crect mutants with the alternate, broader pattern of recombination (S1C and S1D Fig). Regardless of which recombination pattern was present, the craniofacial phenotypes generated were relatively similar (S1C-S1F Fig). In Kif3a f/f ;AP2-Cre embryos, in which Kif3a was lost in NCCs, surface ectoderm and some neuroectoderm, a significant increase in the internasal distance and medially rotated nasal pits were observed (Fig 2D and 2Y). We continued our analysis of each mutant at e11.5 to determine if loss of Kif3a in different tissues and tissue combinations had an effect on cell differentiation, cell proliferation or cell death. We first examined the earliest stages of cell differentiation and formation of the skeletal condensations by performing peanut agglutinin (PNA) immunostaining. We observed domains of PNA staining in Kif3a f/f ;Wnt1-Cre mutants that were laterally shifted, relative to wild-type embryos (n = 3) (Fig 2E and 2F). Similar to the pattern observed in Kif3a f/f ;Wnt1-Cre mutants, a lateral displacement of the early condensations was observed in Kif3a f/f ;Crect (n = 3) and Kif3a f/f ;AP2-Cre (n = 3) mutants (Fig 2G and 2H). Thus, despite a shifted domain, the process of differentiation did not appear to be impaired in any of the ciliary mutants. To determine if loss of Kif3a in various tissues of the craniofacial complex impacted cell proliferation and cell death, we performed immunostaining for phosphohistone H3 (PHH3) and cleaved caspase 3 (CC3) in the developing frontonasal prominence and palate at e11.5 (see areas analyzed for quantification in S3A and S3B Fig). We found that loss of Kif3a had tissue specific effects on cell proliferation and cell death ( Table 2, S1 Table). 
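Comparisons like those summarized in Table 2 reduce to per-embryo proliferation or cell-death indices compared across genotypes; a minimal sketch with invented counts (not data from this study), using an unpaired two-tailed t-test as in typical analyses of this kind:

import numpy as np
from scipy import stats

# Hypothetical PHH3-positive fractions (proliferation index) in the frontonasal
# prominence, one value per embryo
wild_type = np.array([0.21, 0.19, 0.23, 0.20, 0.22])
conditional_mutant = np.array([0.15, 0.16, 0.14, 0.17, 0.15])

t_stat, p_val = stats.ttest_ind(wild_type, conditional_mutant)
print(round(wild_type.mean(), 3), round(conditional_mutant.mean(), 3), round(p_val, 4))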
Loss of Kif3a within NCCs of the frontonasal prominence caused a significant decrease in cell proliferation, whereas loss of Kif3a in surface ectoderm or a combination of NCCs, surface ectoderm and neuroectoderm did not cause a significant change in proliferation relative to wild-type embryos (Fig 2I-2L and 2Z). Furthermore, loss of Kif3a in the surface ectoderm significantly increased the amount of cell death within the frontonasal prominence, whereas cell death was not significantly impacted when Kif3a was lost in NCCs alone (Wnt1-Cre), or a combination of tissues (AP2-Cre) (Fig 2M-2P and 2AA). In the developing palatal shelves, loss of Kif3a did not cause a statistically significant change in cell proliferation (Fig 2Q-2T and 2BB); however, a significant increase in cell death was observed within the palatal shelves of Kif3a f/f ;Crect and Kif3a f/f ;AP2-Cre mutants (Fig 2U-2X and 2CC). Thus, taken together, these results suggest a tissue specific function for Kif3a and the cilium in the developing craniofacial complex. We continued our analysis of these tissue-specific mutants at e14.5. Again, the most striking phenotype of Kif3a f/f ;Wnt1-Cre embryos was the severe midline widening (Fig 3A, 3B and 3CC; n = 5). The midline of Kif3a f/f ;Crect embryos was also significantly wider, highly dysmorphic and featured numerous tissue nodules (Fig 3C and 3CC; n = 9). Similarly, Kif3a f/f ; AP2-Cre embryos had a significantly widened midline with a dysmorphic frontonasal prominence (Fig 3D and 3CC; n = 7). Despite an overall significant increase of internasal width, we observed some variability in the severity of the midfacial phenotype in Kif3a f/f ;AP2-Cre embryos (S4A and S4B Fig). All three mutants also presented with cleft palate (Fig 3E-3H). We also examined the development of the mandibular prominence within these three mutants. Consistent with our previous reports, Kif3a f/f ;Wnt1-Cre embryos had micrognathia (undersized jaw) and aglossia (no tongue) (S5A and S5B Fig). In contrast, the tongue was clearly present in Kif3a f/f ;Crect embryos, yet there were tissue hyperplasias similar to those seen on the developing frontonasal prominence (S5C Fig). The developing mandible of Kif3a f/f ;AP2-Cre embryos resembled that of the Kif3a f/f ;Wnt1-Cre embryos, presenting with aglossia and mild micrognathia (S5D Fig). Histologically, the midfacial widening in Kif3a f/f ;Wnt1-Cre embryos was apparent by the presence of a duplicated of the nasal septum (Fig 3I and 3J) [6]. The nasal septum is a cartilaginous structure derived from NCCs that occupy the frontonasal prominence. Both Kif3a f/f ;Crect and Kif3a f/f ;AP2-Cre embryos presented with a duplicated nasal septum (Fig 3K and 3L), yet with varying degrees of penetrance and severity (Table 3 and S4D and S4E Fig). We next analyzed each mutant to determine if cell proliferation and cell death were aberrant in either the developing frontonasal prominence or palate (see areas analyzed for quantification in S3C and S3D Fig). Cell proliferation was significantly increased in the developing frontonasal prominence in all three mutants (Fig 3M, 3O, 3Q, 3S and 3DD; Table 3); however, we also observed a significant increase in cell death in Kif3a f/f ;Crect mutants ( Fig 3N, 3P, 3R, 3T and 3EE). We next examined how loss of Kif3a affected palatal development at e14.5 in all three mutants. Neither cell proliferation nor cell death was significantly altered in the palate of Kif3a fl/fl ; Wnt1-Cre mutants (Fig 3U, 3V, 3Y, 3Z, 3FF and 3GG). 
Cell proliferation and cell death were, however, significantly increased in Kif3a f/f ;Crect and Kif3a f/f ;AP2-Cre mutants (Fig 3W, 3X, 3AA, 3BB, 3FF and 3GG). Taken as a whole, these data again suggest that Kif3a and cilia in different craniofacial tissues have distinct roles in regulating craniofacial development, as each Cre driver resulted in a unique phenotype and alterations in cellular behaviors. We next set out to determine if these phenotypes were specific to Kif3a.
Loss of Ift88 in tissue-specific domains of the craniofacial complex phenocopies Kif3a mutants
IFT88 is another anterograde IFT protein essential for ciliogenesis [32]. To determine if the craniofacial phenotypes we observed with ablation of Kif3a were specific to Kif3a itself, or due to impaired ciliogenesis, we repeated our previous approach and conditionally knocked out Ift88 with the same drivers used to conditionally ablate Kif3a. As mentioned above, the first distinguishing feature of Kif3a f/f ;Wnt1-Cre embryos was a widened midline at e11.5, as determined by internasal distance [6,33]. Ift88 f/f ;Wnt1-Cre embryos at e11.5 also had a significant increase in the internasal distance (Fig 4A, 4B and 4Y; n = 5), albeit slightly less severe than that of Kif3a f/f ;Wnt1-Cre embryos.
While the overall morphology of the mutants appeared extremely similar, we again examined cell differentiation, cell proliferation and cell death of Ift88 tissue-specific mutants. We examined the earliest stages of cell differentiation and found that, similar to Kif3a mutants, Ift88 mutants contained altered domains of PNA-positive cells (Fig 4E-4H). We next examined cell proliferation and cell death. Loss of Ift88 within NCCs (Wnt1-Cre) of the frontonasal prominence caused a significant increase in cell proliferation, whereas loss of Ift88 in surface ectoderm (Crect) or a combination of NCCs, surface ectoderm and neuroectoderm (AP2-Cre) did not cause a significant change in proliferation relative to wild-type embryos at e11.5 (Fig 4I-4L and 4Z). Furthermore, the loss of Ift88 in surface ectoderm (Crect) significantly increased the amount of cell death within the frontonasal prominence, whereas cell death was
The characteristic duplication of the nasal septum was again present in the Ift88 f/f ;Wnt1-Cre embryos (Fig 5I and 5J). We did not observe a duplicated nasal septum in Ift88 f/f ;Crect embryos (Fig 5K), whereas Ift88 f/f ;AP2-Cre embryos did have a duplicated septum (Fig 5L). We again examined cell proliferation and cell death within this area. Within the frontonasal prominence, cell proliferation was significantly increased in Ift88 f/f ;Wnt1-Cre embryos (Fig 5M, 5O and 5DD; Table 5), similar to that observed in Kif3a f/f ;Wnt1-Cre embryos. Conversely, we observed that Ift88 f/f ;Crect and Ift88 f/f ;AP2-Cre embryos had a slight but significant decrease in proliferation within the frontonasal prominence (Fig 5Q, 5S and 5DD). With respect to cell death, we observed a significant increase in CC3-positive cells in both Ift88 f/f ;Wnt1-Cre and Ift88 f/f ;Crect embryos (Fig 5N, 5P, 5R and 5EE). No significant change in the number of CC3-positive cells was detected in Ift88 f/f ;AP2-Cre embryos relative to control embryos (Fig 5T and 5EE). Interestingly, there was no significant change in cell proliferation or cell death in any of the Ift88 mutants within the developing palate at e14.5 (Fig 5U-5BB, 5FF and 5GG).
In sum, the gross craniofacial phenotypes between Kif3a and Ift88 mutants were relatively conserved, despite some differences in cell behaviors in affected areas. Together these data further supported the hypothesis that the cilium has distinct roles in individual tissues of the craniofacial complex, while additionally suggesting that ciliary proteins themselves may also have specific functions within each tissue.
Loss of Ttc21b in tissue-specific domains of the craniofacial complex does not phenocopy Kif3a or Ift88 mutants
KIF3A and IFT88 are both anterograde intraflagellar transport proteins that function in the IFT-B complex to facilitate the transport of molecular cargo from the cell body to the ciliary tip. TTC21B (also known as Ift139 and Thm1) is a retrograde intraflagellar transport protein that functions in the IFT-A complex in the retrograde transport of molecular cargo from the ciliary tip to the cell body [13]. To determine if ciliary proteins that function in distinct areas of the cilium affect craniofacial development differentially, we repeated our experimental strategy with Ttc21b f/aln mice, which have one floxed allele and one allele that contains the alien mutation, a null allele of Ttc21b [13]. Kif3a f/f ;Wnt1-Cre and Ift88 f/f ;Wnt1-Cre embryos at e11.5 have significant midfacial defects characterized by an increase in the internasal distance (Figs 2 & 4). In contrast, Ttc21b f/aln ;Wnt1-Cre embryos do not have a significant difference in internasal distance when compared to wild-type embryos (Fig 6A, 6B and 6Y; n = 4). Similar to Kif3a f/f ;Crect and Ift88 f/f ;Crect mutants, Ttc21b f/aln ;Crect embryos did not have a significantly wider internasal distance, yet their nasal pits were patent due to a failure of fusion between the frontonasal, lateral nasal and maxillary prominences (Fig 6C and 6Y; n = 5). Ttc21b f/aln ;AP2-Cre embryos, in which neural crest, surface ectoderm and some neuroectoderm were affected, also did not display midfacial widening (Fig 6D and 6Y; n = 4). These results again supported a hypothesis that the cilium plays tissue-specific roles in craniofacial development. Furthermore, the observation of distinctly different phenotypes occurring with the same drivers suggested that individual ciliary genes have a unique function in each tissue. We next analyzed each mutant to determine if loss of Ttc21b in different tissues and tissue combinations had an effect on cell differentiation, cell proliferation and cell death. Once again, differentiation of early skeletal condensations was examined via PNA staining. Despite the pattern of the early condensations being different, PNA-positive domains were still detected in all mutants (Fig 6E-6H). To determine if cellular processes including cell proliferation and cell death were altered in these mutants, we examined PHH3 and CC3 staining in the developing frontonasal prominence and palate. Cell proliferation was significantly reduced in the frontonasal prominence of Ttc21b f/aln ;Wnt1-Cre mutants, yet there was no significant change in proliferation within Ttc21b f/aln ;Crect or Ttc21b f/aln ;AP2-Cre mutants (Fig 6I-6L and 6Z; Table 6). Cell death was also altered in a tissue-specific manner. There was a significant reduction in cell death within the frontonasal prominence of Ttc21b f/aln ;Wnt1-Cre mutants, a significant increase in cell death in Ttc21b f/aln ;Crect mutants and no change in cell death in Ttc21b f/aln ;AP2-Cre mutants relative to wild-type embryos (Fig 6M-6P and 6AA).
There were also significant changes in cell proliferation and cell death within the developing palate. Loss of Ttc21b in NCCs (Wnt1-Cre) caused a significant reduction in cell proliferation, loss of Ttc21b in surface ectoderm (Crect) caused a significant increase in cell proliferation, and loss of Ttc21b in NCCs, surface ectoderm and neuroectoderm (AP2-Cre) caused a significant reduction in proliferation (Fig 6Q-6T and 6BB). Finally, we examined how loss of Ttc21b in various tissues affected cell death within the developing palate. Whereas loss of Ttc21b in NCCs (Wnt1-Cre) or a combination of NCCs, surface ectoderm and neuroectoderm (AP2-Cre) had no effect on cell death in the developing palate, Ttc21b f/aln ;Crect embryos had a significant increase in cell death relative to wild-type controls (Fig 6U-6X and 6CC). We continued our analysis of these tissue-specific mutants at e14.5. In contrast to the striking midfacial phenotype of both Kif3a f/f ;Wnt1-Cre and Ift88 f/f ;Wnt1-Cre, there was no measurable midfacial defect in Ttc21b f/aln ;Wnt1-Cre embryos (Fig 7A, 7B and 7CC; n = 4). The frontonasal prominence-derived midline of Ttc21b f/aln ;Crect embryos was dysmorphic, but not significantly wider than that of wild-type embryos (Fig 7C and 7CC; n = 6). The Ttc21b f/aln ;AP2-Cre frontonasal prominence appeared morphologically normal, yet measured significantly wider than wild-types (Fig 7D and 7CC, n = 4). We next examined the development of the palate in these mutants. The palate of both Ttc21b f/aln ;Wnt1-Cre and Ttc21b f/aln ;Crect embryos was cleft; however, Ttc21b f/aln ;Wnt1-Cre palatal shelves appeared to be elevated and patent due to either developmental delay or palatal insufficiency, whereas the palatal shelves in the Ttc21b f/aln ;Crect embryos were hypoplastic and dysmorphic (Fig 7E-7G). The palate of Ttc21b f/aln ;AP2-Cre embryos appeared normal (Fig 7H). (Fig 7I-7L). Thus, Ttc21b-associated craniofacial phenotypes were less severe than those observed in Kif3a and Ift88 ablations.
Discussion
Ciliopathies are a broad class of diseases that affect various cells and tissues throughout the body. A review by Irigoin and Badano suggested that to fully understand both the biology of cilia and the pathology that arises when they are defective, the organelle must be examined both at different time-points during development and on different cell types [34]. Herein, we addressed this suggestion and evaluated the craniofacial (and neural; see accompanying manuscript by Snedeker et al.) phenotypes that arise when three different ciliary genes were conditionally deleted in various tissues of the craniofacial complex (Table 8). We observed that both tissue and gene identity contributed to the phenotypes produced in the developing face. These findings pose several interesting questions related to the role of cilia and ciliary proteins during development of the face and brain.
Severity of phenotype does not linearly correlate to the combination of tissues affected
We set up our experimental design to examine how loss of ciliary function would impact craniofacial development when it occurred in either NCCs (Wnt1-Cre), surface ectoderm (Crect), or a combination of NCCs, surface ectoderm and neuroectoderm (AP2-Cre). We had originally hypothesized that the resulting phenotype from AP2-Cre embryos would be the combination of those phenotypes observed with the Wnt1-Cre and Crect drivers.
Interestingly, conditional ablation with AP2-Cre did not consistently have a combinatorial or more severe phenotype than mutants created with Wnt1-Cre or Crect. There are several explanations for this finding. First, many events in craniofacial development occur as a result of sequential tissue-tissue interactions. For example, signaling centers in the surface ectoderm and neuroectoderm signal to adjacent NCCs during craniofacial development [25,27,35]. Thus, loss of cilia in the surface ectoderm or neuroectoderm could affect the recombined tissue itself (autonomous) by altering key signaling centers, and in turn, this could affect the adjacent NCCs (nonautonomous). However, when adjacent tissues lose cilia, as with the AP2-Cre driver, tissue-tissue signaling is globally disrupted. This could potentially alleviate some phenotypic presentations, as aberrant signals are unable to be received by the adjacent tissue also lacking cilia. Second, the observation that loss of cilia in multiple tissues does not necessarily correlate with a more severe phenotype could also be accounted for by the fact that some tissues may utilize cilia to a greater extent than others, thus generating epistatic and hypostatic tissues. Finally, it is possible that the timing of recombination in NCCs and surface ectoderm with the AP2-Cre driver occurs slightly later than that with Wnt1-Cre or Crect, respectively, thus allowing for some important signaling to occur without incident. Further molecular analyses must be performed to elucidate how each tissue interprets the loss of functional cilia.
Conditional knockout of IFT-B genes results in more severe phenotypes than IFT-A genes
Intraflagellar transport (IFT) is a cellular process in which molecular motors transport IFT particles (A and B) and cellular cargo along microtubules. In a ciliary context, IFT is essential for ciliogenesis as it transports tubulin subunits to the tip of growing cilia [36][37][38]. Within the cilium, IFT-B particles are moved from base-to-tip (anterograde transport) via kinesin-2 motors, whereas IFT-A particles are moved from tip-to-base (retrograde transport) via dynein motors [38][39][40][41]. KIF3A is a kinesin-2 motor protein that forms a heterotrimeric complex that is essential for anterograde transport. Furthermore, it is believed to be involved in other cellular processes, including neuronal transport, melanosome movement, and secretory pathway transport [42]. Loss of Kif3a results in the complete loss of the axoneme [43]. IFT88 is a member of the IFT-B complex, which also carries cargo in an anterograde direction. Loss of Ift88 produces truncated cilia in which the axoneme extends just beyond the transition zone [32]. Despite being separate proteins, both KIF3A and IFT88 are essential for ciliogenesis and anterograde IFT. The similarities in their role within the cilium likely account for the similar phenotypes generated when they are conditionally deleted from various cell types and tissues. Our analyses herein, and our previous work [17], however, consistently observed that Kif3a conditional mutants generated slightly more severe phenotypes (internasal width and degree of nasal septum separation) both phenotypically (Figs 3 and 5) and molecularly [33]. We surmise that the increased severity of phenotypes generated via the loss of Kif3a is due to a
TTC21B (also known as Ift139 and Thm1) is an IFT-A protein that participates in retrograde transport.
The aln mutation in Ttc21b, which produces a Ttc21b-null mutant, generates shorter, wider cilia that have a bulb-like structure at their distal tips [13]. Despite being structurally aberrant, these cilia are not as functionally compromised as those generated via the loss of the IFT-B components Kif3a and Ift88, and thus are likely able to carry out more ciliary function. These findings are consistent with the less severe phenotypes generated in Ttc21b f/aln mutants (Figs 6 and 7). Currently, there is no 'characteristic' phenotype used to diagnose a craniofacial ciliopathy. However, it is possible that differences in the rate of protein degradation between KIF3A, IFT88 and TTC21B following Cre-recombination could also contribute to the variable phenotypes. To definitively test this hypothesis, reliable and robust antibodies for all three proteins would be necessary and protein turnover assays would have to be performed in each individual tissue. Given that we have documented the loss of cilia after recombination (S2 Fig) [17], we speculate that the phenotypic differences observed are most likely due to the degree to which the cilium is compromised in each mutant. Determining if there is a characteristic phenotype generated depending upon which component of the cilium is compromised (e.g., basal bodies, transition zone or axoneme) could greatly assist in disease diagnosis and therapeutic approaches.
Loss of ciliary proteins affects various signaling pathways in distinct ways
Recently, determining the role for the cilium in coordinated signal transduction has dominated research within the field. A plethora of studies have examined how cilia contribute to the signaling of various molecular pathways including Hedgehog, Wnt, PDGF, etc. [3,5,41,44]. For some pathways, receptors are preferentially localized to the ciliary membrane [45][46][47]. For other pathways, loss of the cilium disrupts the transduction or activity of the pathway itself [13][14][15][48][49][50][51][52][53]. Several of the phenotypes observed in ciliary mutants resemble phenotypes generated when the above-mentioned signaling pathways are impaired. A gain of Shh activity [54], loss of Wnt activity [55] or a loss of PDGF activity [56] all produce some degree of midfacial widening, similar to that observed in several of the mutants generated in this study. In light of the established role of cilia in the transduction of multiple signaling pathways, as well as the similarities in the phenotypes produced when either the cilia or the signaling pathway is impaired, it is likely that the molecular basis for the phenotypes reported herein is a pleiotropic effect on several signaling pathways. Understanding precisely how the cilium transduces these signals, as well as the role of each signal in individual tissues, will be extremely valuable in assessing the basis for ciliopathic phenotypes. Furthermore, determining if specific ciliary proteins have a greater impact on the transduction of some signaling pathways versus others would be of great interest. If this hypothesis were proven true, then targeting the protein's function, independent of its role in ciliogenesis, would allow for defined manipulation of molecular signaling without impacting the cilium as an organelle. Together, studies such as these could provide new avenues of therapeutic intervention for ciliopathies.
Are all cilia created equal?
Cilia are frequently referred to as ubiquitous organelles and are thought to be highly conserved not only throughout the embryo, but also among various species. Although there is a high degree of conservation, there are specialized cilia within the body, including those within the inner ear, the olfactory epithelium, and the retina [57]. What makes these cilia specialized is the cadre of ciliary genes expressed within the cells and tissues they arise from. Thus, for all other cilia within the body, it would be expected that their conserved and ubiquitous nature would be accompanied by conserved and ubiquitous expression of the majority of ciliary genes. Despite this being the dogma of the ciliary field, a significant number of studies report distinct expression patterns for ciliary genes. Kif3a is predominantly expressed in brain, although trace amounts of Kif3a transcript are detected in various tissues [31]. Ift88 is most robustly expressed in testis, brain, kidney, lung and pancreas [58,59]. In contrast, in other vital organs, such as the heart, spleen, and liver, Ift88 expression is nearly undetectable [58]. In murine embryos, Ttc21b is broadly expressed at e6.5 and e7.5 [60]. At e8.5 it maintains a broad expression pattern with more robust levels of expression in the more posterior neural tube and somites. At e9.5-10.5, Ttc21b expression can be detected in a number of tissues, but most significantly in limbs, eyes and dorsal neural tube (see accompanying manuscript by Snedeker et al.). Thus, the expression patterns of these three ciliary genes clearly show that their expression is not ubiquitous. Therefore, it is likely that cilia in certain regions of the embryo have a unique transcriptome ("ciliome") that could confer unique function to the cilium, providing an explanation as to why ciliopathies present with a variety of phenotypes. Our findings within the craniofacial complex, as well as the developing brain (see accompanying manuscript by Snedeker et al.), suggest that variable phenotypes in ciliopathies are due to unique spatiotemporal expression of ciliary genes or distinct roles for ciliary genes within tissues that contribute to the development of these organ systems. Furthermore, they present an opportunity to study cilia not as static organelles, but as dynamic signaling hubs that determine how a cell responds to its molecular environment. Our ongoing studies use the mutants generated herein to address these possibilities and aim to determine if modulating expression of certain ciliary proteins can alter the functionality and sensitivity of the cilium to molecular stimuli.
Mouse strains and husbandry
All mouse alleles used in this study have been previously published: Ttc21b tm1c(KOMP)Wtsi-lacZ (Ttc21b flox ) allele [61]; Kif3a tm2Gsn (Kif3a flox ) [62]; Ift88 tm1Bky (Ift88 flox ) [63]. Timed matings were established and noon on the day of the mating plug was designated embryonic day (e) 0.5. This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Institutional Animal Care and Use Committee of the Cincinnati Children's Hospital Medical Center (protocol number IACUC2013-0113). Animals were housed in ventilated racks with automatic water and feeders providing Purina 5010 autoclavable rodent laboratory chow, with a 12-hour light-dark cycle.
Certified technical personnel and registered veterinary technicians provided daily observation and handling of lab animals. Signs of dehydration and pain, as indicated by hunched and lethargic behavior, were monitored to assess animal health. All euthanasia and embryo harvests were performed after isoflurane sedation to minimize animal suffering and discomfort. Animal euthanasia was via cervical dislocation.
Genotyping
DNA was isolated from tissue samples of embryos. Genotype was determined by PCR using the primer sets listed below. Published protocols were used for all genotyping except for the Ttc21b aln allele, where a custom Taqman assay was employed (Invitrogen; details available upon request). Expected product sizes are denoted in parentheses.
Embryo processing
Embryos were harvested at either e11.5 or e14.5, dissected, and imaged. All embryos were fixed in 4% PFA and paraffin embedded. Paraffin sections were cut to 10μm thickness.
Cell counts
Cell counts were performed using ImageJ software and the Cell Counter feature.
Imaging equipment
Whole mount images were taken using a Leica M165FC microscope. All other images were taken using a Leica DM5000 B microscope.
Safranin-O staining
Sections were de-paraffinized and rehydrated. Sections were then stained with Weigert's hematoxylin, rinsed in water and briefly stained with Fast Green (FCF) solution. Sections were rinsed briefly in 1% acetic acid and then stained with Safranin-O. Sections were dehydrated and mounted with Permount (Fisher Scientific).
Statistics
Three embryos for each mutant genotype were collected and sectioned. Staining for cell proliferation and cell death was performed on serial sections. Counts were performed on each section, and the significance of differences in cell proliferation and cell death was determined using Student's t-test. Boxplots were generated using BoxPlotR.
S1 Table. Cell counts for PHH3 and CC3 in all mutants at e11.5 and e14.5. Average cell counts, standard deviation and P-values for PHH3 and CC3 staining in epithelial or mesenchymal tissues of FNPs and palatal sections at e11.5 and e14.5. Green boxes indicate significance. N's refer to the number of sections counted across 3 separate embryos. (XLSX)
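As a rough, illustrative sketch of the statistical comparison described above (a hedged example only: the file name, column layout, and group labels are assumptions, and the original analysis was performed with standard Student's t-tests and BoxPlotR rather than this script), per-section cell counts could be compared as follows:

# Hypothetical comparison of PHH3-positive cell counts between wild-type and
# mutant sections; the column names and file name are illustrative assumptions.
import pandas as pd
from scipy import stats

counts = pd.read_csv("phh3_counts.csv")  # assumed columns: genotype, count
wt = counts.loc[counts["genotype"] == "wild-type", "count"]
mut = counts.loc[counts["genotype"] == "Kif3a f/f;Wnt1-Cre", "count"]

# Two-sample Student's t-test (equal variances assumed, as in a standard t-test)
t_stat, p_value = stats.ttest_ind(wt, mut, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")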
v3-fos-license
2022-08-19T15:16:17.694Z
2022-08-17T00:00:00.000
251653911
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/jat/2022/3915467.pdf", "pdf_hash": "3845341b194ca16bf562b63ac137d0d4de1c32f1", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:286", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "00c2bc78d051d767d3d50a08082ff5eb686503c3", "year": 2022 }
pes2o/s2orc
Analysis of Traffic Accident Based on Knowledge Graph
Introduction
The prosperity of the transportation industry not only brings positive benefits to society but also brings negative impacts that are difficult to reconcile [1]. Among them, the frequent occurrence of road traffic accidents has become one of the important factors inhibiting the steady development of cities. Faced with the severe road traffic safety situation, it is urgent to carry out research on road safety analysis and active prevention and control of road risks. The traffic management departments have accumulated a large amount of accident data in their daily management work. How to discover and reuse the potential value of these data, and dig out the potential laws and inducing factors of accidents, has become a major research hotspot today [2]. In 2017, "the 13th Five-Year Plan for Road Traffic Safety" issued by the State Council pointed out that the comprehensive collection of traffic accident data can promote the improvement of traffic safety big data and provide a data basis and theoretical analysis support for the improvement of traffic safety [3]. At present, the domestic traffic control departments have accumulated a large number of original accident records by adopting standardized accident information collection technology, including the specific data of "people, vehicles, roads, and environment" related to the accident. However, the data value has not been fully excavated and remains at the level of descriptive statistics of the four indicators of the accident. How to excavate the hidden value of traffic accident data to prevent and reduce the occurrence of traffic accidents, and combine the relevant advanced traffic safety technology [4][5][6][7][8] to put it into practical application instead of "empty talk," has become one of the key directions of current research. There are many ways of traffic accident data mining. At present, there are three main research angles. (1) Descriptive analysis of accident data based on traditional statistical analysis. The road traffic accident information system of China contains more than 60 items of accident data, which can describe the situation objectively and comprehensively when the accident occurs. Simple statistical analysis of the data collected in the system is the basic reference for China to formulate traffic safety management planning [9]. In addition, in 2005, the Shanghai Traffic Police Corps, together with Tongji University and the German Volkswagen Group, carried out research on road traffic accidents in China and statistically analyzed the national standardized accident data collected. The mechanism of traffic accidents was summarized [10]. (2) Based on big data algorithms, the hidden value information of accident data is mined deeply from the point of view of data association and data collision. The literature [11] adopted data mining techniques and a multi-criteria decision-making method to mine the French traffic accident database BAAC, ranked the importance of the mined association rules by the ELECTRE method, and selected the association rules with higher ranking as the basis for formulating accident prevention strategies and policies, so as to improve the road safety environment in France. The literature [12][13][14] used the Apriori algorithm or FP-growth algorithm to mine association rules among various factors in traffic accidents to provide data decision support for accident early warning and traffic safety management.
(3) Visual analysis of accident data is carried out based on data understanding. The combination of human cognition and machine cognition through the human-computer interface improves the human ability to understand huge and complex data. The literature [15] designed and implemented a multiview association visualization analysis system based on a spatial semantic enhancement model of traffic accident data, revealing the spatial semantic mode of traffic accidents for users and contributing to in-depth analysis of the causes of traffic accidents. Knowledge graph (KG) [16] is a new research method of data mining, which represents the mutually independent entities and their relationships in the objective world in the structured form of a graph and forms the basic units of the graph in the form of "entity-relation-entity" triples. The relationship is the link that connects entities, and both entities and relationships can have attributes, thus forming a network structure [17,18]. In short, through the graph structure pattern of triples [19], KG transforms the knowledge in the objective world into a structure that can be understood and processed by machines and can display intuitive and visual characteristics to be understood and reused by human beings. The literature [20] established a traffic knowledge graph based on multisource and heterogeneous data, which combines the four elements of "people, vehicles, roads, and environment." At the same time, based on the relationship among three kinds of traffic events (traffic accident, traffic congestion, and traffic feedback), a traffic reason graph was established, and the recognition of text traffic incidents and Weibo traffic incidents was realized by using the knowledge and reason graphs. It provided a solution for finding traffic problems and early warning of traffic accidents. The literature [21] (2021) used the Word2vec word vector model to extract and classify the keywords (accident features and accident cause attributes) in traffic accident text, generated a knowledge graph of the traffic accident domain based on Neo4j, and realized its visual analysis based on Gephi. Traffic accident data, as the basic data of traffic safety research, can provide multidimensional data support for traffic safety management decision-making. The integration of traffic accident-related knowledge based on KG can effectively give full play to the value of data resources and inject knowledge support into traffic safety management decision-making, which has important research value for improving traffic safety. Based on structured traffic accident case data, this paper establishes a graph-structured and visual traffic accident knowledge graph, that is, a traffic accident knowledge graph that integrates "people, vehicles, roads, and environment." The knowledge is stored using the Neo4j graph database. The multidimensional and multilevel analysis of accidents is realized by using Cypher statements, including accident portraits, accident classification, accident statistics, and accident association path analysis.
The Definition of Knowledge Graph and Its Constituent Elements
Taking the term "knowledge graph" apart helps analyze its meaning. First of all, "knowledge," from a philosophical perspective, refers to the achievements obtained by human beings from various ways of life and production, which come from the objective world, and is the systematic understanding formed when human beings summarize, refine, and sublimate all kinds of facts, descriptions, information, and so on.
From the "system of knowledge and wisdom of data information" described by Rowley J, that is, the DIKW system [22], as shown in Figure 1, the formation of "knowledge" goes through a process from data to information and then to the transformation of knowledge, which is the cognition of human beings after processing data. e ultimate destination of "knowledge" is the "wisdom" used by human beings, that is, the application of knowledge. Second, the meaning of "graph" is to form a network structure in the form of "graph." In the study of graph theory, in 1878, "graph" was first proposed by Sylvester [23]. A graph is composed of multiple nodes and multiple edges as shown in Figure 2, which shows a simple graph composed of five nodes and six edges. To sum up the above definition, the knowledge graph is to express the knowledge in the form of graph. e nodes of a graph represent concepts or entities, and the edges represent the relationship between nodes. With an objective fact that "Taiwan is a provincial administrative region of China" as an example, the semantic relationship of this fact can be expressed as a language that can be understood by machines in the form of "China, provincial administrative region, Taiwan." Among them, "China" and "Taiwan" are two nodes, and the edge is used to indicate that the relationship between the two nodes is a "provincial administrative region." Some domestic experts and scholars have also made some descriptions on the de nition of knowledge graph. e literature [24] de ned it as: " e knowledge graph describes concepts, entities and their relationships in the objective world through a structured way, expressing the information of the Internet into a form closer to the human cognitive world. It provides a better ability to organize, manage and understand the massive information on the Internet." e literature [25] de ned it as: "Knowledge graph is essentially a knowledge base of semantic network [26], that is, a knowledge base with directed graph structure, in which the nodes of the graph represent entities or concepts, and the edges of the graph represent various semantic relationships between entities or concepts." e unique de nition of knowledge graph has not been given in academic circles, but the commonness of these de nitions is that entities and relations are the basic elements of knowledge graph. (1) Entity: it refers to the concrete things that exist in the objective world, corresponding to the ontology in the semantic network. For example, in a tra c accident, the entity can be the name of the person involved in the accident, illegal behavior, license plate, a certain road, rainy day, and so on. (2) Concept: it is also known as the type of entity, which is an abstract generalization of things that share common characteristics. For example, the concept of "person" is a summary of "the name, age, and illegal behavior of the person involved in the accident," in which "person name and illegal act" are the entity of "person." (3) Relationship: it refers to a connection between different concepts or entities. For example, there is a "party to the accident" relationship between "accident" and "person." In addition, attributes can be used to describe certain characteristics of an entity or relationship. For example, the attributes of age, gender, and so on, can be included in the entity of "name." General KG and domain KG are the two major categories of current KG. 
The former, as its name implies, is a general domain-oriented and common-sense knowledge graph, which mainly serves web search and encyclopedic question answering, such as DBpedia [28], Yago [29], and Freebase [30], while the latter is oriented to a specific vertical professional field with professional knowledge, such as the financial knowledge graph [31], the medical knowledge graph [32], and so on. The traffic accident knowledge graph constructed in this paper is clearly a domain knowledge graph for the vertical professional field of transportation.
Construction of Traffic Accident Knowledge Graph
The traffic accident knowledge graph is a knowledge graph oriented to the professional field of transportation, and its construction process and objectives need to be determined according to the requirements of professional knowledge. Following the construction principle of "beginning with demand and ending with application," this paper divides the life cycle of constructing the traffic accident knowledge graph into five stages, that is, knowledge demand, knowledge modeling, knowledge extraction, knowledge storage, and knowledge application, as shown in Figure 3. Among them, knowledge modeling, knowledge extraction, and knowledge storage are the core links of constructing a knowledge graph. Knowledge modeling means that, based on the analysis of data sources, concepts and relational patterns are selected to form a knowledge structure that meets the knowledge demand, that is, concept and relational schema design. Knowledge extraction is to extract knowledge elements from multisource data. Knowledge storage is to store the acquired knowledge in the database for the application of knowledge. As the data of this study are structured road traffic accident case data, the main problem is how to map the entities, relationships, and attributes contained in a large amount of data into the database completely. Based on the above life cycle diagram, the specific process of constructing the traffic accident knowledge graph is described as follows: first, according to the existing data sources, the knowledge demand for traffic accident analysis is determined. Then, knowledge extraction of entities, relationships, and attributes is carried out from the structured data. All elements are written into the Neo4j graph database to complete the construction of the traffic accident knowledge graph. Finally, based on this KG, multidimensional visualization analysis of accident data is realized with Cypher query statements.
Knowledge Demand Analysis of Traffic Accidents
The application value of the knowledge graph in traffic accidents is mainly reflected in the following aspects: (1) Knowledge graph can play a comprehensive data supporting role in accident analysis. On the basis of the knowledge graph of traffic accidents, multilevel and multidimensional accident analysis can be carried out according to the knowledge semantic relation network, which broadens the ideas of accident analysis and accident prevention. (2) The accident knowledge network formed by the knowledge graph can present all kinds of accident query results in a visual way. Given an entity, it can search for another entity along its relationship path and finally display the accident query results with a network diagram composed of entities and relationships, which is a visual approach to accident knowledge interpretation. (3) Knowledge graph can realize accident risk analysis with the human as the core.
The accident risk database of the driver is established by integrating the driver's accident record, traffic violations, driving vehicle, age, driving age, and other information into the graph. Through the selection of risk characteristics and the quantitative construction of an accident risk early warning model, accident early warning for drivers, especially long-distance truck drivers, can reduce the occurrence of serious accidents. At present, the knowledge graph is still in the exploratory research stage in the aspects of traffic accident knowledge integration and accident prevention. With the continuous development of artificial intelligence and big data technology, knowledge graphs will offer huge advantages and wide application prospects in accident analysis, accident risk prediction, traffic management decision support, and other aspects. The data used in this paper are traffic accident case data, which are structured relational data. The data cover the period from January to December 2017, with 9,941 records. Each row of data represents the information recorded by the traffic police after an accident, including the accident number, the location of the accident (road), the time point of the accident, the type of road, the jurisdiction to which the road belongs, the cause of the accident, the form of the accident, the identity of the parties involved in the accident, negligence and illegal behavior, the license plate information of the vehicle involved in the accident, and the type of vehicle. Combining the above advantage analysis and the data source structure, the traffic accident knowledge graph constructed in this paper aims to effectively mine the value information contained in the traffic accident case data. A graph database containing traffic accident characteristic factor information and accident result information is established, based on which multidimensional, multilevel, and visual accident analysis is realized, such as accident portraits, accident distribution, accident statistics, and so on.
Knowledge Modeling Design of Traffic Accident
Knowledge modeling design includes the conceptual pattern and the relational pattern, which provide the abstract mapping of the concepts and relationships of real things. A concept is an abstract description of a certain type of entity in the objective world. Entities of the same type may have different attributes. A relationship explains the existence of some kind of link between entities, and such links are diverse. According to the demand for accident knowledge, this paper initially forms the core concepts of traffic accidents and then combines professional knowledge to expand the relationships between entities, so as to lay a good foundation for knowledge extraction [33]. The core concepts in the field of traffic accidents established in this paper are shown in Table 1. The field relationships of traffic accidents are shown in Table 2. In order to distinguish concepts, relations, and attributes, this paper adopts different distinguishing marks to express them. The concept labels are in English and uppercase. The relationship labels are in English with the content words capitalized. The attribute labels are in English with the content words in lowercase, such as concept tags: ACCIDENT, PEOPLE, VEHICLE, ENVIRONMENT, etc.; relationship tags: Located_in, Jurisdiction_over, Weather, etc.; attribute tags: accident_time, person_age, person_gender, etc.
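To make the labelling scheme above concrete, the sketch below encodes a few hypothetical knowledge units as (head entity, relationship, tail entity) triples with attribute dictionaries; the specific values and the direction of the Jurisdiction_over relationship are illustrative assumptions, not records from the dataset:

# Illustrative triples using the concept, relationship, and attribute tags listed above.
triples = [
    ({"label": "ACCIDENT", "accident_time": "2017-03-15 08:10"},
     "Located_in",
     {"label": "ROAD", "name": "Donghuan Road", "road_type": "urban arterial"}),
    ({"label": "DEPARTMENT", "name": "Hudong Squadron"},
     "Jurisdiction_over",  # direction assumed here: department -> road
     {"label": "ROAD", "name": "Donghuan Road", "road_type": "urban arterial"}),
    ({"label": "ACCIDENT", "accident_time": "2017-03-15 08:10"},
     "Weather",
     {"label": "ENVIRONMENT", "condition": "rain"}),
]

# Print each knowledge unit in a compact "node -[relationship]-> node" form.
for head, relation, tail in triples:
    print(f'({head["label"]}) -[:{relation}]-> ({tail["label"]})')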
According to the concept and relationship descriptions in the field of traffic accidents, the conceptual and relational pattern structure diagram of the traffic accident knowledge graph finally formed is shown in Figure 4.
Knowledge Extraction of Traffic Accident
An entity is an objective thing in the real world, a concrete instance of the concept layer, and a key knowledge element in the knowledge graph. Entity extraction is a process of recognition, that is, entities with specific meaning are identified in some way. The entity scope in the field of traffic accidents studied in this paper is mainly the concepts listed in the conceptual pattern design. In general, entities and relationships have strong personalized features, and they are constantly increasing and updating, so knowledge extraction needs to be carried out according to the given corpus and features. Knowledge extraction technology refers to the technology of obtaining entities and relationships from multisource and heterogeneous data, which is not only the basis of constructing a knowledge graph but also a derivation of big data technology. With the explosive growth of data, how to obtain useful knowledge from the data is the current technical difficulty. According to the different data source structures, it can be divided into knowledge extraction from structured data, semistructured data, and unstructured data. The corresponding extraction methods are shown in Figure 5. The accident data used in this paper are a structured traffic accident data table, which is mapped into RDF triples by the direct mapping method, so as to realize the knowledge extraction of entities and relations. According to the concepts and relationship patterns designed above, a total of 17182 entities and 51992 relationships are extracted. The extraction results for different types of entities and relationships are shown in Table 3.
Storage and Visualization of Traffic Accident Knowledge Graph Based on Neo4j
Neo4j is a graph database based on Java, which is used to store graph-structured data of entities and relationships. Two entities and a relation form a knowledge unit; the relationship connects the two nodes and is directional. Both entities and relationships can have attributes, which have names and various values. Tags are used to distinguish between different types of entities and relationships. The CREATE statement, LOAD CSV, and Neo4j import are currently the main methods to import triple data into Neo4j in batches [33]. The running speed, advantages and disadvantages, and scope of application of the three data batch import methods are shown in Table 4. This paper mainly uses the Neo4j import method to realize the batch import of data; that is, the py2neo module package in Python is used to realize the rapid import of CSV files and the creation of nodes and relationships. Finally, the traffic accident domain knowledge, mainly involving accident cases, is formed as shown in Figure 6.
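A minimal sketch of the py2neo-based batch import described above is given below; the connection details, CSV file name, and column names are assumptions, and only one relationship type is shown:

# Hedged sketch: read accident rows from a CSV and write ACCIDENT/ROAD nodes
# plus Located_in relationships into Neo4j via py2neo.
import csv
from py2neo import Graph, Node, Relationship

graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))  # assumed connection

with open("accidents.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        accident = Node("ACCIDENT",
                        accident_id=row["accident_id"],      # column names are assumptions
                        accident_time=row["accident_time"])
        road = Node("ROAD", name=row["road_name"])
        # merge() avoids duplicating a node that has already been imported
        graph.merge(accident, "ACCIDENT", "accident_id")
        graph.merge(road, "ROAD", "name")
        graph.create(Relationship(accident, "Located_in", road))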
Case Analysis of Traffic Accidents Based on Knowledge Graph
The knowledge application of a domain knowledge graph should be based on the needs of that field. This paper constructs a knowledge network around the knowledge elements of "people, vehicles, roads, environment, and accident results," which makes the accident analysis results visual. Through the analysis of traffic accidents, the summary of the occurrence laws of accidents and the multidimensional classification and statistics of accidents are realized, which provides data support for improving the level of traffic safety management. Cypher is a declarative query language in Neo4j, which can query and update data efficiently. The commonly used statements include MATCH (used to match a pattern in the graph), WHERE (used with MATCH to add constraints to the pattern), and RETURN (used to determine the type of results returned, which can be entities and relationships of the graph structure, or tables). The traffic accident analysis process based on the knowledge graph is shown in Figure 7.
Accident Portrait
Enterprise portraits, customer portraits, product portraits, and so on are among the major applications of knowledge graphs. This kind of portrait means that the knowledge graph fuses multisource data to make a more comprehensive description of the characteristics of entity objects and presents them in a visual way. This paper takes a single road traffic accident as the center and comprehensively describes the accident situation through its associated entities, relationships, and attributes, which constitutes an "accident portrait." The traffic accident portrait based on the knowledge graph can describe the location, time, and cause of the accident, basic information about the accident parties, negligence and illegal behavior, and the final handling of the accident, namely, the determination of responsibility, and so on. An accident portrait is taken as an example, as shown in Figure 8: at 1:25 on January 1, 2017, an intervehicle accident occurred.
Accident Classification
Accident classification queries are carried out on all accident cases according to a certain classification standard. This classification standard can be called the accident characteristic dimension, including accident location, accident form, accident cause, and so on. According to the relationships between all kinds of entities associated with the feature dimension, a query statement conforming to Cypher syntax is constructed. In the traffic accident knowledge graph, in order to realize the accident classification query according to the accident location, namely, the name of the road, each name in the concept "ROAD" is regarded as an entity. There is a relationship "Located_in" between "ACCIDENT" and "ROAD." The returned result is all the relationship paths between the two concepts. As a result, the Cypher query sentence is as follows: "MATCH p = (n:ACCIDENT)-[r:Located_in]->(m:ROAD) RETURN p". The returned results, which visually show the distribution of accident locations, are as shown in Figure 9. As can be seen from the figure, there are many accidents on Donghuan Road. Therefore, it is necessary to further investigate the hidden dangers of this road.
Accident Statistics
Accident statistics usually involve statistical indicators and classification indicators. Statistical indicators usually refer to the four indicators commonly used in traffic accident data statistics (the number of accidents, the number of deaths, the number of injuries, and direct economic losses). The classification indicators include the location of the accident, the time of the accident, and so on. Users can select indicators as required to construct Cypher query statements.
The returned result of "RETURN" is defined by using the "COUNT" and "SUM" commands alone or in combination. If the result users need to return is a table, they can use the "ORDER BY" command to sort the query results.
The Location Analysis of Accident-Prone Places
The number of accidents is selected as the statistical index and the road as the classification index. The name of the road is selected as an entity. There is a relationship "Located_in" between the concepts "ACCIDENT" and "ROAD." The results need to be returned as "the name of the road" and "the type of the road," and the number of accidents is counted and sorted in descending order. From this, a Cypher query combining the COUNT and ORDER BY commands can be constructed (a sketch of such a query is given below). Displaying the returned results in tabular form is as shown in Figure 10. It can be seen that the top 10 roads with the most accidents include Xinghu Street, Donghuan Road, Modern Avenue, Jinji Lake Avenue, Fengting Avenue, Zhongxin Avenue East, Zhongyuan Road, Songtao Street, Loujiang Avenue, and Weisheng Road.
Analysis of Time Elements of Accident-Prone Places
The time elements of accident-prone places can be year, month, and day, or they can be subdivided into hours, minutes, and seconds. Since the graph constructed in this paper stores the time element as an attribute of the accident, querying it requires constructing the Cypher statement according to the characteristics of the attribute. This section selects the month, week, and hour of the accident for analysis. The number of accidents is sorted in descending order according to the attributes of each entity in the concept "ACCIDENT," including month, week, and hour. The results to be returned are "month," "week," "hour," and "the number of accidents." The number of accidents is counted, and the results are shown in Figure 11. From the monthly statistics of traffic accidents (Figure 11(a)), March, May, and April were the top three months, with more than 750 accidents per month and an average of more than 25 accidents per day. This is mainly due to the fact that March to May is the period of rapid economic development after the Spring Festival. Economic development is inseparable from the development of the transportation industry, and the resulting excessive traffic flow caused accidents. The lowest number of accidents occurred in August, which is mainly due to the extreme heat and small road traffic flow. From the weekly statistics of traffic accidents (Figure 11(b)), the number of accidents on weekdays (Monday to Friday) was basically maintained at more than 700 accidents, while the number of traffic accidents on Sundays was the smallest. This is mainly due to the large commuting traffic on weekdays. From the hourly statistics of traffic accidents (Figure 11(c)), the occurrence of traffic accidents is mainly concentrated in the morning and evening peaks, which correspond to the time periods from 7:00 to 9:00 and 17:00 to 19:00. This is mainly due to the morning and evening rush hours: drivers and pedestrians in a hurry tend to ignore travel safety, and this, coupled with the increase in traffic flow, passenger flow, and the conflicts between them, can easily lead to accidents. In addition, the morning and evening peaks coincide with the alternation of day and night, and the switching between street lights and natural light at night can easily lead to deviations in the driver's perspective and line of sight, causing misjudgment of some traffic conditions. Wrong driving behavior decisions are also a main cause of accidents.
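A hedged sketch of the kind of aggregation query described in this section is shown below, run through py2neo from Python; the property names (road_name, road_type) and the connection details are assumptions, since the exact property keys of the graph are not reproduced in this excerpt:

# Count accidents per road and return the ten most accident-prone roads.
from py2neo import Graph

graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))  # assumed connection

query = """
MATCH (n:ACCIDENT)-[:Located_in]->(m:ROAD)
RETURN m.road_name AS Road, m.road_type AS Type, COUNT(*) AS Accidents
ORDER BY Accidents DESC
LIMIT 10
"""
# .data() returns the tabular result as a list of dictionaries.
for record in graph.run(query).data():
    print(record["Road"], record["Type"], record["Accidents"])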
Analysis of the Characteristics of Accident Parties
The characteristics of accident parties mainly refer to gender and age. According to the goal of accident analysis, the Cypher sentence can be constructed as follows: "MATCH (n:PEOPLE) RETURN n.person_gender AS Gender, n.person_age_group AS Age, COUNT(*) AS Population ORDER BY Population DESC". The results returned are as shown in Figure 12. Except for people with unknown gender records, male accident parties aged 18 to 50 are the most accident-prone group, more than twice as numerous as female accident parties between 18 and 50 years old. Although male drivers made up a high proportion of all drivers, their greater risk of accidents was also related to their aggressive driving behavior.
Analysis of Accident Association Paths
There may be one or more intermediate entities between different entities in the knowledge graph, in which an association path is formed by the relationships. Through the analysis of the associated paths of accidents, related accident groups can be found. By focusing on the analysis of the intermediate entities, problems existing in traffic management can be found. The length of the associated path can be represented by the number of intermediate entities, as shown in the following formula: L = N + 1 (1). In the formula, L represents the length of the associated path and N represents the number of intermediate entities. If the length of an associated path is 2, there is one entity in the middle. The concepts of the entities at the beginning and end of the path are "ACCIDENT" and "DEPARTMENT," respectively, and the concept of the intermediate entity is "ROAD." From this, the Cypher statement can be constructed as follows: "MATCH p = (n1:ACCIDENT)-[r1]-(n2:ROAD)-[r2]-(n3:DEPARTMENT) RETURN p". The results returned are as shown in Figure 13. It can be clearly seen from the figure that the number of accidents associated with the road "Zhongxin Avenue East" is the largest, and the "Hudong Squadron" of the area to which the road belongs should conduct a special investigation of its traffic safety risks.
Conclusion and Prospect
To study analysis methods for massive traffic accident data, this paper constructs a traffic accident knowledge graph, based on the knowledge graph approach, which integrates the four elements of "people, vehicle, road, and environment." The knowledge hidden in a large amount of structured traffic accident case data stored by the traffic management department is effectively acquired and reused by using the knowledge graph, and the multidimensional and multilevel analysis of traffic accident data is realized. The visual mesh graph is used to directly show the relationships among all kinds of traffic accident knowledge. The research results obtained not only are helpful for researchers to understand the characteristics of traffic accidents and the relationships between causative factors in a more intuitive way but also can provide direct and effective knowledge support and a decision-making basis for traffic management departments to implement reasonable traffic management measures. It is helpful for the prevention of traffic accidents and the overall improvement of the traffic safety environment, so it has important application value. In addition, the method system adopted in constructing the traffic accident knowledge graph enriches the theory of traffic data mining and has a certain theoretical research significance. In the follow-up research, the following points are worth paying attention to: (1) The knowledge scope of traffic accidents is wide.
This paper mainly takes the structured traffic accident case data of specific regions provided by the traffic management department as the knowledge source and constructs the knowledge graph centering on "people, vehicles, roads, and environment." However, due to the single source of knowledge, the constructed KG has certain limitations and does not have universal applicability. In addition to structured data, the carriers of traffic accident-related knowledge also exist in a large number of unstructured text records, web pages, pictures, and videos. Knowledge extraction from multisource heterogeneous data is the focus of research in the next stage. (2) A traffic knowledge graph of superior quality can provide comprehensive and reliable knowledge support for various decision-making needs, such as travel decisions, safety management decisions, and so on. Its application scenarios and value are considerable. This paper only studies one branch of the traffic field, namely traffic accidents, and only studies the application value of the knowledge graph from the angle of accident analysis. The traffic branches studied with knowledge graphs and the application value of knowledge support need to be further expanded and explored. (3) Because the data source contains no latitude and longitude coordinates, only the accident frequency calculated from the traffic accident knowledge graph is used to determine accident-prone roads, which has certain limitations. In the follow-up, the geographical coordinates corresponding to the accident locations will be integrated into the knowledge graph, and graph algorithms [34][35][36] will be used to further improve the accuracy of accident-prone road determination.
v3-fos-license
2018-12-12T21:10:35.087Z
2016-06-13T00:00:00.000
54732081
{ "extfieldsofstudy": [ "Business" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://epress.lib.uts.edu.au/journals/index.php/AJCEB/article/download/4828/5449", "pdf_hash": "efe06d7770b6056f5adf9eebf4580b22660cc6b0", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:287", "s2fieldsofstudy": [ "Political Science" ], "sha1": "efe06d7770b6056f5adf9eebf4580b22660cc6b0", "year": 2016 }
pes2o/s2orc
Factors determining the success of public private partnership projects in Nigeria

The implementation of the public private partnership (PPP) procurement method is expected to help governments in the development of infrastructure and provides an opportunity to reduce governments' debt profiles. This method has been adopted in Nigeria for more than a decade, yet over these years of implementation few infrastructure projects have been delivered through it, and some have been unsuccessful. This study aims to examine PPP project implementation in Nigeria and identify the most critical factors that could determine the success of such projects. A total of 184 questionnaires were received from public and private sector participants in the implementation of PPP projects. An exploratory factor analysis identified seven critical success factors: projects feedback, leadership focus, risk allocation and economic policy, good governance and political support, short construction period, favourable socio-economic factors, and delivering publicly needed service. This study shows that more developmental projects could be delivered through PPP if the government focused on these main factors in the implementation process. The results will influence policy development towards PPP and guide the partners in the development of PPP projects.

Introduction

Governments are increasingly using the public private partnership (PPP) procurement arrangement to deliver works and services in both developed and developing countries (Ng, Wong and Wong, 2012; Li et al., 2005), with the private sector providing public facilities through partnerships in order to address the infrastructure deficit without the financial commitment of the government (Syuhaida and Aminah, 2009). This method is adopted so that the limited available resources can be channelled to other sectors (Udechukwu, 2012). Despite the benefits, the utilization of PPP as a procurement option still remains comparatively low as a percentage of total public investment in infrastructure (RICS, 2011). However, the public sector is looking for the best approaches to satisfy and meet the demands of citizens despite the challenge of inadequate budgets, while citizens are calling for improved public services (Hartmann, Davies and Frederiksen, 2010). This demand is very high in Nigeria, the biggest economy in Africa (The Economist, 2014), whose large population of about 170 million needs to be supported by adequate public infrastructure.
The Global Competitiveness Report 2012-2013 ranked Nigeria 117th out of 144 countries in the quality of overall infrastructure, one of the twelve pillars of competitiveness (World Economic Forum, 2012). The report showed that the government's investments in the provision of public infrastructure and other services in the country contributed very little to the development of public facilities, and the implementation of PPP infrastructure projects has recorded very low performance in Nigeria (Awodele, Ogunlana and Akinradewo, 2012). This study sets out to examine PPP project implementation in Nigeria and to identify the most critical factors that determine the success of such projects. The findings could help industry practitioners to understand the important factors for PPP projects while providing valuable information for organizations that intend to participate in PPP projects in Nigeria. This study applies an industry-wide survey with a large number of respondents to explore the key ingredients for the successful delivery of PPP projects and to examine the perceptions of public and private sector participants. The findings of this study are valuable and useful for the successful application of PPP in the Nigerian construction industry.

Literature review

Critical success factors (CSFs) are those limited areas of an organization's activities that could result in the organization's success and performance (Kwak, Chih and Ibbs, 2009). They have also been defined as "few key areas of activity in which favourable results are absolutely necessary for a particular manager to reach his goals" (Bullen and Rockart, 1981, p3). The concept of critical success factors can be traced to Ronald Daniel, who initiated and used the term 'success factors' in the 1960s (Chien, 2014). This was later made popular by John F. Rockart of the MIT Sloan School of Management in the 1970s, when he introduced 'critical success factors' in an article published in the Harvard Business Review (Bullen and Rockart, 1981; Chien, 2014). Since then, many authors have applied the idea to various fields, and the concept has also been adopted in the construction industry (Sanvido et al., 1992; Chua, Cog and Loh, 1999). The concept of critical success factors has been investigated by many authors for PPP projects (Jefferies, Gameson and Rowlinson, 2002; Li et al., 2005; Cartlidge, 2006; Jacobson and Choi, 2008; Cheung, 2009; Agrawal, 2010; Minnie, 2011; Chou et al., 2012; Ng, Wong and Wong, 2012; Cheung, Chan and Kajewski, 2012; Tang et al., 2013; Ismail, 2013; Wibowo and Alfen, 2014; and Ameyan and Chan, 2015). A comprehensive literature review was conducted to identify CSFs of PPP in the construction industry. Relevant published documents such as textbooks, journal articles, conference papers, research reports and other materials were reviewed. Table 1 presents the summary of the literature review on the key PPP success factors identified by authors from different countries. The table lists fifteen (15) different studies by several authors that cut across various regions. The review identified twenty-eight (28) key success factors for PPP project implementation. These success factors were later included in the questionnaire survey for this study.
The factors identified for private sector participants in the implementation of PPP projects are a strong private consortium (Jefferies, Gameson and Rowlinson, 2002; Li et al., 2005; Zhang, 2005; Cheung, 2009; Ng, Wong and Wong, 2012; and Ameyan and Chan, 2015), appropriate risk allocation and sharing (Li et al., 2005; Zhang, 2005; Cheung, 2009; Wibowo and Alfen, 2014), an available financial market (Li et al., 2005; Cheung, Chan and Kajewski, 2012; Ismail, 2013; Ameyan and Chan, 2015), thorough and realistic cost/benefit assessment (Li et al., 2005 and Chou et al., 2012), economic viability (Zhang, 2005 and Ng, Wong and Wong, 2012), the nature of the contractual agreement (Zhang, 2005; Agrawal, 2010; and Wibowo and Alfen, 2014), a favourable legal framework (Cheung, 2009; Cheung, Chan and Kajewski, 2012; Ismail, 2013; Wibowo and Alfen, 2014; and Ameyan and Chan, 2015), delivering publicly needed service (Minnie, 2011; Ng, Wong and Wong, 2012; and Ismail, 2013), sound economic policy (Cheung, 2009; and Ismail, 2013), and stable macro-economic conditions (Cheung, 2009; and Cheung et al., 2012). Two factors identified as key factors for the delivery of PPP projects by the public sector were alignment with the government's strategic objectives (Ng, Wong and Wong, 2012; and Tang et al., 2013) and strong political support (Wibowo and Alfen, 2014; and Ameyan and Chan, 2015). However, the commitment and responsibility of the public/private sectors (Li et al., 2005; Cartlidge, 2006; Jacobson and Choi, 2008; Cheung, 2009; Chou et al., 2012; Ismail, 2013; and Ameyan and Chan, 2015), true partnership (Cartlidge, 2006; Jacobson and Choi, 2008), and open communication (Cartlidge, 2006; Jacobson and Choi, 2008) are factors common to both public and private sector participants in project implementation. Jefferies, Gameson and Rowlinson (2002) identified three key factors for Australia: a solid consortium with a wealth of expertise, considerable experience, a high profile and a good reputation; an efficient approval process that assists the stakeholders in a very tight timeframe; and innovation in the financing methods of the consortium. Tang et al. (2013) identified five key success factors, including the identification of client/owner requirements; clear and precise briefing documents; feedback from completed projects; and a thorough understanding of client/owner requirements. Thus, a wealth of expertise, experience, and reputation is expected to help in the successful delivery of projects under a PPP arrangement. This expertise is expected to reside in both the public and private sectors of the industry, as both have great roles to play in the system. Similar factors were also reported by Li et al. (2005) and Jacobson and Choi (2008), whereas in China the key success factors for PPP projects are stable macro-economic conditions; a favourable legal framework; sound economic policy; an available financial market; and multi-benefit objectives (Cheung, Chan and Kajewski, 2012). A stable macro-economic condition is seen as an important factor in delivering PPP projects successfully in the country, although China is still recognized as a developing country despite the large size of its economy. If the economic condition is stable, investors will be willing to invest their money, since they expect to recoup their investment in a favourable economic environment. These factors are those identified in countries regarded as developed nations and tend towards the private sector's wealth of experience and the utilization of such experience. Agrawal (2010) identified the concession agreement, a short construction period, and repayment of the debt as the PPP success factors in India, while Minnie (2011) identified delivering a publicly needed service and achieving the objectives of the partnership for South Africa. Ameyan and Chan (2015) identified government (political) commitment, adequate financing, public acceptance/support, a strong and competent private partner, and effective regulatory and legal structures. In Malaysia, Ismail (2013) identified five key success factors for the delivery of PPP projects: good governance; commitment/responsibility of the public/private sectors; a favourable legal framework; sound economic policy; and an available financial market. In Indonesia, the identified key success factors for PPP projects are a sound legal basis; an irrevocable contract; sensible and manageable risk-sharing arrangements; clearly defined coordination mechanisms; and strong political support (Wibowo and Alfen, 2014). The success factors identified in Malaysia and Indonesia are similar to those identified in the other countries mentioned earlier, but in Indonesia an irrevocable contract stands out as a success factor not reported for other countries, which may be due to the state of contract administration within the country. These factors were identified in developing countries and show that the public sector needs to make greater contributions to the development of PPP projects.
Methodology

For the purpose of this study, a questionnaire survey was used as the main research instrument to obtain relevant data from participants who have played key roles in the implementation of PPP projects in the public and private sectors. This was preceded by a rigorous literature review to investigate the current status of the implementation of the PPP procurement system. Three interviews were then conducted with experts on PPP projects in order to seek their opinions on the compiled list of PPP CSFs and the suitability of those factors to local practice. This served as a pilot interview, and the interviewees agreed on the relevance of those factors. The profiles of the participants in the pilot interview are shown in Table 2, and their opinions added value to this study. The questionnaire survey was then conducted. This is an effective method of obtaining the necessary information and data from a large sample of the population for quantitative data analysis (Cheung, 2009). A pilot survey was undertaken with twenty (20) respondents before distributing the questionnaires. This stage involved a larger number of participants in the PPP procurement system than case study or interview approaches would allow.

The questionnaire was divided into two major parts. Part A covered information about the respondent and Part B covered the overall success factors for PPP projects in Nigeria. In Part B, the respondents were requested to rate the PPP success factors using a five-point Likert scale, with the importance of each variable rated on a 1 to 5 scale: a score of 1 represents not important, 2 fairly important, 3 important, 4 very important, and 5 highly important.
A non-probability convenience sampling method was used for the survey, since the actual number of participants could not be ascertained due to the lack of a database describing the population of participants in the PPP procurement arrangement in Nigeria. Snowball sampling was also used to complement the process, whereby identified participants were asked to recommend other key participants to serve as respondents in the data collection process. The respondents included Architects, Quantity Surveyors, Builders, Engineers, Project Managers and other related professionals from the public and private sectors. The questionnaires were administered in Lagos (the commercial capital of Nigeria), FCT Abuja (the administrative capital of Nigeria) and Ogun State in Nigeria. A total of 255 questionnaires were distributed by hand delivery and 184 completed questionnaires were returned, a response rate of 72 per cent. This high response rate was achieved because the researcher was rigorous in making the necessary follow-up, which encouraged the respondents to complete and return the questionnaires as quickly as possible, although some were collected on later days. Another factor was that the time needed to answer the questions was short, which facilitated quick responses: each respondent took an average of 10 minutes and a maximum of 20 minutes to complete the questionnaire. The following steps were taken to avoid non-response from potential respondents: (1) questions concerning personal information about the respondents were not included in the questionnaire, (2) hardcopies of the questionnaire and personal deliveries were used to increase the rates of return, and (3) opportunities were given to respondents who needed more days to answer the questionnaires.
A reliability coefficient test was carried out to determine the degree of reliability of the questionnaire template before it was administered to the respondents. This assesses the internal consistency of items in a questionnaire (Howitt and Cramer, 2008). The critical level for reliability when using Cronbach's alpha is 0.7, and any coefficient below that indicates that the variables are not sufficiently inter-correlated to combine to yield a single latent construct (Fellows and Liu, 2008). In order to achieve reliable results, attributes with coefficients below 0.7 were not included in the analysis. An exploratory factor analysis was then carried out. Factor analysis is a statistical method used for the identification and grouping of a relatively small number of variables that have something in common. It is a multivariate method that reveals relationships among correlated variables that would otherwise be difficult to interpret (Fellows and Liu, 2008), and it can be used to identify patterns in fairly large sets of data with substantial numbers of variables (Howitt and Cramer, 2008). This procedure makes it possible to draw meaningful deductions from the large set of variables when interpreting the outcomes of the questionnaire survey during data analysis. Other important measures considered in the factor analysis process are Bartlett's test of sphericity, the Kaiser-Meyer-Olkin measure (KMO), Measures of Sampling Adequacy (MSA) and factor extraction (Li, 2003). This data analysis technique was used to determine a small number of representative factors that subsume the remaining variables and serve as the most important variables for policy makers and administrators. These factors would support the decision-making process, and political leaders and private investors could focus on these areas in PPP project development as a guide.
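As a minimal illustration of the internal-consistency check described above, the sketch below computes Cronbach's alpha for a block of Likert-scale items with numpy and pandas. The data frame, item names, and random ratings are hypothetical stand-ins; the study itself ran this check in SPSS on its survey data.

```python
# Minimal sketch of the reliability check described above: Cronbach's alpha for a
# set of Likert-scale questionnaire items. Data are hypothetical; the study used SPSS.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 184 hypothetical respondents rating 13 success factors on a 1-5 scale.
    # Purely random ratings will give a low alpha; real, correlated responses
    # are needed to approach the 0.7 acceptability threshold cited in the text.
    responses = pd.DataFrame(
        rng.integers(1, 6, size=(184, 13)),
        columns=[f"factor_{i + 1}" for i in range(13)],
    )
    print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```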
Results and discussion

The information on the respondents that participated in the survey is presented in Table 3 and Figures 1a and 1b, covering respondents from the public and private sectors. The results show that 136 respondents were from the public sector and 48 respondents from the private sector, representing 74% and 26% of the respondents respectively. This indicates that respondents from the public sector participated in the survey more than those from the private sector; in practice, it was much easier to approach public sector participants than their counterparts in private organizations. Often, the private sector participants were reluctant to provide information concerning their organizations without due authorization from management. The ratio of private sector respondents to the variables being measured is considered low for factor analysis, which is a limitation of this study. Figure 1a shows that 29% of the respondents had between 11 and 15 years of working experience on PPP projects, while 32% had between 6 and 10 years. Although about 39% of the respondents had less than 5 years of working experience on PPP projects, the result still shows that a majority of the respondents (61%) had good working experience. The results also show that the majority of respondents were involved in the Infrastructure and Housing sectors, with 65% and 34% respectively (Figure 1b). These two sectors represent a total of 99% of the PPP projects by sector, while other sectors make up the remaining 1%. A reliability test was then carried out using SPSS before proceeding with the analysis of the survey data. The results show that the data collected are reliable, with coefficients of 0.812 and 0.781 for the public and private sectors respectively (Fellows and Liu, 2008). Factor analysis was applied to the PPP success factors in order to detect underlying relationships among them, which were then described by fewer, more comprehensive factors. This analysis was necessary to determine the few factors that stakeholders and participants in the implementation process could concentrate on in order to achieve the objectives of using this procurement method. The results of the factor analysis of variables that enhance the success of PPP project implementation are listed in Tables 6, 7, 8 and 9. There were twenty-eight (28) variables in total for both the public and private sectors, with thirteen (13) variables for the public sector and fifteen (15) variables for the private sector. The values of the test statistic for sphericity (Bartlett test of sphericity = 611.055 and 599.812 for the public and private sectors respectively) are considered very large, and the associated significance levels are small (p = 0.000), suggesting that the population correlation matrix is not an identity matrix. This can be interpreted as all variables in the correlation matrix showing a significant correlation at the 5% level. These results show that the principal component analysis could proceed without eliminating any of the variables. The values of the KMO statistic are 0.74 and 0.767 for the public and private sectors respectively, which are considered satisfactory for factor analysis. The principal component analysis produced a three-factor solution for the public sector and a four-factor solution for the private sector with eigenvalues greater than 1.000. Table 8 shows the three component factors for the public sector, which accounted for 60% of the total variance from the thirteen variables. This is the factor grouping based on varimax rotation, and the loading on each factor exceeds 0.40. Each variable is weighted heavily on only one of the factors. The three principal factors are interpreted as follows:

Factor 1 - Leadership focus

This principal factor accounts for 30.44% of the total variance of PPP success factors. It is composed of nine sub-factors: multi-benefit objectives, true partnership, alignment with the government's strategic objectives, clear and precise briefing documents, clearly defined coordination mechanisms, open communication, an efficient approval process, the nature of the contractual agreement, and thorough and realistic cost/benefit assessment. This result shows that the public sector must be able to provide the needed leadership role in the implementation of PPP projects. It should focus, while considering the objectives of utilizing PPP procurement, on making the partnership work and aligning the implementation process with the government's objectives; these three components have high loadings (0.675, 0.636 and 0.629 respectively). The public sector must also assemble a team that can produce clear and precise briefing documents (loading 0.598) in order to give good direction to the private sector participants. In giving this direction, the public sector should identify clearly defined coordination mechanisms (loading 0.574) and provide open communication (loading 0.565) with the private sector and the general public. An efficient approval process (loading 0.553), the nature of the contractual agreement (loading 0.521), and thorough and realistic cost/benefit assessment (loading 0.521) are the remaining three components that are also considered important for the public sector in the implementation of PPP projects. These factors are considered very important for the successful implementation of public projects using the PPP procurement method.

Factor 2 - Risk allocation and economic policy

This principal factor accounts for 12.60% of the total variance of PPP success factors. It is composed of three sub-factors: appropriate risk allocation and sharing (loading 0.567), sound economic policy (loading 0.527) and achieving the objectives of the partnership (loading 0.490). The management of the risk associated with PPP projects is very important to the success of the project, as confirmed by earlier studies (Li et al., 2005; Zhang, 2005; Cheung, 2009; Wibowo and Alfen, 2014). Therefore, the public sector should ensure that each risk is allocated to the party best suited to manage it. Another important factor for the success of PPP projects is the availability of sound economic policy from the political leadership. This factor gives direction to the development of the economy and provides an environment conducive to the implementation of PPP projects. If such an environment is available, the objectives of the partnership can be achieved: effective development of infrastructure for the public sector and the maximization of profit for the private sector partners.
Factor 3 - Projects feedback

This principal factor accounts for 16.68% of the total variance of PPP success factors. It is composed of one sub-factor, feedback from completed projects, with a loading of 0.599. This result shows that the process of reviewing and obtaining feedback from completed PPP projects provides useful information for the future implementation of such projects. Identified challenges can then be avoided, while success factors can be noted and incorporated into future projects by the public sector participants.

Table 9 shows the four component factors for the private sector, which accounted for 54% of the total variance from the fifteen variables. This is the factor grouping based on varimax rotation, and the loading on each factor exceeds 0.40. Each variable is weighted heavily on only one of the factors. The four principal factors are interpreted below.

Factor 1 - Favourable socio-economic factors

This principal factor accounts for 26.54% of the total variance of PPP success factors. It is composed of eleven sub-factors: stable macro-economic conditions (loading 0.662), favourable investment environment (0.651), commitment and responsibility of the public/private sectors (0.650), innovation in the financing methods of the consortium (0.618), available financial market (0.580), economic viability of the project (0.551), strong private consortium (0.548), favourable legal framework (0.544), repayment of the debt (0.484), identification and understanding of client/owner requirements (0.481) and sound financial package (0.473). This result shows that the implementation of PPP projects will only be successful in a favourable investment environment, complemented by the commitment of the public and private sector participants. The review of PPP project implementation in other countries showed that the successful delivery of PPP projects in countries such as the United Kingdom, Australia, Singapore, Hong Kong, Malaysia and South Africa (Li et al., 2005; Tang et al., 2013; Hwang, Zhao and Gay, 2013; Ng, Wong and Wong, 2012; Ismail, 2013; and Minnie, 2011 respectively) was due to conducive investment environments. International and local investors were attracted to those countries because they knew that their rights would be protected under the law and that they could readily obtain justice from the courts in cases where their rights were denied. This is seriously lacking in Nigeria, where contract provisions are frequently breached and court orders are frequently disobeyed.
Factor 2 - Good governance and political support

This principal factor accounts for 10.55% of the total variance of PPP success factors. It is composed of two sub-factors, good governance and strong political support, with loadings of 0.511 and 0.463. This result shows the importance of political leaders in creating policies that drive the development of infrastructure and public services through the PPP procurement method. An earlier study likewise showed the importance of political support, which was found to be the top critical success factor in the implementation of PPP projects in the United Arab Emirates (UAE) (Dulaimi et al., 2010); that study also concluded that this is relevant to most Middle East countries where governments' influence is strong. In the implementation of PPP as a procurement system, the support of political leaders and citizens is vital to the success of the arrangement: a lack of political support could weaken the commitment of the public sector to the projects, and public opinion against PPP could hinder the development of those projects. Moreover, social support should assist the process and allow smooth management of the facilities in terms of the payment of tolls and other commitments from the public. Therefore, political leaders are expected to carry out a thorough assessment of the costs and associated benefits of the projects to determine the outcome of the process.

Factor 3 - Short construction period

This principal factor accounts for 8.78% of the total variance of PPP success factors. It is composed of one sub-factor, short construction period, with a loading of 0.831, the highest loading among the success factors for the public and private sectors. This shows that this factor is very important to the success of private investors in PPP project implementation. It is also one of the three most important PPP success factors in India (Agrawal, 2010). India and Nigeria are both developing countries, and the results of the two studies show common outcomes, while their social and environmental contexts are similar. Hence, this result is relevant to developing countries, and it is necessary for a successful business case that this factor be achievable in the implementation of PPP projects. If the project is delivered on time, the project cost associated with project management is reduced and the investors can recoup the funds invested in the project.

Factor 4 - Delivering publicly needed service

This principal factor accounts for 7.92% of the total variance of PPP success factors. It is composed of one sub-factor, delivering publicly needed service, with a loading of 0.528. This result shows that the provision of public services that meet the expectations of the direct users of such facilities is very important for public acceptability. If the project is accepted by the public, the users will be willing to pay for the use of the facilities and the investors will be able to maximize their profits. This situation provides a win-win result, which is an important objective of the PPP procurement system.
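The extraction logic reported in this results section (retain principal components with eigenvalues greater than 1, rotate the loadings with varimax, and report loadings above 0.40) can be sketched with numpy alone, as below. The input matrix is simulated rather than the study's survey data, and the varimax routine is a standard textbook implementation, not the SPSS procedure the authors actually used.

```python
# Sketch of the factor-extraction steps described above: keep principal components
# whose eigenvalue exceeds 1 (Kaiser criterion), apply varimax rotation, and show
# loadings above 0.40. The input data are simulated; the study used SPSS.
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Standard varimax rotation of a (variables x factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return loadings @ rotation

def extract_factors(responses, loading_cutoff=0.40):
    # Principal components of the correlation matrix; keep eigenvalues > 1.
    corr = np.corrcoef(responses, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(corr)
    order = np.argsort(eigenvalues)[::-1]
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
    keep = eigenvalues > 1.0
    loadings = eigenvectors[:, keep] * np.sqrt(eigenvalues[keep])
    rotated = varimax(loadings)
    # Suppress small loadings, mirroring the 0.40 reporting threshold in the text.
    return eigenvalues, np.where(np.abs(rotated) >= loading_cutoff, rotated, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(size=(184, 13))  # hypothetical stand-in for the 13 public-sector items
    eigenvalues, rotated_loadings = extract_factors(data)
    print("Eigenvalues:", np.round(eigenvalues, 2))
    print("Rotated loadings (|loading| >= 0.40):")
    print(np.round(rotated_loadings, 2))
```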
Success factors model for PPP project implementation

The factor analysis identified the principal factors that are critical to the delivery of PPP projects within the study area. The principal success factors identified for the public sector participants were leadership focus, risk allocation and economic policy, and projects feedback, whereas the four principal factors identified for the private investors in the implementation of PPP projects are favourable socio-economic factors, good governance and political support, short construction period, and delivering publicly needed service. These factors are considered the most important and, if acted upon, would improve the implementation of the PPP procurement method while acting as a catalyst for the development of infrastructure in the country. These principal factors summarize the success factors required for the implementation of PPP projects in this study. The public sector serves as the client or owner of the projects under a PPP arrangement, so the likelihood of success is enhanced when the public sector participants take up the leadership role to ensure that all issues relating to the projects are resolved as quickly as possible. As the leader and owner of the project, the public sector should create avenues for the parties to benefit from the arrangement.

Figure 2 shows the public and private sectors' success factors model, which incorporates the seven main factors considered the most important to the successful delivery of PPP projects in Nigeria. Three principal factors (projects feedback, leadership focus, and risk allocation and economic policy) contribute to PPP project success through the participation of the public sector. These factors provide a good foundation for project development during the project planning and procurement stages, and their inclusion in the implementation of PPP projects greatly improves the prospects of project success. The other four principal factors (good governance and political support, short construction period, favourable socio-economic factors, and delivering publicly needed service) support the success of project development through private sector participation and must be present during the construction and operation stages of the PPP project. The success factors model in Figure 2 shows that the PPP project implementation process must be supported by the identified main factors in order to produce an efficient and successful public project. This result shows that the seven identified factors are critical to the process of delivering PPP projects in Nigeria. The first success factor for the public sector is 'projects feedback'; this factor is expected to assist the public sector participants in understanding the challenges and addressing them in order to improve future projects. 'Risk allocation' is another factor considered important at the planning stage of the project, as it helps in allocating the risks to the party in the best position to manage them. A sound 'economic policy' of the government gives direction to the public sector participants, while the 'leadership focus' of the public sector helps the whole process achieve the objectives of PPP project implementation. On the other hand, the private sector participants need 'good governance and political support' to succeed in the implementation of PPP projects, while 'favourable socio-economic factors' provide an enabling environment for the investors. Furthermore, a 'short construction period' and 'delivering publicly needed service' are factors that help the private investors to achieve their objectives of participating and investing in PPP projects. Profit maximization and the organization's reputation are some of the objectives of the private sector's participation in the project. Thus, if the private sector can achieve its aims in participating, the whole PPP project implementation process should succeed, creating a win-win situation for the PPP procurement system.

Conclusions

The successful implementation of PPP projects is crucial to the development of social and economic infrastructure. The factor analysis identified the seven most important factors that are critical to a successful PPP project. These factors form the cardinal areas that governments should focus on for the improvement and development of the PPP procurement method, especially in developing countries, helping to realize the benefits of adopting private financing for the delivery of public projects. Moreover, the availability of a mature political environment would encourage participants in the implementation process, while a favourable socio-economic environment provides an enabling setting for growth and development. Despite the benefits, one of the limitations of this study is that its outcomes may only be applicable in developing countries with similar experience regarding the implementation of PPP projects. Furthermore, nearly forty percent of the respondents of this study had limited experience working on PPP projects. Therefore, any future study is expected to consider additional respondents with work experience of more than five years, and the proportion of total variance explained should be at least 75% for the factor analysis; its results should be compared with the outcome of this study. Subsequently, test-retest reliability and alternate-form reliability methods for an external consistency reliability assessment of the survey, which were not utilized here, could be considered in future studies. Nevertheless, the results should assist foreign participants, especially those from developed countries, to understand the factors that promote investment in the system. Participants could be better informed about the factors that are likely to help in the delivery and implementation of PPP projects in the region.

Figure 1: (a) Participants' experiences on PPP projects; (b) PPP project category. Figure 2: Public and private sectors' success factors model for PPP projects. Table 1: PPP success factors from literature review. Table 2: Profiles of interviewees for pilot study. Table 3: Number of respondents on survey. Table 4: Result of data reliability (public sector). Table 5: Result of data reliability (private sector). Table 8: Rotated factor matrix (loading) of CSFs for public sector. Table 9: Rotated factor matrix (loading) of CSFs for private sector.
v3-fos-license
2024-03-27T15:21:49.305Z
2024-03-25T00:00:00.000
268717548
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1420-3049/29/7/1465/pdf?version=1711383965", "pdf_hash": "8b1c18794d96d3e1a63f1757892bae0cd9257e49", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:288", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "99c7666a6ba431910b684ac133ace55dba214635", "year": 2024 }
pes2o/s2orc
Radical Mediated Decarboxylation of Amino Acids via Photochemical Carbonyl Sulfide (COS) Elimination

Herein, we present the first examples of amino acid decarboxylation via photochemically activated carbonyl sulfide (COS) elimination of the corresponding thioacids. This method offers a mild approach for the decarboxylation of amino acids, furnishing N-alkyl amino derivatives. The methodology was compatible with amino acids displaying both polar and hydrophobic sidechains and was tolerant towards widely used amino acid-protecting groups. The compatibility of the reaction with continuous-flow conditions demonstrates the scalability of the process.

Introduction

Since the pioneering study by Barton and co-workers into radical mediated reactions for decarboxylation, these methods have persisted as an essential tool-kit for organic chemists seeking to carry out chemoselective modifications of carboxylic acid residues on organic substrates [1,2]. Recently, the decarboxylation of carboxylic acids under photoredox catalysis has emerged as a mild and facile method for the formation of alkyl radicals which can be trapped to furnish a diverse range of products [3-6]. MacMillan and co-workers have also reported photoredox-catalyzed decarboxylative arylation of α-amino acids (AAs) in the context of the conversion of biomass into useful pharmacophores [7].

The archetypical thiol-ene reaction furnishes a robust thioether bond between a thiol and an unsaturated residue, and this 'click' reaction has been widely exploited for bioconjugation reactions, including peptide and protein modification [8,9]. Recently, we reported the acyl counterpart of the thiol-ene reaction, the acyl thiol-ene (ATE) reaction, as a method to furnish thioesters from the addition of thioacids onto alkenes under radical mediated conditions. Thioacids possess bond-dissociation energies in the same range as alkyl thiols (87 kcal mol−1) and radical formation can therefore be initiated under identical conditions [10]. ATE retains the covetable characteristics of the thiol-ene reaction and is believed to proceed via the same reaction mechanism. The ATE reaction has already been exploited by the Scanlan group to synthesize thiolactones via intra- and intermolecular reactions, and to obtain peptidyl/carbohydrate thioesters suitable for S-to-N acyl transfer to form amide bonds [11-13].
During our investigations into ATE ligation of amino acids and peptides, COS elimination of the thiyl radical emerged as a competing side reaction to the desired thioester formation for di-substituted thioacid substrates [13]. Presumably due to the instability of primary radical intermediates, COS elimination was not observed as a side reaction for monosubstituted thioacids [13]. To the best of our knowledge, only one previous example of COS elimination under radical mediated conditions exists in the literature, whereby Shimizu and co-workers demonstrated the thermally induced radical dethiocarboxylation of C-terminal thioacid peptides in aqueous buffer using the radical initiator VA-044 to furnish alkylated amide peptides (Figure 1a) [14]. Under aqueous conditions, the authors reported that hydrolysis of the C-terminal thioacid to the corresponding carboxylic acid was a minor but notable side reaction, especially with non-sterically hindered AAs such as glycine and alanine at the C-terminus [14]. A proposed mechanism for the radical mediated COS elimination is depicted in Figure 1b. Following the formation of a thiyl radical via photoinitiation from a suitable thioacid, the elimination of COS forms a carbon-centered radical. Subsequent hydrogen atom transfer (HAT) from the photosensitizer or another thioacid furnishes the dethiocarboxylated product.

Herein, we report the photochemical, radical-mediated dethiocarboxylation of broadly available amino acids via carbonyl sulfide (COS) elimination of the corresponding thioacid derivative, furnishing N-alkyl amine products (Figure 1c). This reaction proceeds rapidly under UV and blue LED light irradiation. In addition, the reaction is compatible with continuous-flow conditions, demonstrating the potential scalability of the reaction.

Optimization of Dethiocarboxylation Reaction

In order to study the dethiocarboxylation reaction, we required access to thioacid derivatives of AAs. Due to the propensity of thioacids to oxidize and hydrolyze during prolonged storage, we employed a strategy developed by Crich and co-workers whereby the thioacid is protected as an S-trityl (STrt) thioester which can be deprotected rapidly with trifluoroacetic acid to furnish the thioacid in quantitative yield after concentration in vacuo [15]. The STrt thioesters are prepared from the corresponding carboxylic acids by coupling with triphenylmethanethiol (TrtSH) under Steglich thioesterification conditions (Scheme 1) [15].
Optimization of Dethiocarboxylation Reaction In order to study the dethiocarboxylation reaction, we required access to thioacid derivatives of AAs.Due to the propensity of thioacids to oxidize and hydrolyze during prolonged storage, we employed a strategy developed by Crich and co-workers whereby the thioacid is protected as an S-trityl (STrt) thioester which can be deprotected rapidly with trifluoroacetic acid to furnish the thioacid in quantitative yield after concentration in vacuo [15].The STrt thioesters are prepared from the corresponding carboxylic acids by coupling with triphenylmethanethiol (TrtSH) under Steglich thioesterification conditions (Scheme 1) [15].Scheme 1.General procedure used to synthesize thioacids. To optimize conditions for the dethiocarboxylation reaction, an N-fluorenylmethoxycarbonyl (Fmoc) derivative of the amino acid phenylalanine (Phe) was selected as a model substrate.The Fmoc-Phe-STrt thioester 1a was synthesized from Fmoc-Phe-OH and deprotected to the corresponding thioacid, Fmoc-Phe-SH (TA-1), with 25% v/v TFA/DCM in 5 min.The conversion was monitored by 13 C NMR spectroscopy to ensure there was quantitative conversion to the desired thioacid (Figure S1) before proceeding with screening conditions for the dethiocarboxylation reaction outlined in Table 1.High yields of the product, N-Fmoc-protected phenethylamine 2a, formed by the dethiocarboxylation of TA-1, were observed under both UV light irradiation (354 nm) using 2,2-dimethoxy-2-phenylacetophenone (DPAP) as a photoinitiator (Table 1, entries 1-2) and blue LED light irradiation (440 nm) with Eosin Y as a photoredox catalyst (Table 1, entries 3-6).Using UV irradiation, an optimal yield of 96% was observed at 15 min reaction time with 0.2 equiv. of DPAP in EtOAc (Table 1, entry 2).The reaction with blue LED light irradiation with 0.25 equiv. of Eosin Y for 1 h in EtOAc gave the best result with a 97% NMR yield (Table 1, entry 4).Having comparable results between the two different light sources, blue LED irradiation was selected to take forward for batch synthesis since it is a milder initial energy source and can be carried out using inexpensive light sources.Moreover, visible light-induced radical reactions benefit from the incapability of most organic molecules to absorb visible light, minimizing possible side reactions and the decomposition of reactants or products by photoactivation.An attempt with a catalytic amount of Eosin Y (1 mol%) gave a satisfactory yield of 72% in 1 h (Table 1, entry 6).A low yield of 6% was obtained without any Eosin Y under blue LED light irradiation (Table 1, entry 7), with the low conversion likely arising from formation of thiyl radicals from molecular O2-derived free radicals in solution [16].When the reaction was conducted in the dark without blue LED irradiation, no reaction was observed (Table 1, entry 8).The radical Scheme 1.General procedure used to synthesize thioacids. 
To optimize conditions for the dethiocarboxylation reaction, an N-fluorenylmethoxycar bonyl (Fmoc) derivative of the amino acid phenylalanine (Phe) was selected as a model substrate.The Fmoc-Phe-STrt thioester 1a was synthesized from Fmoc-Phe-OH and deprotected to the corresponding thioacid, Fmoc-Phe-SH (TA-1), with 25% v/v TFA/DCM in 5 min.The conversion was monitored by 13 C NMR spectroscopy to ensure there was quantitative conversion to the desired thioacid (Figure S1) before proceeding with screening conditions for the dethiocarboxylation reaction outlined in Table 1.To optimize conditions for the dethiocarboxylation reaction, an N-fluorenylmethoxycarbonyl (Fmoc) derivative of the amino acid phenylalanine (Phe) was selected as a model substrate.The Fmoc-Phe-STrt thioester 1a was synthesized from Fmoc-Phe-OH and deprotected to the corresponding thioacid, Fmoc-Phe-SH (TA-1), with 25% v/v TFA/DCM in 5 min.The conversion was monitored by 13 C NMR spectroscopy to ensure there was quantitative conversion to the desired thioacid (Figure S1) before proceeding with screening conditions for the dethiocarboxylation reaction outlined in Table 1.High yields of the product, N-Fmoc-protected phenethylamine 2a, formed by the dethiocarboxylation of TA-1, were observed under both UV light irradiation (354 nm) using 2,2-dimethoxy-2-phenylacetophenone (DPAP) as a photoinitiator (Table 1, entries 1-2) and blue LED light irradiation (440 nm) with Eosin Y as a photoredox catalyst (Table 1, entries 3-6).Using UV irradiation, an optimal yield of 96% was observed at 15 min reaction time with 0.2 equiv. of DPAP in EtOAc (Table 1, entry 2).The reaction with blue LED light irradiation with 0.25 equiv. of Eosin Y for 1 h in EtOAc gave the best result with a 97% NMR yield (Table 1, entry 4).Having comparable results between the two different light sources, blue LED irradiation was selected to take forward for batch synthesis since it is a milder initial energy source and can be carried out using inexpensive light sources.Moreover, visible light-induced radical reactions benefit from the incapability of most organic molecules to absorb visible light, minimizing possible side reactions and the decomposition of reactants or products by photoactivation.An attempt with a catalytic amount of Eosin Y (1 mol%) gave a satisfactory yield of 72% in 1 h (Table 1, entry 6).A low yield of 6% was obtained without any Eosin Y under blue LED light irradiation (Table 1, entry 7), with the low conversion likely arising from formation of thiyl radicals from molecular O2-derived free radicals in solution [16].When the reaction was conducted in the dark without blue LED irradiation, no reaction was observed ( High yields of the product, N-Fmoc-protected phenethylamine 2a, formed by the dethiocarboxylation of TA-1, were observed under both UV light irradiation (354 nm) using 2,2-dimethoxy-2-phenylacetophenone (DPAP) as a photoinitiator (Table 1, entries 1-2) and blue LED light irradiation (440 nm) with Eosin Y as a photoredox catalyst (Table 1, entries 3-6).Using UV irradiation, an optimal yield of 96% was observed at 15 min reaction time with 0.2 equiv. of DPAP in EtOAc (Table 1, entry 2).The reaction with blue LED light irradiation with 0.25 equiv. 
of Eosin Y for 1 h in EtOAc gave the best result with a 97% NMR yield (Table 1, entry 4).Having comparable results between the two different light sources, blue LED irradiation was selected to take forward for batch synthesis since it is a milder initial energy source and can be carried out using inexpensive light sources.Moreover, visible light-induced radical reactions benefit from the incapability of most organic molecules to absorb visible light, minimizing possible side reactions and the decomposition of reactants or products by photoactivation.An attempt with a catalytic amount of Eosin Y (1 mol%) gave a satisfactory yield of 72% in 1 h (Table 1, entry 6).A low yield of 6% was obtained without any Eosin Y under blue LED light irradiation (Table 1, entry 7), with the low conversion likely arising from formation of thiyl radicals from molecular O 2 -derived free radicals in solution [16].When the reaction was conducted in the dark without blue LED irradiation, no reaction was observed (Table 1, entry 8).The radical nature of the dethiocarboxylation reaction was confirmed by the addition of 2 equiv. of TEMPO, which resulted in no reaction (Table 1, entry 9). Reaction Scope With the optimized conditions in hand, STrt thioesters 1a-1k were prepared in good yields from commercially available and suitably protected amino acid derivatives (Scheme 2).The STrt thioesters were deprotected by treatment with varying concentrations of TFA, depending on whether concomitant deprotection of the N-terminal and side chainprotecting groups were also required to furnish the required thioacids (Section 3.4).The thioacids were concentrated in vacuo following deprotection and used promptly without further purification. nature of the dethiocarboxylation reaction was confirmed by the addition of 2 equiv. of TEMPO, which resulted in no reaction (Table 1, entry 9). Reaction Scope With the optimized conditions in hand, STrt thioesters 1a-1k were prepared in good yields from commercially available and suitably protected amino acid derivatives (Scheme 2).The STrt thioesters were deprotected by treatment with varying concentrations of TFA, depending on whether concomitant deprotection of the N-terminal and side chain-protecting groups were also required to furnish the required thioacids (Section 3.4).The thioacids were concentrated in vacuo following deprotection and used promptly without further purification.The results of the scope using the optimized conditions are summarized in Scheme 3. 
High isolated yields of the N-Fmoc phenethylamine 2a (72%) and N-Boc phenethylamine 2b (79%) were obtained following dethiocarboxylation, demonstrating the suitability of the method towards the two most common α-amino-protecting groups used for AAs.The free amino derivative 2c was not isolated.Since the dethiocarboxylation reaction requires a protonated thioacid for proton abstraction to form the thiyl radical, the α-amino group will always be in the protonated alkylammonium form.This may act as an electron withdrawing group that destabilizes the adjacent carbon centered radical that would form upon COS elimination.Similar results have been reported for the thiol-ene click reaction, where proximal α-amino groups have been shown to prevent reaction on peptide substrates [17].N-Ac phenethylamine 2d was isolated in a poor yield of 27% due to an unidentified side reaction during the deprotection of thioester 1c to the corresponding thioacid.The glycine derivative 2e was only detected in trace amounts by 1 H NMR spectroscopy, with the low rate of COS elimination likely attributable to the high energy primary carbon-centered radical intermediate that is formed.The fully protected serine derivative The results of the scope using the optimized conditions are summarized in Scheme 3. High isolated yields of the N-Fmoc phenethylamine 2a (72%) and N-Boc phenethylamine 2b (79%) were obtained following dethiocarboxylation, demonstrating the suitability of the method towards the two most common α-amino-protecting groups used for AAs.The free amino derivative 2c was not isolated.Since the dethiocarboxylation reaction requires a protonated thioacid for proton abstraction to form the thiyl radical, the α-amino group will always be in the protonated alkylammonium form.This may act as an electron withdrawing group that destabilizes the adjacent carbon centered radical that would form upon COS elimination.Similar results have been reported for the thiol-ene click reaction, where proximal α-amino groups have been shown to prevent reaction on peptide substrates [17].N-Ac phenethylamine 2d was isolated in a poor yield of 27% due to an unidentified side reaction during the deprotection of thioester 1c to the corresponding thioacid.The glycine derivative 2e was only detected in trace amounts by 1 H NMR spectroscopy, with the low rate of COS elimination likely attributable to the high energy primary carbon-centered radical intermediate that is formed.The fully protected serine derivative 2f was isolated in a moderate yield of 51%.Deprotection of tert-butyl ether to the hydroxy group resulted in a significant drop in yield to 16% (2g), which increased to 42% with the more substituted threonine derivative 2h.N-alkyl derivatives of proline (2i), methionine (2j), glutamic acid (2k), lysine (2l), and tryptophan (2m) were isolated in low to moderate yields after dethiocarboxylation of the corresponding thioacids.In the case of 2e-2m, incomplete consumption of the corresponding thioacids were observed by analytical thin-layer chromatography (TLC) after 1 h reaction time under blue LED irradiation, which suggests that increasing reaction times could increase the isolated yield.No obvious general trends were observed for the reactivity of the AAs studied. 
Molecules 2024, 29, 1465 5 of 12 2f was isolated in a moderate yield of 51%.Deprotection of tert-butyl ether to the hydroxy group resulted in a significant drop in yield to 16% (2g), which increased to 42% with the more substituted threonine derivative 2h.N-alkyl derivatives of proline (2i), methionine (2j), glutamic acid (2k), lysine (2l), and tryptophan (2m) were isolated in low to moderate yields after dethiocarboxylation of the corresponding thioacids.In the case of 2e-2m, incomplete consumption of the corresponding thioacids were observed by analytical thinlayer chromatography (TLC) after 1 h reaction time under blue LED irradiation, which suggests that increasing reaction times could increase the isolated yield.No obvious general trends were observed for the reactivity of the AAs studied. Dethiocarboxylation in Flow In recent years, there has been growing interest in academia and industry in the application of photochemical reactions to continuous flow to increase reaction selectivity, efficiency, and productivity [18,19].The scaling up of photochemical reactions in batch can suffer from low yields due to poor light penetration, which is easily overcome in flow as the narrow tubing ensures uniform irradiation of the reaction mixture [20].With this in mind, we set out to investigate whether the photocatalyzed dethiocarboxylation was compatible with flow.The model reaction used for the batch optimization with the dethiocarboxylation of Phe thioacid TA-1 was applied to flow using UV irradiation since we wanted to minimize reaction time and increase throughput.Gratifyingly, N-Fmoc phenethylamine 2a was isolated in an excellent yield of 86% with 5 min residence time of the reagents under UV irradiation on a 1 mmol scale (Scheme 4).This result demonstrates compatibility of the dethiocarboxylation reaction with flow and the potential scalability of the process. Dethiocarboxylation in Flow In recent years, there has been growing interest in academia and industry in the application of photochemical reactions to continuous flow to increase reaction selectivity, efficiency, and productivity [18,19].The scaling up of photochemical reactions in batch can suffer from low yields due to poor light penetration, which is easily overcome in flow as the narrow tubing ensures uniform irradiation of the reaction mixture [20].With this in mind, we set out to investigate whether the photocatalyzed dethiocarboxylation was compatible with flow.The model reaction used for the batch optimization with the dethiocarboxylation of Phe thioacid TA-1 was applied to flow using UV irradiation since we wanted to minimize reaction time and increase throughput.Gratifyingly, N-Fmoc phenethylamine 2a was isolated in an excellent yield of 86% with 5 min residence time of the reagents under UV irradiation on a 1 mmol scale (Scheme 4).This result demonstrates compatibility of the dethiocarboxylation reaction with flow and the potential scalability of the process. 
Materials and Methods All commercial chemicals used were supplied by Sigma Aldrich (Merck), Fluorochem, VWR Carbosynth, and Tokyo Chemical Industry and used without further purification unless otherwise stated.Deuterated solvents for NMR were purchased from Sigma Aldrich (Merck) or VWR.Solvents for synthesis purposes were used at HPLC grade.All UV reactions were carried out in a Luzchem photoreactor, LZC-EDU (110 V/60 Hz), containing 14 UVA lamps centered at 354 nm.All blue LED reactions were carried out using 2 Kessil PR160-440 LED lamps centered at 440 nm.Silica gel 60 (Merck, 230-400 mesh) was used for silica gel flash chromatography and all compounds were subject to purification using silica gel, unless otherwise stated.Analytical thin layer chromatography (TLC) was carried out with silica gel 60 (fluorescence indicator F254; Merck) and visualized by UV irradiation or molybdenum staining [ammonium molybdate (5.0 g) and concentrated H2SO4 (5.3 mL) in 100 mL H2O].NMR spectra were recorded using Bruker DPX 400 (400.13MHz for 1 H NMR and 100.61MHz for 13 C NMR), Bruker AV 400 (400.13MHz for 1 H NMR and 100.61MHz for 13 C NMR), or Agilent MR400 (400.13MHz for 1 H NMR and 100.61MHz for 13 C NMR) instruments.Chemical shifts, δ, are in ppm and referenced to the internal solvent signals.NMR data was processed using MestReNova software (version 14.3.3).ESI mass spectra were acquired in positive and negative modes as required, using a Micromass TOF mass spectrometer interfaced to a Waters 2690 HPLC or a Bruker micrOTOF-Q III spectrometer interfaced to a Dionex UltiMate 3000 LC. General Procedure for the Synthesis of STrt Thioesters 1a-1k (GP-1) To a solution of carboxylic acid (1 equiv., 0.2 M) in anhydrous DCM under argon was added 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (1.2 equiv.)and the solution was stirred at 0 °C for 10 min.Then, 4-Dimethylaminopyridine (0.2 equiv.)and triphenylmethanethiol (1 equiv.)were added and the mixture was stirred at room temperature for 18 h.The solvent was evaporated in vacuo and the residue obtained was purified directly by silica gel flash chromatography (n-hexane/EtOAc gradient) to afford the desired compound. 
Materials and Methods

All commercial chemicals used were supplied by Sigma Aldrich (Merck, Burlington, MA, USA), Fluorochem, VWR Carbosynth, and Tokyo Chemical Industry and were used without further purification unless otherwise stated. Deuterated solvents for NMR were purchased from Sigma Aldrich (Merck) or VWR. Solvents for synthesis purposes were used at HPLC grade. All UV reactions were carried out in a Luzchem photoreactor, LZC-EDU (110 V/60 Hz), containing 14 UVA lamps centered at 354 nm. All blue LED reactions were carried out using 2 Kessil PR160-440 LED lamps centered at 440 nm. Silica gel 60 (Merck, 230-400 mesh) was used for silica gel flash chromatography and all compounds were subjected to purification using silica gel, unless otherwise stated. Analytical thin-layer chromatography (TLC) was carried out with silica gel 60 (fluorescence indicator F254; Merck) and visualized by UV irradiation or molybdenum staining [ammonium molybdate (5.0 g) and concentrated H2SO4 (5.3 mL) in 100 mL H2O]. NMR spectra were recorded using Bruker DPX 400 (400.13 MHz for 1H NMR and 100.61 MHz for 13C NMR), Bruker AV 400 (400.13 MHz for 1H NMR and 100.61 MHz for 13C NMR), or Agilent MR400 (400.13 MHz for 1H NMR and 100.61 MHz for 13C NMR) instruments. Chemical shifts, δ, are in ppm and referenced to the internal solvent signals. NMR data were processed using MestReNova software (version 14.3.3). ESI mass spectra were acquired in positive and negative modes as required, using a Micromass TOF mass spectrometer interfaced to a Waters 2690 HPLC or a Bruker micrOTOF-Q III spectrometer interfaced to a Dionex UltiMate 3000 LC.

General Procedure for the Synthesis of STrt Thioesters 1a-1k (GP-1)

To a solution of carboxylic acid (1 equiv., 0.2 M) in anhydrous DCM under argon was added 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (1.2 equiv.) and the solution was stirred at 0 °C for 10 min. Then, 4-dimethylaminopyridine (0.2 equiv.) and triphenylmethanethiol (1 equiv.) were added and the mixture was stirred at room temperature for 18 h. The solvent was evaporated in vacuo and the residue obtained was purified directly by silica gel flash chromatography (n-hexane/EtOAc gradient) to afford the desired compound.

Dethiocarboxylation Using UV Irradiation for Optimization Table

Thioester 1a (50 mg) was dissolved in DCM (1 mL), and then TES (20 equiv.) and TFA (25% v/v) were added and the mixture was stirred at rt for 5 min under argon. The reaction mixture was concentrated in vacuo and dried to afford the thioacid Fmoc-Phe-SH. Then, Fmoc-Phe-SH (1 equiv., 0.1 M), DPAP (0.2 equiv.) and 1,3,5-trimethoxybenzene (1 equiv.) were dissolved in EtOAc and stirred at room temperature under UV irradiation for 5/15 min. The solvent was removed in vacuo and the sample was analyzed by 1H NMR.

Dethiocarboxylation Using Blue LED Irradiation for Optimization Table

Thioester 1a (50 mg) was dissolved in DCM (1 mL), and then TES (20 equiv.) and TFA (25% v/v) were added and the mixture was stirred at rt for 5 min under argon. The reaction mixture was concentrated in vacuo and dried to afford the thioacid Fmoc-Phe-SH. Then, Fmoc-Phe-SH (1 equiv., 0.1 M), Eosin Y (0.01 or 0.25 equiv.) and 1,3,5-trimethoxybenzene (1 equiv.) were dissolved in EtOAc and stirred at room temperature under blue LED irradiation for 15 min/1 h. The solvent was removed in vacuo and the sample was analyzed by 1H NMR.
Scheme 3. Scope of the dethiocarboxylation and isolated yields. a Deprotection conditions for each compound are outlined in Section 3.4. b Yield determined by 1H NMR spectroscopy (400 MHz, DMSO-d6) of the crude reaction mixture using triphenylmethanethiol as an internal reference. nr = no reaction.

Table 1. Optimization of reaction conditions.

General procedure used to synthesize thioacids.
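Since the footnote above reports yields determined by quantitative 1H NMR against an internal reference, the following sketch shows the generic arithmetic behind such a determination. All numerical values, the choice of signals, and the proton counts are illustrative assumptions; the excerpt only states that triphenylmethanethiol was the internal reference.

```python
# Illustrative NMR yield calculation against an internal standard.
# The paper uses triphenylmethanethiol as the reference, but the actual integrals,
# signals compared, and amounts are not given in this excerpt, so these are made up.
ref_mmol = 0.10          # mmol of internal reference added (assumed)
ref_integral = 1.00      # normalized integral of the chosen reference signal (assumed)
ref_protons = 1          # protons giving rise to that reference signal (assumed)

prod_integral = 1.50     # integral of a well-resolved product signal (assumed)
prod_protons = 2         # protons behind that product signal (assumed)

prod_mmol = ref_mmol * (prod_integral / prod_protons) / (ref_integral / ref_protons)
theoretical_mmol = 0.10  # mmol of starting thioacid (assumed)

print(f"NMR yield: {100 * prod_mmol / theoretical_mmol:.0f}%")
```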
v3-fos-license
2019-05-11T10:13:16.119Z
2020-01-10T00:00:00.000
157065039
{ "extfieldsofstudy": [ "Mathematics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/aaa/2020/7985298.pdf", "pdf_hash": "22e5aff49e31b356f04340456a64966c983ab4f2", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:289", "s2fieldsofstudy": [ "Mathematics" ], "sha1": "81bdca379d4bd554ef029955977b526fac1867fb", "year": 2020 }
pes2o/s2orc
On a Partial q-Analog of a Singularly Perturbed Problem with Fuchsian and Irregular Time Singularities A family of linear singularly perturbed difference differential equations is examined. )ese equations stand for an analog of singularly perturbed PDEs with irregular and Fuchsian singularities in the complex domain recently investigated by A. Lastra and the author. A finite set of sectorial holomorphic solutions is constructed by means of an enhanced version of a classical multisummability procedure due to W. Balser. )ese functions share a common asymptotic expansion in the perturbation parameter, which is shown to carry a double scale structure, which pairs q-Gevrey and Gevrey bounds. Introduction In this work, we focus on singularly perturbed linear partial qdifference differential equations which couple two categories of operators acting both on the time variable, so-called qdifference operators of irregular type and Fuchsian differential operators.As a seminal reference concerning analytic and algebraic aspects of q-difference equations with irregular type, refer [1], and for a far reaching investigation of Fuchsian ordinary and partial differential equations, refer [2]. Our equations are presented in the following manner + P t, z, ϵ, t k σ q;t , tz t , z z u(t, z, ϵ) for vanishing initial data u(0, z, ϵ) ≡ 0, where k, δ D , m D ≥ 1 are integers, σ q;t represents the dilation map t ⟶ qt acting on time t for some real number q > 1, and Q(X), R D (X) stand for polynomials in C[X].e main block P(t, z, ϵ, V 1 , V 2 , V 3 ) is polynomial in the arguments t, V 1 , V 2 , V 3 , holomorphic in the perturbation parameter ϵ on a disc D(0, ϵ 0 ) ⊂ C centered at 0 and in the space variable z on a horizontal strip of the form H β � z ∈ C/|Im(z)| < β , for some β > 0. e forcing term f(t, z, ϵ) is analytic relatively to (z, ϵ) ∈ H β × D(0, ϵ 0 ) and defines an entire function w.r.t t in C with (at most) q-exponential growth (see (60), for precise bounds). is paper is a natural continuation of the study [3] by Lastra and Malek and will share the same spine structure.Indeed, in [3], we aimed attention at the next problem + H z, ϵ, t k+1 z t , tz t , z z y(t, z, ϵ) for vanishing initial data y(0, z, ϵ) ≡ 0, where Q(X), R D (X), H(z, ϵ, V 1 , V 2 , V 3 ) stand for polynomials in their arguments X, V 1 , V 2 , V 3 as above and h(t, z, ϵ) is like the forcing term f(t, z, ϵ) but with (at most) exponential growth in t.Under convenient conditions put on the shape of (2), we are able to construct a set of genuine bounded holomorphic solutions expressed as a Laplace transform of order k along a halfline and Fourier inverse integral in space z: where the Borel/Fourier map V p (τ, m, ϵ) is itself set forth as a Laplace transform of order k ′ � kδ D /m D : where W p (u, m, ϵ) has (at most) exponential growth along L c p and exponential decay in phase m on R. e resulting maps y p (t, z, ϵ) are therefore expressed as iterated Laplace transforms following a so-called multisummability procedure introduced by Balser, see [4].ese functions define bounded holomorphic functions on domains T × H β × E p for a well-selected bounded sector T at 0 and where E � E p 0≤p≤ς− 1 is a set of sectors which covers a full neighborhood of 0 and is called a good covering (cf.Definition 6).Additionally, the partial maps ϵ ⟼ y p (t, z, ϵ) share on E p a common asymptotic expansion y(t, z, ϵ) � n≥0 y n (t, z)ϵ n with bounded holomorphic coefficients y n (t, z) on T × H β . 
is asymptotic expansion turns out to be (at most) of Gevrey order 1/κ with κ � kk ′ /(k + k ′ ), meaning that we can single out two constants C p , M p > 0 such that sup t∈T,z∈H β y p (t, z, ϵ) − n− 1 m�0 y m (t, z)ϵ m for all n ≥ 1, all ϵ ∈ E p .We plan to obtain a similar statement for the problem under study (1).Namely, we will construct a set of genuine sectorial solutions to (1) and describe their asymptotic expansions as ϵ borders the origin.We first notice that our main problem (1) can be seen as a q-analog of (2), where the irregular differential operator t k+1 z t is replaced by the discrete operator t k σ q;t . is terminology originates from the basic observation that the expression f(qt) − f(t)/(qt − t) approaches the derivative f ′ (t) as q tends to 1. Here, as mentioned in the title, we qualify the q-analogy as partial since the Fuchsian operator tz t is not discretized in the process.is suggests that, in the building procedure of the solutions (that will follow the same guideline as in [3]), the classical Laplace transform of order k shall be supplanted by a q-Laplace transform of order k as it was the case in the previous work [5] of the author where a similar problem was handled.However, due to presence of the Fuchsian operator tz t , we will see that a single q-Laplace transform is not enough to construct true solutions and that a new mechanism of iterated q-Laplace and classical Laplace transforms is required.Furthermore, we witness that this enhanced multisummability procedure has a forthright effect on their asymptotic expression w.r.t ϵ.Namely, the expansions in the perturbation parameter are neither of classical Gevrey order as displayed in (3) nor of q-Gevrey order 1/k as in [5] (meaning that Γ(1 + (n/κ)) has to be replaced by q n 2 /2k in the control term of (3)).e asymptotic expansions we exhibit present a double scale structure which has a q-Gevrey leading part with order 1/k and a subdominant tail of Gevrey order 1/kδ D , that we call Gevrey asymptotic expansion of mixed order (1/kδ D ; (q, 1/k)) (cf.Definition 8).Such a coupled asymptotic structure has already been observed in another setting by Lastra et al. in [6].Indeed, we considered linear q-difference differential Cauchy problems with the shape tσ q;t r 2 zz z r 1 z S z X(t, z) � B z, tσ q;t , σ q − 1 ;z , z z X(t, z), (6) for suitably chosen analytic Cauchy data and properly selected complex number q ∈ C * with |q| > 1, where r 1 ≥ 0 and r 2 , S ≥ 1 are integers and B stands for a polynomial.When r 1 ≥ 1, the Fuchsian operator (zz z ) r 1 is responsible of the classical Gevrey part of the asymptotic expansion X(t, z) � n≥0 X n (z)t n of the true solution X(t, z) which is shown to be of mixed order (r 1 /r 2 ; (q, 1)) (in the sense of Definition 8) outside some q-spiral λq Z for some λ ∈ C * w.r.t t near 0, uniformly in z in the vicinity of the origin.Here, the solutions are expressed through a single q-Laplace transform and the Γ((r 1 /r 2 )n) contribution in the asymptotics emerges from a discrete set of singularities that accumulates at 0 in the Borel plane. It is worthwhile mentioning that the approach which consists in building solutions by means of iterated q-Laplace and Laplace transforms stems from a new work by Yamazawa.In [7], he examines linear q-difference differential equations of the form L t, σ q;t , z x u(t, x) � f(t, x), (8) for the given holomorphic forcing term f(t, x) near the origin and where L(t, V 1 , V 2 ) is a polynomial in V 1 , V 2 with holomorphic coefficients w.r.t t near 0. 
Under special conditions on the structure of (4), he is able to construct a genuine solution u(t, x) obtained as a small perturbation of iterated truncated q-Laplace and Laplace of order 1 transforms of the iterated Borel and q-Borel transforms of a formal solution u(t, x) � k≥1 u k (x)t k of (4).Furthermore, he gets in particular that u(t, x) has u(t, x) as asymptotic expansion of mixed order (1; (q, 1)) w.r.t t, uniformly in x near 0. Notice that in our paper, the solutions are built up as complete iterated q-Laplace and classical Laplace transforms that are shown to be exact solutions of our problem (1). is is why the process we follow can actually be understood as an enhanced version of the multisummation mechanism introduced by Balser, see [4]. In a larger framework, this work is a contribution to the promising and fruitful realm of research in q-difference and q-difference-differential equations in the complex domain.For recent important advances in this area, we mention in particular the works by Tahara and Yamazawa [8][9][10].Notice that the fields of applications of q-difference equations have also encountered a rapid growth in the last years. 2 Abstract and Applied Analysis Some forefront studies in this respect are given, for instance, by [11][12][13] and references therein.Now, we describe a little more precisely our main results obtained in eorems 1 and 3. Namely, under convenient restrictions on the shape of (1) detailed in the statement of eorem 1, we can manufacture a family of bounded holomorphic solutions u p (t, z, ϵ) on domains T × H β × E p for a suitable bounded sector T at 0, H β a strip of width β > 0, and E p belonging to a good covering in C * , which can be displayed as a q-Laplace transform of order k along a halfline L c p � R + exp( �� � − 1 √ c p ) and Fourier integral: e q-Borel/Fourier map W d p (u, m, ϵ) is itself shaped as a classical Laplace transform of order kδ D along L c p : where w d p (h, m, ϵ) has (at most) q-exponential growth of some order 0 < k 1 < k along L c p (see ( 180)) and exponential decay in phase m ∈ R. In eorem 3, we explain the reason for which all the partial functions ϵ ⟼ u p (t, z, ϵ) share a common asymptotic expansion u(t, z, ϵ) � m≥0 h m (t, z)ϵ m on E p with bounded holomorphic coefficients h m (t, z) on T × H β , which turns out to be of mixed order (1/kδ D ; (q, 1/k)). is last result leans on a new version of the classical Ramis-Sibuya theorem fitting the above asymptotics, which is fully expounded in eorem 2. Our paper is arranged as follows. In Section 2, we recall the definition of the classical Laplace transform and its q-analog.We also put forward some classical identities for the Fourier transform acting on functions spaces with exponential decay. In Section 3, we set forth our main problem (33) and we discuss the formal steps leading to its resolution.Namely, a first part is devoted to the inquiry of solutions among q-Laplace transforms of order k and Fourier inverse integrals of Borel maps W with q-exponential growth on unbounded sectors and exponential decay in phase leading to the first main integrodifferential q-difference equation (69) that W is asked to fulfill.A second undertaking suggests to seek for W, as a classical Laplace transform of suitable order kδ D of a second Borel map w with again appropriate behaviour.e expression w is then contrived to solve a second principal integro q-difference equation (81). 
In Section 4, bounds for linear convolution and q-difference operators acting on Banach spaces of functions with q-exponential growth are displayed.e second key equation (81) is then solved within these spaces at the hand of a fixed point argument. In Section 5, genuine holomorphic solutions W of the first principal auxiliary equation (69) are built up and sharp estimates for their growth are provided (cf.( 146) and (147)). In Section 6, we achieve our goal in finding a set of true holomorphic solutions (176) to our initial problem (33). In Section 7, the existence of a common asymptotic expansion of the Gevrey type with mixed order (1/kδ D ; (q, 1/k)) is established for the solutions set up in Section 6. e decisive technical tool for its construction is detailed in eorem 2. Transforms of Order k, and Fourier Inverse Maps Let k ′ ≥ 1 be an integer.We remind the reader the definition of the Laplace transform of order k ′ as introduced in [14]. as some unbounded sector with bisecting direction d ∈ R and aperture 2δ > 0 and D(0, ρ) as a disc centered at 0 with radius ρ > 0. Consider a holomorphic function w: S d,δ ∪ D(0, ρ) ⟶ C that vanishes at 0 and withstands the bounds. ere exist C > 0 and K > 0 such that for all τ ∈ S d,δ .We define the Laplace transform of w of order k ′ in the direction d as the integral transform { }, where c depends on Tand is chosen in such a way that cos(k ′ (c − arg(T))) ≥δ 1 , for some fixed real number δ 1 > 0. e function L d k′ (w)(T) is well defined, holomorphic, and bounded on any sector: where 0 < θ < (π/k ′ ) + 2δ and 0 < R < δ 1 /K.If one sets w(τ) � n≥1 w n τ n , the Taylor expansion of w, which converges on the disc D(0, ρ/2), the Laplace transform as Gevrey asymptotic expansion of order 1/k ′ . is means that for all 0 < θ 1 < θ, two constants C, M > 0 can be selected with the bounds: (11), its Laplace transform L d k′ (w)(T) does not depend on the direction d in R and represents a Abstract and Applied Analysis bounded holomorphic function on D(0, R 1/k′ ), whose Taylor expansion is represented by the convergent series X(T) � n≥1 w n Γ(n/k ′ )T n on D(0, R 1/k′ ). Let k ≥ 1 be an integer and q > 1 be a real number.At the next stage, we display the definition of the q-Laplace transform of order k which was used in a former work of Malek [5]. Let us first recall some essential properties of the Jacobi eta function of order k defined as the Laurent series: for all x ∈ C * . is analytic function can be factorized as a product known as Jacobi's triple product formula: for all x ∈ C * , from which we deduce that its zeros are the set of real numbers − q m/k /m ∈ Z .We recall the next lower bound estimates on a domain, bypassing the set of zeroes of Θ q 1/k (x), from [5] Lemma 3, which are crucial in the sequel. ere exists a constant C q,k > 0 depending on q, k and independent of Δ such that Definition 2. Let ρ > 0 be a real number and S d be an unbounded sector centered at 0 with bisecting direction d ∈ R. Let f: D(0, ρ) ∪ S d ⟶ C be a holomorphic function, continuous on the adherence D(0, ρ), such that there exist constants K, α > 0 and δ > 1 with where c is a halfline in the direction c. e following lemma is a slightly modified version of Lemma 4 from [5]. Lemma 2. 
Let Δ > 0 chosen as in Lemma 1 above.e integral transform L c q;1/k (f(x))(T) defines a bounded holomorphic function on the domain R c,Δ ∩ D(0, r 1 ) for any radius 0 < r 1 ≤ q − (1/k)(α+1) /2, where Notice that the value due to the Cauchy formula.e next lemma describes conditions under which the q-Laplace transform defines a convergent series near the origin.Lemma 3. Let f: C ⟶ C be an entire function with Taylor expansion f(x) � n≥1 f n x n fulfilling bound (19) for all x ∈ C. en, its q-Laplace transform of order k, L d q;1/k (f)(T), does not depend on the direction d ∈ R and represents a bounded holomorphic function on D(0, r 1 ) with the restriction 0 < r 1 ≤ q − (1/k)(α+1) /2 whose Taylor expansion is given by the convergent series Y(T) � n≥1 f n q n(n− 1)/2k T n . Proof. e proof is a direct consequence of the next formulas: , where the last equality follows (for instance) from identity (4.7) from [15], for all n ≥ 1. We restate the definition of some family of Banach spaces mentioned in [14]. We set E (β,μ) as the vector space of continuous functions h: R ⟶ C such that is finite.e space E (β,μ) endowed with the norm ‖.‖ (β,μ) becomes a Banach space.Finally, we remind the reader the definition of the inverse Fourier transform acting on the latter Banach spaces and some of its handy formulas relative to derivation and convolution product as stated in [14].Definition 4. Let f ∈ E (β,μ) with β > 0 and μ > 1. e inverse Fourier transform f is given by for all x ∈ R. e function F − 1 (f) extends to an analytic bounded function on the strips for all given 0 < β ′ < β. (a) Define the function m ⟼ ϕ(m) � imf(m) which belongs to the space E (β,μ− 1) .en, the next identity occurs.(b) Take g ∈ E (β,μ) and set 4 Abstract and Applied Analysis as the convolution product of f and g. Layout of the Principal Initial Value Problem and Associated Auxiliary Problems We set k ≥ 1 as an integer.Let m D , δ D ≥ 1 be integers.We set We consider a finite set I of N 3 that fulfills the next feature, whenever (l 0 , l 1 , l 2 ) ∈ I and we set nonnegative integers Δ l ≥ 0 with and l ∈ I be polynomials such that for all m ∈ R, all l ∈ I. We consider a family of linear singularly perturbed initial value problems for vanishing initial data u(0, z, ϵ) ≡ 0. Here, q > 1 stands for a real number and the operator σ q;t is defined as the dilation by q acting on the variable t through σ q;t u(t, z, ϵ) � u(qt, z, ϵ). e coefficients c l (z, ϵ) are built in the following manner.For each l ∈ I, we consider a function m ⟼ C l (m, ϵ) that belongs to the Banach space E (β,μ) for some β, μ > 0, which depends holomorphically on the parameter ϵ on some disc D(0, ϵ 0 ) with radius ϵ 0 > 0 and for which one can find a constant C l > 0 with sup ϵ∈D 0,ϵ 0 ( ) We construct as the inverse Fourier transform of the map C l (m, ϵ) for all l ∈ I.As a result, c l (z, ϵ) is bounded holomorphically w.r.t ϵ on D(0, ϵ 0 ) and w.r.t z on any strip H β′ for 0 < β ′ < β in view of Definition 4. e presentation of the forcing term requires some preliminary groundwork.We consider a sequence of functions m ⟼ ψ n (m, ϵ), for n ≥ 1, that belongs to the Banach space E (β,μ) with the parameters β, μ > 0 given above and which relies analytically and is bounded w.r.t ϵ on the disc D(0, ϵ 0 ).We assume that the next bounds, sup ϵ∈D 0,ϵ 0 ( ) hold for all n ≥ 1 and given constants K 0 , T 0 > 0. We define the formal series for some real number 0 < k 1 < k.We introduce the next Banach space. Definition 5. 
Let k 1 , β, μ, r, α > 0 and q, δ > 1 be real numbers.Let U d be an open unbounded sector with bisecting direction d ∈ R centered at 0 in C. We denote Exp the vector space of complex valued continuous functions (u, m) ⟼ h(u, m) on the adherence U d ∪ D(0, r) × R, which are holomorphic w.r.t u on U d ∪ D(0, r) and such that the norm is finite.One can check that the normed space (Exp q (k 1 ,β,μ,α,r) , ‖.‖ (k 1 ,β,μ,α,r) ) represents a Banach space. Remark 1. e spaces above are faint modifications of the Banach spaces already introduced in the works of Dreyfus and Lastra [16][17][18]. e next lemma is a proper adjustment of Lemma 5 out of [5] to the new Banach spaces from Definition 5. Lemma 4. Let T 0 be fixed as in (36).We take a number α > 0 such that Let k 1 , β, μ be chosen as above.en, the function (u, m) ⟼ ψ(u, m, ϵ) belongs to the Banach space Exp q (k 1 ,β,μ,α,r) for any unbounded sector U d , any disc D(0, r).Moreover, one can find a constant C 1 > 0 (depending on q, k 1 , α, T 0 ) with Proof.ound (36) implies that Abstract and Applied Analysis (41) According to the elementary fact that the polynomial h(x) � x(n − 1 − α) − (k 1 /2)(x 2 /log(q)) admits its maximum value (log(q)/2k 1 )(n − 1 − α) 2 at x � (log(q)/k 1 ) (n − 1 − α), we deduce by means of the change of variable for all n ≥ 1. erefore, we deduce that which converges, provided that (39) holds, whenever ϵ ∈ D(0, ϵ 0 ).We define as the Laplace transform of ψ(u, m, ϵ) w.r.t u of order k ′ in direction d ∈ R. Notice that two constants K 1 , K 2 > 0 (depending on k 1 , q, α, δ, k ′ ) can be found such that for all u ∈ C. As a result, owing to bound (40) and the last part of Definition 1, we deduce that Ψ d does not depend on the direction d and can be written as a convergent series: w.r.t τ near the origin.Now, we fix some real number k 2 such that 0 for all n ≥ 1. is inequality is a consequence of the Stirling formula, which states that as x tends to +∞ and from the existence of a constant K 3 > 0 (depending on q, k 1 , k 2 , k ′ ) with for all n ≥ 1.Consequently, it turns out that Ψ d (τ, m, ϵ) represents an entire function w.r.t τ such that for all τ ∈ C. Furthermore, owing to bound (42), we know that for all n ≥ 1, all τ ∈ C. Henceforth, we get the next global bounds provided that for all τ ∈ C, m ∈ R, and ϵ ∈ D(0, ϵ 0 ).Next, we set as the q-Laplace transform of Ψ d (u, m, ϵ) w.r.t u of order k in direction d and Fourier inverse integral w.r.t m.We put for all n ≥ 1.We first provide bounds for this sequence of functions.Namely, we can get a constant C μ,β,β′ > 0 (relying on μ, β, β ′ ) with 6 Abstract and Applied Analysis for all n ≥ 1, whenever ϵ ∈ D(0, ϵ 0 ) and z belongs to the horizontal strip H β′ for some 0 < β ′ < β (see Definition 4).Owing to Lemma 3, we deduce that the function F d (T, z, ϵ) converges near the origin w.r.t T, where it carries the next Taylor expansion: for all ϵ ∈ D(0, ϵ 0 ) and z ∈ H β′ .In particular, the function F d is independent of the direction d chosen. 
We now show that F d (T, z, ϵ) represents an entire function w.r.t T and supply explicit upper bounds.Namely, in accordance with (47), we obtain Again, estimate (42) yields for all T ∈ C, all n ≥ 1, where κ 2 > 0 is defined by 1/κ 2 � (1/k 2 ) − (1/k).By gathering the two last above inequalities, the next global estimates can be figured out: for all T ∈ C, all z ∈ H β′ and ϵ ∈ D(0, ϵ 0 ), provided that Lastly, we define the forcing term f as a time rescaled version of F d , that represents a bounded holomorphic function w.r.t z ∈ H β′ and ϵ ∈ D(0, ϵ 0 ) and an entire function w.r.t t with q-exponential growth of order κ 2 . roughout this paper, we are looking for time rescaled solutions of (33) of the form u(t, z, ϵ) � U(ϵt, z, ϵ). (63) As a consequence, the expression U(T, z, ϵ), through the change of variable T � ϵt, is asked to solve the next singular problem: At the onset, we seek for a solution U(T, z, ϵ) that can be expressed as an integral representation via a q-Laplace transform of order k and Fourier inverse integral: where the inner integration is performed along a halfline Overall this section, we assume that the partial functions u ⟼ W(u, m, ϵ) have at most qexponential growth of order k on some unbounded sector S d centered 0 with bisecting direction d and m ⟼ W(u, m, ϵ) belong to the Banach space E (β,μ) mentioned in Definition 3, whenever ϵ ∈ D(0, ϵ 0 ).Precise bounds will be given later in Section 5. Here, we assume that L c ⊂ S d ∪ 0 { }.Our aim is now the presentation of a related problem fulfilled by the expression W(u, m, ϵ).We first need to state two identities which concern the action of q-difference and Fuchsian operators on q-Laplace tranforms. □ Lemma 5. e actions of the q-difference operators T l 0 σ l 1 q;T for integers l 0 , l 1 ≥ 0 and the Fuchsian differential operator Tz T are given by Proof. e first identity is a direct consequence of the commutation formula (76) displayed in Proposition 6 from [5].For the second, a derivation under the integral followed by an integration by parts implies the sequence of equalities: Abstract and Applied Analysis from which the forecast formula follows since the map u ⟼ W(u, m, ϵ) is assumed to possess a growth of q-exponential order k and vanishes at u � 0. e application of the above identities (66) and (67) in a row with ( 26) and ( 28) leads to the first integrodifferential qdifference equation fulfilled by the expression W(u, m, ϵ) as long as U c (T, z, ϵ) solves (64): We turn now to the second stage of the procedure.Solutions of this latter equation are expected to be found in the class of Laplace transforms of order k ′ since by construction Ψ d (τ, m, ϵ) owns this structure after (44).Namely, we take for granted that where { } where U d represents an unbounded sector centered at 0 with bisecting direction d.Within this step, we assume that the expression (u, m) ⟼ w(u, m, ϵ) belongs to the Banach space Exp q (k 1 ,β,μ,α,r) introduced in Definition 5, for all ϵ ∈ D(0, ϵ 0 ), where the constants k 1 , β, μ, and α are selected accordingly to the construction of the forcing term f(t, z, ϵ). e next lemma has already been stated in our previous work [3].□ Lemma 6.For all integers l ≥ 1, positive integers a q,l ≥ 1 and 1 ≤ q ≤ l can be found such that With the help of this last expansion, equation ( 69) can be recast in the form Abstract and Applied Analysis is last prepared shape allows us to apply the next lemma that repharses formula (8.7) p. 
3630 from [19], in order to express all differential operators appearing in (72) in terms of the most basic one τ k′+1 z τ . By convention, we take for granted that the above sum Indeed, by construction of the finite set I, we can represent the next integers in a specific way: where e h,l 0 � l 0 − hk ′ ≥ 1 for all (l 0 , l 1 , l 2 ) ∈ I and 1 As a consequence, we can further expand the next piece of (72) in its final convenient form: and we can remodel equation ( 72) in such a way that it contains only primitive building blocs: Similarly to our previous technical Lemma 5, we disclose some useful commutation formulas dealing with the actions of the basic irregular operator τ k′+1 z τ , multiplication by monomials τ m′ , and of the q-difference operator σ q;τ , Lemma 8 (1) e action of the differential operators τ k′+1 z τ on W(τ, m, ϵ) is given by (2) Let m ′ ≥ 1 be an integer.e action of the multiplication by τ m′ on W(τ, m, ϵ) is described through the next formula: (3) Let c ∈ Z be an integer.e action of the operator σ c q;τ is represented through the following integral transform: Proof. e first two formulas have already been given in our previous works [3,20].We focus on the third equality.By definition, Abstract and Applied Analysis and if one deforms the path of integration L c through u � q c v which keeps the path invariant since q c ∈ R + , we get formula (79).Departing from the arranged equation ( 76) with the help of Lemma 8, we can exhibit an ancillary problem satisfied by the expression w(u, m, ϵ), □ An Integral q-Difference Equation with Complex Parameter e objective of this section is the construction of a unique solution of equation (81) just established overhead.is solution will be built among the Banach space displayed in Definition 5. Within the next three propositions, continuity of linear convolutions and q-difference operators acting on Exp q (k 1 ,β,μ,α,r) is discussed.Proposition 1.Let k ′ ≥ 1 be an integer and c 1 > 0 and c 2 , c 3 be real numbers such that k ′ (c 2 + c 3 + 2) is an integer that are submitted to the next constraint: for all u ∈ U d ∪ D(0, r), for some constant M c 1 > 0. en, the linear function represents a continuous map from the Banach space Exp q (k 1 ,β,μ,α,r) into itself.In other words, some constant M 1 > 0 (depending on k ′ , c 1 , c 2 , c 3 ) can be found with for all u ∈ U d ∪ D(0, r), whenever m ∈ R. By definition, the next upper bounds Abstract and Applied Analysis for all u ∈ U d ∪ D(0, r), all m ∈ R.Under condition (82), the expected bound (85) follows. □ Proposition 2. Let a ∈ C be a complex number, c 1 ≥ 0 be an integer, and c 2 ≥ 0 be a real number withstanding the following condition: en, we can sort a constant M 2 > 0 (depending on c 1 , c 2 , q, α, δ, r, a) with Proof. e proof is proximate to the one of Proposition 1 in [5] and similar to the one of Proposition 1 from [18].We provide, however, a complete proof for the sake of a better readability. 
Let f(u, m) belong to Exp r) .By definition, we can perform the next factorization Since the contractive map u ⟼ u/q c 2 keeps the domain U d ∪ D(0, r) invariant, we deduce with M 2 � (1/q c 2 )sup u∈U d ∪D(0,r) A(u), where We observe that where In the remaining part of the proof, we show that M 2.2 is also finite.We first need to rearrange the pieces of A(u).Namely, we expand Abstract and Applied Analysis Since log(1 + x) ∼ x as x ⟶ 0, we get two constants A 1 , A 2 ∈ R (depending on r, δ, q, c 2 ) with for all u ∈ U d , |u| > r.Gathering (95) and ( 96) with (97) gives rise to the bound which is finite owing to (89). (99) for some constant M b > 0. en, there exists a constant M 3 > 0 (depending on Q, R, and μ) such that whenever f belongs to E (β,μ) and g belongs to Exp Proof. e proof shares the same ingredients as the one of Proposition 2 of [3].Again, we give a thorough explanation of the result.We take f inside E (β,μ) and select g belonging to Exp q (k 1 ,β,μ,α,r) .We first recast the norm of the convolution operator as follows: where 12 Abstract and Applied Analysis By construction of the polynomials Q and R, one can sort two constants Q, R > 0 with for all m, m 1 ∈ R. As a consequence of (102), (104), and (100), with the help of the triangular inequality |m| ≤ |m 1 | + |m − m 1 |, we are led to the bounds where is a finite constant under the first and last restriction of (99) according to the estimates of Lemma 2.2 from [21] or Lemma 4 of [22]. We disclose now additional assumptions on the leading polynomials Q(X) and R D (X).ese requirements will be essential in the transformation of our main problem (81) into a fixed point equation, as explained later in Proposition 4. With this respect, the guideline is close to our previous study [3].Namely, we assume the existence of an unbounded sectorial annulus: where direction d Q,R D ∈ R and aperture η Q,R D > 0 for some given inner radius r Q,R D > 0 with the feature: We consider the next polynomial: In the following, we need lower bounds of the expression P m (u) with respect to both variables m and u.In order to achieve this goal, we can factorize the polynomial w.r.t u, namely, where its roots q l (m) can be displayed explicitely as for all 0 ≤ l ≤ k′m D − 1, for all m ∈ R. We set an unbounded sector U d centered at 0, a small disc D(0, r), and we adjust the sector S Q,R D in a way that the next condition holds.A constant M > 0 can be chosen with Indeed, inclusion (108) implies in particular that all the roots q l (m), 0 ≤ l ≤ k ′ m D − 1 remain a part of some neighborhood of the origin, i.e., satisfy |q l (m)| ≥ 2r for an appropriate choice of r > 0. Furthermore, when the aperture η Q,R D > 0 is taken close enough to 0, all these roots q l (m) stay inside a union U of unbounded sectors centered at 0 that do not cover a full neighborhood of 0 in C * .We assign a sector U d with By construction, the quotients q l (m)/u live outside some small disc centered at 1 in C for all u ∈ U d , m ∈ R, 0 ≤ l ≤ k ′ m D − 1. en, (112) follows. We are now ready to supply lower bounds for P m (u).□ Lemma 9. A constant C P > 0 (depending on k, k ′ , m D , q, M) can be found with Proof.Departing from factorization (110), the lower bound (112) entails for all u ∈ U d ∪ D(0, r). e next proposition discusses sufficient conditions under which a solution w d (u, m, ϵ) of the main integral qdifference equation (81) can be built up in the space Exp q (k 1 ,β,μ,α,r) . □ Proposition 4. 
Let us assume the next extra requirements: Abstract and Applied Analysis for all l � (l 0 , l 1 , l 2 ) ∈ I. Furthermore, for each l � (l 0 , l 1 , l 2 ) ∈ I, we set an integer p l 0 ,l 1 such that and we take for granted that holds.en, for an appropriate choice of the constants C l > 0 (see (34)) that need to be taken close enough to 0 for all l ∈ I, a constant ϖ > 0 can be singled out in a manner that equation (81) gets a unique solution (u, m) ⟼ w d (u, m, ϵ) in the space Exp q (k 1 ,β,μ,α,r) with the condition: whenever ϵ ∈ D(0, ϵ 0 ), where U d ,r are chosen as above and k 1 , β, μ, α are specified in Section 3 on the way to the construction of the forcing term f(t, z, ϵ). Proof. e proof relies strongly on the next lemma which discusses contractive properties of a linear map. □ Lemma 10.For all ϵ ∈ D(0, ϵ 0 ), we define the map H ϵ as Under the additional requirement ( 116)-( 118), one can select the constants C l > 0, for l ∈ I, and a real number ϖ > 0 in a way that this map acts on some neighborhood of the origin of the space Exp q (k 1 ,β,μ,α,r) in the following way: (i) e inclusion holds, where B(0, ϖ) stands for the closed ball of radius ϖ centered at 0 in Exp r) , for all ϵ ∈ D(0, ϵ 0 ).(ii) e map H ϵ is contractive, namely, whenever w 1 , w 2 ∈ B(0, ϖ), for all ϵ ∈ D(0, ϵ 0 ). Proof.We first control the forcing term.Owing to bound (76) in Lemma 4, together with (114), we can exhibit a constant C 1 > 0 (relying on q, k 1 , α, T 0 ) with where K 0 > 0 is a constant that is set in (36), whenever ϵ ∈ D(0, ϵ 0 ).We deal with the first property (121).Let us take w(τ, m) in Exp q (k 1 ,β,μ,α,r) under the constraint ‖w(τ, m)‖ (k 1 ,β,μ,α,r) ≤ ϖ.We fix some complex number ω d such that ω d ∉ U d ∪ D(0, r), and we redraft the norm of the next integral expression as follows: 14 Abstract and Applied Analysis for all l ∈ I, 1 ≤ h ≤ l 2 , where p l 0 ,l 1 ≥ 0 is an integer chosen as in (117) and We observe that a constant M B > 0 (depending on q, l 0 , l 1 , p l 0 ,l 1 , k, k ′ , m D , ω d , M) can be picked up with , we obtain for all u ∈ U d ∪ D(0, r), m ∈ R where the right-hand side is finite owing to the suitable choices of ω d and p l 0 ,l 1 in (118).Under requirements (32) and ( 116), an application of Proposition 3 yields a constant M 3.1 > 0 (depending on R D , R l and μ) such that where Conditions ( 116) and (117) allow us to call back Proposition 2 in order to get a constant M 2.1 > 0 (depending on l 0 , l 1 , p l 0 ,l 1 , k, q, α, δ, r, ω d ) with where Lastly, Proposition 1 gives rise to constants M ω d ,l 0 > 0 (depending on ω d , l 0 ) and M 1.1 > 0 (depending on k ′ , l 0 , l 2 ) with By compiling (128)-(132), we obtain We now turn to the second principal pieces of H ϵ .Following the same lines of arguments as above, we obtain that Abstract and Applied Analysis where for all l ∈ I, 2 ≤ h ≤ l 2 and 1 ≤ p ≤ h − 1.In order to give bounds for J 3 , we make use of Proposition 1 which affords a constant M 1.2 > 0 (depending on k ′ , l 0 , l 2 ) with By combining ( 134) and (136), we obtain for all l ∈ I, 2 ≤ h ≤ l 2 and 1 ≤ p ≤ h − 1. In the next step, we impose the constants C l > 0, l ∈ I, to stay close enough to 0 in order that a constant ϖ > 0 can be singled out with e collection of (123), 133 and (137) submitted to condition (138) yields the inclusion (121). 
e next part of the proof is devoted to the explanation of the contractive property (122).Indeed, consider two functions w 1 (u, m) and w 2 (u, m) inside the ball B(0, ϖ) ⊂ Exp q (k 1 ,β,μ,α,r) .en, an application of the two inequalities (133) and (137) overhead leads to is time, we require the constants C l > 0, l ∈ I, to withstand the next inequality 16 Abstract and Applied Analysis (141) Owing to (139) and (140), under demand (141), we obtain (122). In conclusion, we choose the constants C l > 0, l ∈ I in order that both (138) and (141) hold conjointly. is yield Lemma 10. We go back to the core of Proposition 4. For ϖ > 0, chosen as in the lemma above, we consider the closed ball B(0, ϖ) ⊂ Exp d (k 1 ,β,μ,α,r) that stands for a complete metric space for the distance d(x, y) � ‖x − y‖ (k 1 ,β,μ,α,r) .According to the same lemma, we observe that H ϵ induces a contractive application from (B(0, ϖ), d) into itself.en, according to the classical contractive mapping theorem, the map H ϵ carries a unique fixed point that we set as w d (u, m, ϵ); meaning that that belongs to the ball B(0, ϖ), for all ϵ ∈ D(0, ϵ 0 ).Furthermore, the function w d (u, m, ϵ) depends holomorphically on ϵ in D(0, ϵ 0 ).Let the term be taken from the right to the left-hand side of (81) and then divide by the polynomial P m (u) defined in (109).ese operations allows (81) to be exactly recast into equation (142) above.Consequently, the unique fixed point w d (u, m, ϵ) of H ϵ obtained overhead in B(0, ϖ) precisely solves equation (81).□ An Integrodifferential q-Difference Equation with a Complex Parameter In this section, we build up a solution W d (τ, m, ϵ) to the integrodifferential q-difference equation (69) with the shape of a Laplace transform of order k ′ in direction d.Furthermore, we provide sharp bounds of this solution for large values of its q-Borel and Fourier variables τ and m. In the second part of the proof, we are scaled down to provide bounds for the next associated function: when x > 0 is chosen large enough.e next lemma holds. □ Lemma 11. One can select two constants for all x ≥ 2δ k′ . Proof.We first make the change of variable r � r k′ /x in the integral above: On the other hand, we need the next expansions: We cut the integral expression in two pieces: Abstract and Applied Analysis where provided that x ≥ 2δ k′ .We control the first piece E 1 (x).We observe that log(( r) 1/k′ + (δ/x 1/k′ )) ≤ 0 when r ∈ [0, (1 − (δ/x 1/k′ )) k′ ].From (154), we deduce the inequalities log 2 ( r) for all r ∈ [0, (1 − (δ/x 1/k′ )) k′ ] and x ≥ 2δ k′ .erefore, By construction, a constant E 1.2 > 0 (depending on k ′ , k 1 , δ, q) can be found with for all x ≥ 2δ k′ .In a second step, we evaluate the part E 2 (x).Expansion (154) affords us to write where Besides, we can check that there exists a constant provided that r ≥ (1 − (δ/x 1/k′ )) k′ , when x ≥ 2δ k′ .We deduce that when x ≥ 2δ k′ .Furthermore, one can sort a constant E 2.2 > 1 (depending on k ′ ) such that with when x ≥ 2δ k′ .We perform the linear change of variable h � r/2 in this latter integral Abstract and Applied Analysis in order to express it in terms of the Gamma function Γ(x). In the final part of the proof, the function W d (τ, m, ϵ) is shown to fulfill the second main equation (69).In this respect, we tread rearwards the construction discussed in Section 3. 
Indeed, according to the fact that w d (u, m, ϵ) solves (81) and appertains to the space Exp q (k 1 ,β,μ,α,r) , for a well-chosen sector U d , the three identities of Lemma 8 can be applied in order to check that W d (τ, m, ϵ) is a genuine solution of the integrodifferential-q-difference equation in prepared form (76). Ultimately, a successive play of Lemma 7 followed by Lemma 6 transforms equation ( 76) into the expected one (69).□ Construction of a Finite Set of True Sectorial Solutions to the Main Initial Value Problem We return to the first part of the formal constructions undertaken in Section 3 in view of the gain made in solving the two auxiliary problems (81) and (69) throughout Sections 4 and 5. We need to state the definition of a good covering in C * , and we introduce a fitted version of a so-called associated sets of sectors to a good covering which is analog to the one proposed in our previous work [3].Definition 6.Let ς ≥ 2 be an integer.We consider a set E of open sectors E p centered at 0, with radius ϵ 0 > 0 for all 0 ≤ p ≤ ς − 1 for which the next three properties hold: (i) e intersection E p ∩ E p+1 is not empty for all 0 ≤ p ≤ ς − 1 (with the convention that E ς � E 0 ) (ii) e intersection of any three elements of E is empty (iii) e union ∪ ς− 1 p�0 E p equals U\ 0 { } for some neighborhood U of 0 in C en, the set of sectors E is named a good covering of C * .Definition 7. We consider centered at 0 with bisecting direction d p ∈ R and small opening θ , for some integer k ′ ≥ 1 (iv) A fixed bounded sector T centered at 0 with radius r T > 0 and a disc D(0, r) suitably selected in a way that the next features are conjointly satisfied: (a) Bound (112) holds, provided that u ∈ U d p ∪ D(0, r), for all 0 ≤ p ≤ ς − 1 (b) e set S fulfills the next properties: (1) e intersection S d p ∩ S d p+1 is not empty for all 0 ≤ p ≤ ς − 1 (with the convention that ) For all 0 ≤ p ≤ ς − 1, all ϵ ∈ E p and all t ∈ T: where with Δ > 0 any fixed real number close to 0. When the above features are verified, we say that the set of data E, U, S, T, D(0, r) is admissible.We settle now the first principal result of the work.We construct a set of actual holomorphic solutions to the main initial value problem (33) defined on sectors E p , 0 ≤ p ≤ ς − 1, of a good covering in C * .Besides, we are able to monitor the difference between consecutive solutions on the intersections E p ∩ E p+1 .Theorem 1.We ask the record of requirements ( 29)-( 32), ( 34), ( 36), ( 39), ( 53), ( 61 ), (108), and (116)-(118) to hold. Let us distinguish an admissible set of data Abstract and Applied Analysis as described in the definition above.en, for a suitable choice of the constants C l > 0 (c.f. ( 34)) close enough to 0 for all l ∈ I, a collection u p (t, z, ϵ) 0 ≤ p ≤ ς− 1 of true solutions of (33) can be singled out.More precisely, each function u p (t, z, ϵ) stands for a bounded holomorphic map on the product (T ∩ D(0, σ)) × H β′ × E p for any given 0 < β ′ < β and appropriate small radius σ > 0. Additionally, u p (t, z, ϵ) is represented as a q-Laplace transform of order k and Fourier inverse integral: where where (u, m) ⟼ w d p (u, m, ϵ) belongs to the Banach space Exp q (k 1 ,β,μ,α,r) for the unbounded sector U d p , provided that ϵ ∈ D(0, ϵ 0 ). Finally, some constants A p , B p > 0 can be found with where by convention, we set u ς (t, z, ϵ) � u 0 (t, z, ϵ). Proof.We first single out an admissible set of data A. 
Under the requirements enounced in eorem 1, Proposition 5 can be called in order to find a family of functions: r), w.r.t ϵ on D(0, ϵ 0 ), and continuous relatively to m ∈ R, coming along with a constant ϖ d p > 0 such that for all u ∈ U d p ∪ D(0, r), m ∈ R, and ϵ ∈ D(0, ϵ 0 ).Furthermore, the function W d p (τ, m, ϵ) solves the first auxiliary integrodifferential q-difference equation (69) on S d p × R × D(0, ϵ 0 ) and suffers the bounds: for some constants ρ We now revisit the first stage of the formal construction from Section 3. Namely, we set the next q-Laplace transform of order k and Fourier inverse map Paying heed to the upper bound (181) and to Lemma 2 together with basic features about Fourier transforms discussed in Definition 4, we notice that U c p (T, z, ϵ) stands for (a) A bounded holomorphic function w.r.t T on a domain R d p ,Δ ∩ D(0, r 0 ) for some small radius r 0 > 0, where R d p ,Δ is described in (173) (b) A bounded holomorphic application relatively to the couple (z, ϵ) on H β′ × D(0, ϵ 0 ), for any given 0 < β ′ < β Additionally, since W d p (τ, m, ϵ) solves (69), Lemma 5 leads to the claim that U c p (T, z, ϵ) must fulfill the singular equation ( 64 In conclusion, the function defined as represents a bounded holomorphic function w.r.t t on T ∩ D(0, σ) for some σ > 0 close enough to 0, ϵ ∈ E p , z ∈ H β′ for any given 0 < β ′ < β, owing to assumption 3 of Definition 7.Moreover, u p (t, z, ϵ) solves the main initial value problem (33) on the domain In the second half of the proof, we explain bound (178).Here, we follow a similar roadmap based on path deformation arguments as in our previous work [3].Indeed, for l � p, p + 1, the partial function is holomorphic on the sector S d l .By the Cauchy theorem, we can bend each straight halfline L c l , l � p, p + 1 into the union of three curves with appropriate orientation depicted as follows: Abstract and Applied Analysis 21 (1) A halfline L c l ,r 1 � [r 1 , +∞) for a given real number r 1 > 0 (2) An arc of circle with radius r 1 denoted C r 1 ,c l ,c p,p+1 joining the point r 1 exp( ) which is taken inside the intersection S d p ∩ S d p+1 (that is assumed to be nonempty, see Definition 7, 2.1) to the halfline As a result, the difference u p+1 − u p can be decomposed into a sum of five integrals along these curves: e izm dτ τ dm e izm dτ τ dm Bounds for the first piece, are now considered.e arguments followed are proximate to the ones displayed in the proof of eorem 1 from [5]. Owing to Lemma 1 and bound (181), we obtain for all ϵ ∈ E p+1 ∩ E p , t ∈ T ∩ D(0, σ), and z ∈ H β′ .We need the next two expansions: Hence, for all ϵ ∈ E p+1 ∩ E p , t ∈ T ∩ D(0, σ), and z ∈ H β′ .We now specify estimates for some pieces of these last upper bounds.Namely, since log(1 + x) ∼ x as x tends to 0, we get a constant A 1.1 > 0 (depending on r 1 , δ) such that log(r)log 1 for all r ≥ r 1 .Since 0 < ϵ 0 < 1, we also notice that 22 Abstract and Applied Analysis exp whenever r 1 ≤ r ≤ 1 and 0 < σ < 1 together with provided that r ≥ 1.Finally, there exists a constant K k,r 1 ,q > 0 (depending on k, r 1 , q) with sup Inequality (189) together with the collection of bounds (190)-(194) yield two constants I 1.1 > 0 and for all ϵ ∈ E p+1 ∩ E p , t ∈ T ∩ D(0, σ), and z ∈ H β′ .We want to express these last bounds in terms of sequences now.e discussion hinges on the next lemma. 
Proof.By performing the change of variable x � log(|ϵ|) with the help of the computation already undertaken in Lemma 4, we obtain exp for all given ϵ ∈ C * and integer N ≥ 1. Consequently to (195) and (196), two constants I 1.3 , I 1.4 > 0 (depending on I 1.1 , I 1.2 , q, k) can be picked up with With a similar discussion, we can exhibit comparable bounds for the next term: e izm dτ τ dm . (199) Namely, two constants I 2.1 > 0 and for all ϵ ∈ E p+1 ∩ E p , t ∈ T ∩ D(0, σ), and z ∈ H β′ .Furthermore, we can single out two constants I 2.3 , I 2.4 > 0 (resting on I 2.1 , I 2.2 , q, k) such that In the next step, we turn to the first integral along an arc of circle: (202) Making use of Lemma 1 and (181), gives rise to the inequality for all ϵ ∈ E p+1 ∩ E p , t ∈ T ∩ D(0, σ), and z ∈ H β′ .We require once more the expansion: Abstract and Applied Analysis We deduce that Owing to the hypothesis 0 < ϵ 0 < 1, we check that (191) holds and bearing in mind (194), we arrive at the existence of two constants I 3.1 > 0 and I 3.2 ∈ R (depending on the constants k, q, Δ, k 2 , δ, r 1 , ρ , β, β ′ ) with for all ϵ ∈ E p+1 ∩ E p , t ∈ T ∩ D(0, σ), and z ∈ H β′ .Calling back Lemma 12 gives rise to two additional constants I 3.3 , I 3.4 > 0 (subjected to I 3.1 , I 3.2 , q, k) with for all ϵ ∈ E p+1 ∩ E p , t ∈ T ∩ D(0, σ), and z ∈ H β′ , for all given integers N ≥ 1. e second integral along an arc of circle In the remaining part of the proof, we inspect the last integral along the segment: for some fixed 0 < δ 2 < δ 1 close enough to 0 and any given positive real number Δ 2 > 0, under the convention that Proof.We first observe that all the maps u ⟼ w d p (u, m, ϵ), 0 ≤ p ≤ ς − 1, are analytic continuations on the sector U d p of a unique holomorphic function that we name u ⟼ w(u, m, ϵ) on the disc D(0, r) which suffers the same bound (180).Furthermore, the application u ⟼ w(u, m, ϵ)exp(− (u/τ) k′ )/u is holomorphic on D(0, r) when τ ∈ S d p+1 ∩ S d p , and its integral is therefore vanishing along an oriented path described as the union of (a) A segment linking 0 to (r/2)exp( where the integrations paths are two halflines and an arc of circle staying aside from the origin that are depicted as follows: We consider the first integral along a halfline in the above splitting: e direction c p+1 ′ (which might depend on τ) is properly chosen in order that for all τ ∈ S d p+1 ∩ S d p , for some fixed δ 1 > 0. Besides, let Δ 2 > 0 be any given positive real number (even close to 0). en, we can find a constant B 1.1 > 0 (depending on k 1 , k ′ , δ, q, α, r, Δ 2 ) such that for all s ≥ r/2.According to estimate (180), we obtain that for a given 0 < δ 2 < δ 1 . Onwards, we take for granted that the real number r 1 > 0 selected in the above deformation (1), (2), and (3) suffers restriction (213) and 0 < r 1 ≤ 1. Bound (212) in a row with Lemma 1 yields: where for all ϵ ∈ E p+1 ∩ E p , all t ∈ T ∩ D(0, σ) and all z ∈ H β′ .Bound control given below in (235) are now provided for this parameter depending on last integral. e ongoing reasoning leans on the next elementary lemma. □ Lemma 14 (1) e next inequality holds for all integers N ≥ 1 and all positive real numbers r > 0, where C � (1/M W p ) 1/k′ and C Γ,k′ > 0 is a constant depending on k ′ . 
Abstract and Applied Analysis Proof.For the first item (1), using the change of variable x � (1/r) k′ we observe that for all integers N ≥ 1.On the other hand, from the Stirling formula (48), we get a constant C Γ,k′ > 0 (depending on k ′ ) such that for all x ≥ 1/k ′ .Gathering ( 231) and ( 232) yields ( 229). e second item (2) can be treated in a similar way through the successive changes of variables y � r/|ϵt| and x � log(y) by using the computation already carried out in Lemma 4, whenever N ≥ 1. In the next proposition, we show that difference (178) of neighboring solutions of (33) turn out to be flat functions for which accurate bounds are displayed.□ Proposition 6.Let u p (t, z, ϵ) 0≤p≤ς− 1 be the set of actual solutions of (33) built up in eorem 1. en, we can find constants ϑ > 0 and A p , B p > 0 (which rely on A p , B p , k, k ′ , q) such that where Abstract and Applied Analysis where by convention u ς � u 0 . Proof.Let A p , B p > 0 be real numbers and k, k ′ ≥ 1 be integers.We define the function for all x > 0. Keeping in mind the Stirling formula (48), we can find two constants C, D > 0 (depending on k ′ ) with for all integers N ≥ 1.Hence, where A p,1 � A p C and B p,1 � B p D. In the next part of the proof, we exhibit explicit bounds for the function h(x).We follow a similar strategy as in the recent work [7].We select a real number ϑ 1 > 0 small enough in order that for all given x ∈ (0, ϑ 1 ), there exists a positive real number We focus on the integer N � N, where x denotes the floor function.By construction, we have N ≤ N. erefore, which implies that and hence On the other hand, we can express N in term of the variable x by means of the Lambert function.Namely, we set W(z) as the principal branch of the Lambert function defined on (− e − 1 , +∞) and which solves the functional equation for all z ∈ (− e − 1 , +∞).Since relation (243) can be recast in the form we deduce that Furthermore, owing to the paper [23], the next sharp lower bounds hold for all z > e.Finally, since N < N + 1, the above facts (246)-(250) give rise to the bounds where , where ϑ 2 � ((k ′ ) 2 log(q)/kB k′ p,1 e) 1/k′ .Finally, from ( 242) and (251), we deduce that whenever 0 < x < ϑ, which implies the forseen bounds (238) when looking back to estimate (178). Asymptotic Expansions with Double Gevrey and q-Gevrey Scales: A Related Version of the Ramis-Sibuya eorem.We first put forward the notion of asymptotic expansion with double Gevrey and q-Gevrey scales for formal power series introduced by Lastraet al. [6].Here, we need a version that involves Banach valued functions which represents a straightforward adaptation of the original setting.Definition 8. Let (F, ‖.‖ F ) be a complex Banach space.We set k, k ′ ≥ 1 as two integers and q > 1 as a real number.Let E be a bounded sector in C * centered at 0 and f: E ⟶ F be a holomorphic function.en, f is said to possess the formal series as Gevrey asymptotic expansion of mixed order (1/k ′ ; (q, 1/k)) on E if for each closed proper subsector W of E centered at 0, one can choose two constants C, M > 0 with for all integers N ≥ 0 and any ϵ ∈ W. 
In the literature, the Ramis-Sibuya theorem is known as a cohomological criterion which ensures the existence of a 28 Abstract and Applied Analysis common Gevrey asymptotic expansion of a given order for families of sectorial holomorphic functions (see [24], p.121 or [25], Lemma XI-2-6).Here, we propose a variant of this result which is adapted to the Gevrey asymptotic expansions of the mixed order disclosed in the above definition. Assume that the ensuing two requirements hold. (1) e functions G p (ϵ) are bounded on E p , for 0 ≤ p ≤ ς − 1, (2) e functions Δ p (ϵ) suffer the next sequential constraint on Z p ; there exist two constants A p , B p > 0 with In other words, Δ p (ϵ) has the null formal series 0 as Gevrey asymptotic expansion of mixed order en, all the functions G p (ϵ), 0 ≤ p ≤ ς − 1, share a common formal power series G(ϵ) ∈ F[[ϵ]] as Gevrey asymptotic expansion of mixed order (1/k ′ ; (q, 1/k)) on E p . Proof. e entire discussion leans on the following central lemma. □ Lemma 15.For all 0 ≤ p ≤ ς − 1, the cocycle Δ p (ϵ) splits, which means that bounded holomorphic functions Ψ p : E p ⟶ F can be singled out with the next feature: for all ϵ ∈ Z p , where by convention Ψ ς � Ψ 0 .Furthermore, a sequence φ m m ≥ 0 of elements in F can be built up such that for each 0 ≤ p ≤ ς − 1 and any closed proper subsector W ⊂ E p with apex at 0, one can find K p , M p > 0 with for all ϵ ∈ W, all integers M ≥ 0. Proof. e proof mimics the arguments of Lemma XI-2-6 from [25] with fitting adjustment in the asymptotic expansions of the functions Ψ p constructed by means of the Cauchy-Heine transform. For all 0 ≤ p ≤ ς − 1, we choose a segment: where by convention θ − 1 � θ ς− 1 .Let for all ϵ ∈ E p , for 0 ≤ p ≤ ς − 1, be defined as a sum of Cauchy-Heine transforms of the functions Δ h (ϵ).By deformation of the paths C p− 1 and C p without moving their endpoints and letting the other paths C h , h ≠ p − 1, p untouched (with the convention that C − 1 � C ς− 1 ), one can continue analytically the function Ψ p onto E p .erefore, Ψ p defines a holomorphic function on E p , for all 0 ≤ p ≤ ς − 1. Now, take ϵ ∈ E p ∩ E p+1 .In order to compute Ψ p+1 (ϵ) − Ψ p (ϵ), we write where the paths C p and � C p are obtained by deforming the same path C p without moving its endpoints in such a way that where for all ϵ ∈ W. Keeping in mind (256) for the special value N � m + 1 and (266), we get some constants A p , B p > 0 such that for all 0 ≤ m ≤ M. In particular, we deduce the existence of two constants A p , B p > 0 (depending on A p , B p , r 1 , q, k, k ′ ) with for all 0 ≤ m ≤ M. Indeed, recall from [24], Appendix B, that for any given real number a > 0, Γ(x)x a ∼ Γ(x + a) as x tends to +∞.Hence, a constant K k′ > 0 (depending on k ′ ) can be sort with for all m ≥ 0. Consequently, (268) follows from (267) and (269).Moreover, one can choose a positive number η > 0 (depending on W) such that |ξ − ϵ| ≥ |ξ|sin(η) for all ξ ∈ L p and all ϵ ∈ W. Bringing to mind (256) for the peculiar value N � M + 2 and (266) give rise to two constants A p , B p > 0 such that (270) For that reason, we can find constants � A p , � B p > 0 (relying on A p , B p , r 1 , q, k, k ′ , η) such that for all ϵ ∈ W. Namely, from (269) we notice that Using comparable arguments, one can give estimates of the form (265), (266), (268), and (271) for the other integrals for all h ≠ p, p − 1. 
As a consequence, for any 0 ≤ p ≤ ς − 1, there exist coefficients φ_(p,m) ∈ F, m ≥ 0, and two constants for all ϵ ∈ Z_p. Therefore, each a_p(ϵ) stands for the restriction on E_p of a global holomorphic function called a(ϵ) on D(0, r)∖{0}. Since a(ϵ) remains bounded on D(0, r)∖{0}, the origin turns out to be a removable singularity for a(ϵ) which, as a result, defines a convergent power series on D(0, r). Finally, for all integers n ≥ 1, provided that ϵ ∈ E_p. Each G_p defines a bounded holomorphic map from E_p into the Banach space F described in the statement of Theorem 3. Furthermore, bound (178) implies that the cocycle Δ_p(ϵ) = G_(p+1)(ϵ) − G_p(ϵ) fulfills the sequential bound (256) on Z_p = E_(p+1) ∩ E_p for any 0 ≤ p ≤ ς − 1. Then, Theorem 2 can be applied in order to get a formal power series G(ϵ) ∈ F[[ϵ]] which stands for the Gevrey asymptotic expansion of mixed order (1/k′; (q, 1/k)) of each G_p(ϵ) on E_p, for all 0 ≤ p ≤ ς − 1. Theorem 2. Consider a complex Banach space (F, ‖·‖_F) and set a good covering E (a) Ĉ_p ⊂ E_p ∩ E_(p+1) and C̃_p ⊂ E_p ∩ E_(p+1) (b) Γ_(p,p+1) := −C̃_p + Ĉ_p is a simple closed curve with positive orientation whose interior contains ϵ Therefore, due to the residue formula, we can write Ψ_(p+1)(ϵ) − Ψ_p(ϵ) = (1/(2π√−1)) ∫_(Γ_(p,p+1)) Δ_p(ξ)/(ξ − ϵ) dξ = Δ_p(ϵ), (263) for all ϵ ∈ E_p ∩ E_(p+1), for all 0 ≤ p ≤ ς − 1 (with the convention that Ψ_ς = Ψ_0). In a second step, we derive asymptotic properties of Ψ_p. We fix an integer 0 ≤ p ≤ ς − 1 and a proper closed sector W contained in E_p. Let C′_p (resp. C′_(p−1)) be a path obtained by deforming C_p (resp. C_(p−1)) without moving the endpoints, in order that W is contained in the interior of the simple closed curve C′_(p−1) + c_p − C′_p (which is itself contained in E_p), where c_p is a circular arc joining the two points
v3-fos-license
2020-01-16T09:09:10.967Z
2020-01-09T00:00:00.000
213673906
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1149/1945-7111/ab6289", "pdf_hash": "ffd9c8b213ee7336d82bd6eff22e506adeede9d6", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:290", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "65b982156277dd8fe2fb215bfd9f5492e110e6a9", "year": 2020 }
pes2o/s2orc
Combinatorial Studies into the Effect of Thermally Inter-Diffused Magnesium on the Kinetics of Organic Coating Cathodic Delamination from Zinc Galvanized Steel This paper describes a high-throughput study into the role of Mg in preventing corrosion driven coating disbondment of organic coatings from Zn-Mg alloy galvanized steel. A graded Mg wedge is applied to a hot-dip zinc galvanised steel substrate using physical vapour deposition, and subsequently annealed to produce metallic inter-diffusion and formation of Mg2Zn11 intermetallic. An overcoat of electrically insulating polyvinyl butyral (PVB) is applied and corrosion is initiated from a penetrative coating defect using an aqueous electrolyte. The variation in Mg coating weight across the wedge facilitates a systematic investigation of the effect of Mg on Volta potential and the rate of corrosion driven cathodic coating disbondment using scanning Kelvin probe (SKP) potentiometry. The rate of cathodic disbondment is shown to decrease rapidly even at very low Mg coating weight (corresponding to 25 nm thickness before annealing). The results are explained in terms of the galvanic polarity of the corrosion cell formed between Zn exposed at the defect site, and the intact Zn-Mg layer at the metal-organic coating interface. Magnesium is increasingly used as an alloying element in zincbased galvanized coatings applied to steel substrates intended for architectural and automotive applications. [1][2][3][4] In such applications the galvanized surface is typically overcoated with an organic film (paint, lacquer or laminate). [1][2][3][4] Failure via corrosion-driven organic coating disbondment is therefore a problem of particular interest. [1][2][3][4] The current paper focuses on one important mode by which such corrosion-driven failure can occur, namely cathodic delamination. In the case of cathodic delamination it is the cathodic oxygen reduction reaction (ORR) which is responsible for the disbondment of the organic coating from the metal substrate. Anodic metal dissolution occurring in the defect region is coupled to a cathodic delamination front via a thin (<5 μm) gel like film of electrolyte which ingresses beneath the coating. [5][6][7] In the delaminated region, the potential increases as a function of distance away from the artificial defect, and the potential difference between this region and that of the intact coating (higher potential) creates a driving force for cation migration away from the defect region and beneath the organic coating. 7 Cathodic delamination is known to affect conventional organic coated zinc galvanized steel. [8][9][10] However, it has been reported that magnesium additions, present in the form of the intermetallic (IM) compound MgZn 2 , can act to profoundly decrease cathodic delamination rates. [1][2][3][4] For these reasons, the principal aim of the work to be presented here was to systematically characterize the relationship between magnesium content and rates of cathodic organic coating delamination from galvanized steel. In so doing a combinatorial metrology for obtaining such a characterization has been developed. High-throughput (combinatorial) metrologies have evolved over the past few decades as a useful alternative to the traditional "onecomposition-at-a-time" approach for rapidly determining compositionstructure-property relationships in novel materials systems. 
[11][12][13] For such determinations, a combinatorial library, typically a continuous composition-spread film, is first synthesized and then characterized using high throughput measurement techniques for selected properties of interest. Thus, two essential tools are required for a combinatorial approach: the first a means of synthesizing the composition spread library film, and the second a high-throughput measurement technique for screening the desired property or characteristic. Zinc-magnesium alloy coatings are typically applied to the steel substrate by hot dipping. 1,14 However, such an approach does not readily permit the rapid development of a combinatorial library of continuously varied alloy composition. One recognized methodology for so doing involves the deposition of metallic films via a sputtering route. [11][12][13] For example, simultaneous co-deposition using two or more sputtering targets can (in principle) directly create a continuous composition spread library on a single substrate. Alternatively, individual sputtering targets can be used in conjunction with a movable shutter to deposit a graded metal layer or "wedge" (shown schematically in Fig. 1a) of continuously varied thickness. 13,15,16 By superimposing wedges of dissimilar metals, and then annealing to produce thermal interdiffusion, a single-substrate combinatorial library of alloy composition can once again be created. Physical vapor deposition (PVD) has been widely used as a sputtering technique for the deposition of metallic coatings. [17][18][19][20][21] Furthermore, it has been demonstrated that magnesium layers, deposited onto a pre-existing galvanized steel substrate using PVD, may subsequently be thermally interdiffused by annealing to produce Zn-Mg coatings. 20 It has thus been shown that, for annealing temperatures between 300°C and 400°C, the principal products of interdiffusion are the crystalline IM compounds MgZn 2 and Mg 2 Zn 11 , which tend to form in bands or layers parallel with the basal plane of the coating. However, if an electrogalvanized steel (where the original coating is a pure zinc electrodeposit) is used as the substrate, results may be complicated by thermal diffusion of iron from the underlying steel into the Zn-Mg coating at higher annealing temperatures. 21,22 Given the above, it was decided to use PVD in conjunction with a movable shutter to deposit wedges of magnesium metal onto a preexisting galvanized steel substrate and then induce thermal interdiffusion by annealing. In so doing, the intention was to generate continuously varied, single-substrate, combinatorial libraries corresponding to zinc-rich portions of the zinc-magnesium binary diagram. One important drawback of this approach is that the layered structure of the interdiffused Zn-(PVD)Mg coatings (which from this point will be referred to as ZM coatings) does not strongly resemble the microstructure evolving in hot-dipped Zn-Mg or Zn-Al-Mg alloy coatings. Nevertheless, it would seem entirely appropriate as a means of determining how varying quantities of Mg-Zn IM, present at the metal surface, affects the tendency of an organic overcoat to undergo cathodic disbondment. Hot-dip galvanized steel, produced using a zinc spelter which also contains 0.15 wt% aluminium, was used as the substrate. The small aluminium addition acts to block iron-coating interdiffusion through the formation of a crystalline aluminium-rich IM layer (nominal composition ((Zn,Al) 5 Fe ) at the coating-steel interface. 
23 The high-throughput measurement technique selected as a means of characterizing the ZM coated combinatorial library samples with respect to cathodic delamination resistance involves the Scanning Kelvin Probe (SKP). SKP has been widely used as a means of following the progress of organic coating delamination from a variety of metallic substrates, [6][7][8][9][10]15,24,25 including galvanized steel [8][9][10] and pure phase MgZn 2 , 1-4 and recent work has highlighted its role in providing mechanistic information by measuring the characteristic potentials associated with different portions of a localized corrosion cell. 26 Thus, the high cathodic delamination resistance observed in the case of organic coated MgZn 2 has been attributed to an inversion in the normal polarity of the localized corrosion cell evolving between an intact organic-coated surface and MgZn 2 exposed at a penetrative organic coating defect. [1][2][3][4] Here we show that, by arranging the geometry of the localized corrosion cell in such a way that cathodic delamination occurs in a direction normal to the gradient of the original (PVD)Mg wedge deposit, it is possible to characterize simultaneously the full range of Mg coating weights present in a single-substrate combinatorial library. An exactly similar approach has been reported previously for PVD aluminium wedge deposits on iron. 15,16 Experimental Materials.-Hot dip galvanized steel was obtained from Tata Steel UK and consisted of 0.8 mm thick mild steel coated with a 20 μm layer of zinc (containing 0.15 wt% Al) on each side. Mg deposition sources were acquired from Kurt J Lesker and were of at least 99.95% purity. All chemicals, including polyvinyl butyral-co-vinyl alcohol-co-vinyl acetate (PVB) (molecular weight 70,000-100,000 Da), were of analytical grade purity and supplied by Sigma Aldrich Chemical Company. Methods.-Galvanized steel coupons of approximately 35 mm × 50 mm were cut from larger sheets and 5 μm abrasive alumina powder (Buehler) was used to remove any surface contamination. The coupons were then rinsed using distilled water and ethanol, and subjected to both an ultrasonic acetone and ethanol wash (each wash lasted 10 min). Coupons were finally rinsed with ethanol and dried using pressurized air. Application of Mg PVD film.-A Kurt J Lesker PVD75 Physical Vapour Deposition (PVD) system was used to apply thin Mg layers onto the cleaned zinc coated steel surface, following a procedure described previously. 15 Coupons were attached to the stainless steel holder via four screws to ensure that they remained flat once loaded into the deposition chamber. An area of the coupon, onto which deposition would take place, was exposed (shown schematically in the first image of Fig. 1b) using vacuum compatible polyimide tape (CHR K250 Saint-Gobain Performance Plastics). The holder was then loaded into the vacuum chamber of the PVD system in a position where it could not obstruct the sliding shutter mechanism (used to apply films of variable thickness). Before deposition of the Mg film, the surface of the coupon was plasma etched for 15 min in an argon atmosphere. An argon gas flow rate of 140 standard cubic centimetres per minute and an applied power density of 15 W·cm−2 were used. Deposition of the Mg film took place within the same chamber, which was maintained at a pressure of 3 × 10−3 Torr (the base pressure of the vacuum pump was 5 × 10−8 Torr).
Deposition was carried out onto the shutter for the first 5 min to ensure stabilization of the deposition rate and to avoid depositing surface oxide from the target. A Mg film, varying linearly in thickness, was then deposited onto the un-taped area. Prior to deposition, the shutter was positioned so as to cover the entire exposed area (shown schematically in the first image of Fig. 1a and the second image of Fig. 1b). Figure 1. A schematic representation of the process by which a graded wedge coating is formed using a sliding shutter mechanism and the finished wedge coating produced in the case of an Mg coating. The rate of shutter withdrawal was used to control the rate at which the film thickness was varied, following Eq. 1, where v is the shutter speed, w is the width of the area to be coated, T is the maximum film thickness required, and D is the deposition rate. Deposition was stopped at the point where all of the un-masked area was exposed. The argon gas flow rate used was 40 SCCM and the power was 150 W. A deposition rate of ∼4.9 Å·s−1 was achieved throughout. All coating thicknesses were monitored indirectly using a Sigma Instruments oscillating crystal (calibrated for the physical properties of the coating). For single thickness films (used for both GDOES and XRD measurements) the same methodology was followed, with the exception that the sliding shutter was not used. Magnesium film inter-diffusion.-Following Mg deposition, the coupons were taken out of the PVD chamber. Any tape (used to mask areas of the coupon) was removed and they were then transferred to a Carbolite tube furnace (time between removal from the PVD chamber and entering the furnace <10 min). Coupons were heated to 340°C, at which they were held for 10 min, before cooling. 21 An inert atmosphere was maintained throughout the period of the heating and cooling cycle by flowing argon gas at a rate of 1.5 l·min−1. Coupon temperature was monitored using a K-type thermocouple which was spot welded to its rear face. GDOES; Depth profiling of the inter-diffused films was performed using a Horiba-Jobin Yvon-5000RF glow discharge optical emission spectroscopy (GDOES) unit. XRD; Data were collected on a Bruker C2 Gadds with a 1.6 kW (40 kV, 40 mA) Cu sealed tube generator equipped with a single Goebel mirror (0.5 mm collimator), an xyz stage and a 2D multi-wire HiStar detector. A single 2D image was acquired for 600 s. Data were corrected for spatial distortion and integrated in Chi. Analysis was undertaken using reference data from PDF-2 (2002) International Centre for Diffraction Data. Cathodic disbondment studies.-A "Stratmann" type cell (shown in Fig. 2) was used to investigate the influence of Mg on the rate of corrosion driven cathodic delamination. [6][7][8][9][10]15,24,25 A strip of clear adhesive tape was applied across one end of a coupon in such a way that it ran parallel to the profile of the coating wedge. Insulating tape was attached onto the two parallel sides of the coupon which ran perpendicular to the clear adhesive tape. The remaining area was coated in an ethanolic solution (15.5 wt%) of PVB via bar casting. The insulating tape acted as a height guide and the dry film thickness (measured using a micrometer screw gauge) was 30 μm. A 15 mm × 20 mm area of bare metal was exposed by partially peeling back the clear tape. The residual clear tape and PVB overcoating acted as a barrier between the intact PVB coating and the exposed metal "defect" area.
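Eq. 1, referred to in the wedge-deposition step above, is not reproduced in the extracted text. The sketch below assumes the simple linear-wedge relation v = w·D/T (the shutter must sweep the coated width w in the time T/D needed to build the maximum thickness T at deposition rate D); both the relation and the coated width used in the example are assumptions, and only the deposition rate (~4.9 Å·s−1) and the 600 nm maximum thickness come from the text.

# Hypothetical worked example of the assumed linear-wedge shutter-speed relation v = w*D/T.
def shutter_speed_mm_per_s(width_mm: float, max_thickness_nm: float,
                           deposition_rate_nm_per_s: float) -> float:
    """Shutter withdrawal speed giving a linear thickness gradient (assumed v = w*D/T)."""
    return width_mm * deposition_rate_nm_per_s / max_thickness_nm

# ~4.9 Angstrom/s = 0.49 nm/s; 600 nm maximum wedge thickness; 30 mm coated width (assumed).
v = shutter_speed_mm_per_s(width_mm=30.0, max_thickness_nm=600.0, deposition_rate_nm_per_s=0.49)
print(f"assumed shutter speed: {v:.4f} mm/s")  # ~0.0245 mm/s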
Non-corrosive silicone rubber was used to line the remaining sides of the defect, to which a 2 cm3 volume of 0.86 M aqueous NaCl electrolyte was applied to initiate corrosion. Figure 2a shows a schematic version of the delamination cell arrangement indicating the various Mg alloyed (ZM), unalloyed Zn (HDG), PVB coated and uncoated regions of the cell. Figure 2b shows the PVB (30 μm) coated samples as part of the delamination cell arrangement prior to introduction of the 0.86 M NaCl experimental electrolyte. It may be appreciated from Figs. 2a and 2b that the ZM portion of the metal substrate surface does not extend to the lip of the PVB coated area. Thus, any process of cathodic PVB disbondment will become initiated at the Mg-free Zn (HDG) metal-PVB interface and must propagate for a distance of ∼4 mm before encountering the ZM-PVB interface. The reasons for choosing to adopt such an arrangement were twofold; 1.) by allowing initiation to occur on Zn we de-convolve the effects of Mg on the initiation and propagation of cathodic disbondment. By so doing we exclude the possibility that Mg is simply acting to reduce the probability of disbondment becoming initiated and not to inhibit subsequent disbondment kinetics. 2.) Once established, the polarity of the cathodic disbondment cell disfavours migration of Cl− ions beneath the disbonded organic films. Exclusion of Cl− minimizes the tendency for anodic Mg dissolution to occur. The effect of Mg on cathodic disbondment kinetics may therefore be quantified in isolation without the risk of complication from anodic undercutting processes. Calibration of the SKP was completed to obtain a relationship between the Volta potential value (as recorded by the SKP) and the corrosion potential (E corr) associated with the polymer covered metal. 6,9 Ag/Ag+, Cu/Cu2+, Fe/Fe2+ and Zn/Zn2+ couples were used to complete the calibration process following a procedure established previously. 6,9 Each of the metals was machined into a disc (15 mm diameter, 5 mm thick) within which a well (1 mm deep, 8 mm diameter) was formed. 0.5 M aqueous solutions of the respective metal chloride salt (0.5 M nitrate salt in the case of Ag) were used to fill the well. The metal electrode potential was simultaneously measured vs SCE using a Solartron 1280 potentiostat and compared to the Volta potential difference measured above the filled well by the SKP. A constant humidity of ca. 95% RH was achieved by placing reservoirs of 0.86 M aqueous NaCl (pH 6.5) within the SKP chamber. The temperature was maintained at 25°C. The SKP gold wire probe (125 μm diameter) was positioned at a constant height of 100 μm above the sample and scanned along twelve 12 mm long lines, perpendicular and contiguous to the defect/coating interface. Given the orientation of the inter-diffused Mg wedge, scanning in this manner meant that each scan line followed the progress of the PVB delaminating from ZM alloys of different compositions (see Fig. 2b). The number of scan lines was chosen so as to maximise the number of Mg thickness values sampled whilst ensuring that the distance between lines was greater than the lateral resolution L of the SKP (given by Eq. 2), where d is the probe-specimen distance and D is the probe diameter. 27 Using Eq. 2 it can be shown that in the case that d = 100 μm and D = 125 μm, the lateral resolution will be ∼140 μm.
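Before continuing, a brief aside on the SKP calibration step described above, which maps measured Volta potentials onto corrosion potentials via the reference couples. The sketch below assumes that this mapping is a simple straight-line fit; the numerical values are placeholders, not the calibration data of this study.

# Sketch of a linear SKP calibration (assumed): fit E_corr vs Volta potential over the
# reference couples, then use the fitted line to convert scanned Volta potentials.
import numpy as np

volta_mV = np.array([300.0, 150.0, -250.0, -700.0])       # hypothetical SKP readings
e_corr_mV_she = np.array([550.0, 340.0, -200.0, -760.0])  # hypothetical couple potentials

slope, intercept = np.polyfit(volta_mV, e_corr_mV_she, 1)  # straight-line calibration

def volta_to_ecorr(volta_mv):
    """Convert an SKP Volta potential (mV) to a calibrated E_corr (mV vs SHE)."""
    return slope * volta_mv + intercept

print(volta_to_ecorr(np.array([0.0, -500.0])))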
For the case that the inter line spacing was greater than 140 μm, Volta potential values recorded for each scan line were not influenced by the adjacent scan line (for which Mg thickness would be substantially different). Scanning took place immediately following the application of the electrolyte to the defect. Subsequent scans took place at intervals of 1 h for a total of up to 48 h. 20 E corr values were recorded per mm. In some cases E corr profiles, made noisy by the nature of the interdiffused ZM layer, were smoothed in Origin software using a Savitzky Golay method whereby successive sub-sets of 20 adjacent data points were fitted with a second degree polynomial by the method of linear least squares. This process was completed to reduce the complexity of the profiles and to aid in determining the location of the cathodic delamination front. Results and Discussion Structural/compositional characteristics.- Figure 2b shows the appearance of the combinatorial ZM wedge sample immediately after thermal inter-diffusion of HDG and a 600 nm thick Mg wedge, and prior to over-coating with PVB. The unalloyed HDG surface is visibly polycrystalline. By contrast, the thermally inter-diffused ZM in the thicker portion of the ZM region appears significantly more uniform, light grey in colouration, with no surface crystallinity resolvable by eye. Towards the thinner edge of the wedge the ZM coating, to some extent, follows the topography of the substrate allowing the polycrystalline morphology of the HDG to become visible. Figure 3 shows an SEM cross section through the ZM layer for the case that the Mg layer thickness prior to inter-diffusion was 600 nm. The ZM layer is somewhat uneven, reflecting the uneven HDG substrate surface, and varies in thickness between 1 and 2 μm. Two layers are visible in the interdiffused ZM coating. Entirely similar results have been reported by other authors using PVD and thermal inter-diffusion and attributed it to the formation of MgZn 2 (upper layer) and Mg 2 Zn 11 (lower layer). 21,22 However, Figure 4 shows a GDOES depth profile derived from a ZM surface produced by the thermal inter-diffusion of a 600 nm Mg layer, and the near surface Mg concentration is 8 wt%, consistent with the presence of Mg 2 Zn 11 . Furthermore, a glancing angle XRD spectrum, obtained from an exactly similar sample surface and shown in Fig. 5, again indicates the presence of crystalline Mg 2 Zn 11 but not MgZn 2 . The various peaks in Fig. 5 are labelled in accordance with reference data from PDF-2 (2002) International Centre for Diffraction Data. Thus, if any MgZn 2 is present in the interdiffused coating it is not present in sufficient quantities to be evident in either the XRD or GDOES data. SKP potentiometry.-The effect of Mg film thickness (d) upon the E corr values for inter-diffused ZM films over-coated with 30 μm PVB was quantified during initial experiments. The meaning of E intact (in the context of a metal coated with a non-conducting polymer) has been explained in detail previously, 24 and reflects the open circuit potential of the oxide-covered metal substrate. Figure 6a shows a profile of E intact as a function of initial Mg layer thickness. Measurements were obtained using SKP in humidity (∼96% RH) air prior to the introduction of 0.86 M NaCl delamination electrolyte. Values shown are based on the average of all readings taken at each thickness and error intervals represent one standard deviation on the mean. 
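Referring back to the profile-smoothing step described above, a minimal sketch of a Savitzky-Golay smoothing pass is given below. scipy's implementation requires an odd window length, so a 21-point window is used in place of the 20-point sub-sets quoted in the text; the polynomial order of 2 matches the stated second-degree fit, and the profile itself is synthetic, for illustration only.

# Savitzky-Golay smoothing of a noisy, synthetic E_corr(x) profile.
import numpy as np
from scipy.signal import savgol_filter

x_um = np.arange(0, 12000, 50)                   # 12 mm scan, one point per 50 um
rng = np.random.default_rng(0)
noisy_ecorr = -0.5 + 0.1 * np.tanh((x_um - 6000) / 500.0) + 0.02 * rng.standard_normal(x_um.size)

smoothed = savgol_filter(noisy_ecorr, window_length=21, polyorder=2)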
The local E intact values vary with the thickness of the initial (pre inter-diffusion) Mg-layer. Mg-free, Zn (HDG) portions of the sample exhibit E intact of ∼ − 0.3 V vs SHE i.e. similar to values reported previously for PVB coated HDG. 8,9,25 The potential value remains constant for Mg thickness values ⩾100 nm and ⩽400 nm. A decrease in potential (of up to ∼0.3 V) is observed for Mg thickness values ⩾400 nm. This finding would suggest that the outer surface of the inter-diffused ZM layer is becoming enriched in Mg for higher (initial) Mg thickness. There is no evidence from XRD (Fig. 5) for the formation of MgZn 2 under these circumstances. However, it has been proposed that ZM intermetallic compounds may exhibit some degree of variable stoichiometry. 28 The observed variation in Volta potential ( Y D ) values, with original Mg film thickness in Fig. 6a is most readily explained on the basis of a contact potential developing between the HDG zinc substrate and superposed MgZn IM layer(s). The Volta potential difference (or contact potential) established between the Zn and the overlying ZM layer is ∼0.75 eV and is a result of their varying work functions (W). When two dissimilar, electronically conducting materials are connected together (in such a way that electrons can be transported between them) they will share a common Fermi level and the Volta potential difference between them can be calculated using Eq. 3. Despite this, the calculated value is expected to be a reasonable reflection of the true value. As far as the authors are aware, the work function of Mg 2 Zn 11 has not been published previously. However, the value of 3.6 eV calculated for Mg 2 Zn 11 would seem reasonable as a first attempt, given that it falls below the value widely adopted for pure Zn (∼4.33 eV) 29 and is more consistent with that of pure Mg (∼3.68 eV). 30 The variation in E intact with Mg layer thickness is more significant for initial (pre-inter-diffusion) Mg film thicknesses less than 100 nm than for thicknesses between ∼150 nm and ∼450 nm (Fig. 6). The value of E intact recorded decreases from (−0.3 ± 0.05) V vs SHE (similar to values reported previously for PVB coated HDG 8,9,25 ) to (−0.8 ± 0.1) V vs SHE as Mg layer thickness values increase from 0 nm to 70 nm. This decrease in potential can be seen more clearly in Fig. 7, which shows E intact as a function of initial Mg layer thickness for a 100 nm wedge. The value of E intact decreases almost linearly from (−0.3 ± 0.02) V vs SHE to (−0.6 ± 0.1) V vs SHE for thicknesses between 0 nm and 70 nm. It seems reasonable to assume that a less complete ZM layer is produced at these reduced thicknesses and that E intact tends toward that of the underlying HDG substrate. Cathodic delamination studies.-Unmodified HDG; Baseline delamination kinetics were obtained for the cathodic disbondment of PVB from a plain HDG substrate. When cathodic disbondment occurred on the unmodified HDG substrate a series of timedependent E corr (x) profiles were obtained as shown in Fig. 8. The significance of these profiles has been described at length elsewhere and will be dealt with here only briefly. [8][9][10] Initially, the potential value associated with the intact PVB coated Zn surface (E intact ) corresponds to a "passive" state (∼−0.3 V vs SHE) and is determined by the relative rates of anodic zinc dissolution (Eq. 4) and the ORR (Eq. 6). 
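Before continuing with the delamination profiles below, a brief aside on the contact-potential estimate discussed earlier in this section. The expression referred to as Eq. 3 is not reproduced in the extracted text; assuming the standard relation between the Volta potential difference of two electronically connected conductors and their work functions, the quantities quoted above are mutually consistent, as the LaTeX sketch indicates (the relation itself is an assumption about the form of Eq. 3).

\[
\Delta\psi \;\approx\; \frac{W_{\mathrm{Zn}} - W_{\mathrm{Mg_2Zn_{11}}}}{e} \;\approx\; \frac{4.33\ \mathrm{eV} - 3.6\ \mathrm{eV}}{e} \;\approx\; 0.7\ \mathrm{V},
\]

which is close to the ∼0.75 V (quoted above in eV) Volta potential difference reported between the HDG and ZM regions.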
[8][9][10] The rate of the anodic reaction is slowed by the presence of the zinc hydr(oxide) layer whilst the permeability of the PVB layer means that the ORR is kinetically faster. Upon contact with the NaCl electrolyte (∼pH 6.5), the zinc hydr(oxide) layer dissolves and Zn undergoes anodic dissolution via Eq. 4. The potential within the near defect region (I) falls to values corresponding to actively corroding Zn (∼−0.7 V vs SHE) following Eq. 5. [8][9][10] The sharp transition between the defect and intact coating (IV) is referred to as the delamination front (III) and can be seen in the profiles in Fig. 8. These profiles can be understood as a galvanic cell within which the defect is the principal anodic site and is coupled to region IV. This galvanic coupling results in an anodic potential shift at the defect and a cathodic potential shift at the metal/coating interface and is facilitated by an ionic current flux which passes through the underfilm electrolyte as delamination proceeds (Region II). Zn → Zn2+ + 2e− (4) A sharp inflection in potential values is seen to move from left to right (in Fig. 8) as delamination proceeds. The point of maximum slope (dE/dx) is taken as a semi-empirical indicator of the delamination front. The time dependent distance of the front from the artificial defect (x del) can therefore be monitored easily and is plotted as a function of the associated delamination time (t del) to obtain kinetic data. For unpigmented coatings, it is the migration of electrolyte cations (in this case Na+) in the underfilm electrolyte which typically controls the delamination rate. 7 Delamination kinetics are then predicted to be parabolic and to follow Eq. 12, where t i is the time before initiation of corrosion-driven delamination and k d is the parabolic delamination rate constant. The Mg layer thicknesses associated with each scan line are given in Table I. Figure 9 shows the time dependent E corr (x) profiles obtained for line 11 (588 nm Mg). In Fig. 9 it may be seen that, initially, E intact values over the unmodified Zn portion (∼−0.3 V vs SHE) reflect those observed in Fig. 8. As delamination proceeds over the unmodified Zn surface, E corr (x) profiles evolve which are, in the main, entirely similar to those in Fig. 8. However, once the delamination front reaches the edge of the ZM region no further propagation is observed. That is to say, no wave of potential variation is observed to move over the ZM surface. In comparison to Fig. 8, the E corr recorded in the intact region is lower (∼−1.0 V vs SHE) than that recorded within the defect region (∼−0.75 V vs SHE). It would therefore seem reasonable to propose that, as with pure MgZn 2 , 1-4,11 the lack of cathodic delamination observed in the case of Mg 2 Zn 11 or MgZn 2 (formed at a galvanized Zn surface) results from the galvanic polarity of the corrosion cell. SKP has previously been used to demonstrate the ability of MgZn 2 to resist cathodic coating delamination. [1][2][3][4]11 The MgZn 2 and Mg 2 Zn 11 IM phases formed within Zn-Mg and Zn-Al-Mg alloys have been shown to corrode sacrificially. The Mg 2+ ions released can react with OH − (produced at the cathode) to form Mg(OH) 2 3,14,15 and this process ensures that the surface pH is maintained at values at which protective zinc hydr(oxides) are stable. 4,12 The selective dissolution which occurred in the defect region meant that the defect potential increased to values of freely corroding zinc.
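Eq. 12 itself is not reproduced in the extracted text. The parabolic rate law conventionally used for cation-migration-controlled cathodic delamination, which matches the quantities named above (x del, t del, t i and k d), has the form sketched below in LaTeX; this is offered as an assumption about the intended expression rather than a verbatim reproduction.

\[
x_{\mathrm{del}}(t_{\mathrm{del}}) \;=\; k_{d}\,\bigl(t_{\mathrm{del}} - t_{i}\bigr)^{1/2}.
\]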
The formation of a large band gap oxide at the intact MgZn 2 /oxide/polymer interface resulted in increased cathodic polarization, and the galvanic polarity of the corrosion cell meant that cathodic delamination from the IM surface was completely inhibited. [1][2][3][4] It is also evident from Fig. 9 that once the cathodic delamination front has reached the ZM portion of the coated sample, and the delamination rate has fallen to zero, potentials in the delaminated (zinc) portion of the sample start to rise. The mean value of E corr in the delaminated region is plotted vs time in Fig. 10 which shows that E corr increases from −0.82 V vs SHE (16 h) to −0.74 V vs SHE (20 h) over a period of 4 h after the cessation of delamination. If the measured values of E corr are assumed to be close to the reversible potential for Zn oxidation (a cathodically controlled reaction), obtained via Eq. 10, and that the underfilm electrolyte is in isopiestic equilibrium with the reservoir electrolyte (0.86 M), a first approximation of the underfilm pH values can be made using Eq. 11. The implication here is that once OH − production (through Eq. 6 at the delamination front) stops, underfilm pH moderates as OH − is consumed. The further implication is that a significant fraction of total underfilm ORR occurs at the delamination front (as opposed to the portion of the sample where delamination has already occurred). This would be true if either i.) the underfilm electrolyte layer was acting as a barrier to O 2 mass transport and/or ii.) the corroded underfilm Zn surface was relatively inactive with respect to the ORR. A question arises as to what extent the delaminated regions E corr values recorded by the SKP, and shown in Fig. 9, actually reflect under-film pH. The accuracy with which Eq. 11 is able to predict the underfilm pH is dependent upon the degree to which polarization and ohmic effects distort the thermodynamic potential associated with the system. Whilst it is true that the ohmic contribution is unknown, there is no evidence of a potential gradient developing between the anode and cathode shown in Fig. 9. This finding is consistent with the cessation of the cathodic delamination current which occurs after 16 h. One alternative explanation for the increase of potential in the delaminated area would be a corresponding shift in the potential observed at the defect, for example in the case of corrosion product accumulation. However, given both the defect geometry, and the amount of electrolyte contained within the defect, it is unlikely that significant shifts in the defect potential would occur after 100 h. It has previously been shown that potentials in the near neardefect delaminated region can become "pinned" by galvanic coupling to the defect. 10 Thus, when the process of cathodic delamination on zinc was arrested by switching the experimental atmosphere to nitrogen, E corr values beneath the delaminated coating close to the defect were not found to change significantly. 10 In contrast, E corr values close to the delamination front (and far from the defect) were found to change in a manner determined by the zinc/zincate equilibrium (as is assumed in the current paper). Furthermore, near-defect potential "pinning" is not observed under circumstances where coupling to the defect is diminished by the underfilm precipitation of corrosion products. 31 The SKP derived E corr profiles in Fig. 
9 show no evidence of potential "pinning" to the defect after cathodic disbondment has ceased and on this basis the assumption that the subsequent time-dependent evolution of underfilm E corr results from a time-dependent pH increase would seem to be a reasonable one. 100 nm Wedge; Experiments were completed on three different 600 nm wedge samples and cathodic disbondment was not observed over the Mg bearing portions in any of the cases. The analysis of time-dependent SKP E corr data was therefore repeated for different ZM layer thicknesses between 0 and 100 nm initial Mg (Table I). Figure 11 shows the time dependent E corr (x) profiles obtained for line 7 (25 nm Mg). The profiles shown in Fig. 11 are somewhat noisy as a result of the nature of the inter-diffused ZM layer, and have been smoothed to allow for easy identification of the delamination front. The accurate determination of the delaminated distance was confirmed by visual inspection of the sample following removal from the SKP. In Fig. 11 E intact values are more variable over the ZM portion of the sample (∼−0.55 to ∼ −0.35 V vs SHE) than in Fig. 9, suggesting greater heterogeneity in the thinner ZM layer. Once again the delamination front propagates from left to right over the unmodified Zn portion of the sample and comes to a stop at the edge of the ZM region. Once delamination has ceased, E corr over the unmodified Zn surface increases with time and reaches ∼ −0.7 V vs SHE after 45 h. A slight but clear decrease in potential is observed within the ZM region (∼7000 μm away from the defect) after ∼45 h. However, upon inspecting the sample following experimentation, it was confirmed that there was no evidence of mechanical coating disbondment and any potential drop shown cannot be attributed to the occurrence of a cathodic disbondment mechanism. In comparison to Fig. 9, the potential difference between the intact and defect region is significantly reduced and only limited galvanic polarity (required to prevent cathodic delamination) is observed. In Fig. 12 the potential difference between the intact ZM portion (16 nm Mg) and the defect is further reduced (compared to Fig. 11) and the delamination front continues to propagate from the unmodified Zn portion of the sample and over the ZM region. Figure 13 shows the time-dependent E corr (x) data derived from the Mg-free "corridor" region of 600 nm ZM wedge sample which was intended to provide an internal control for the combinatorial metrology. Initially E corr (x) profiles become established and evolve in a manner identical with the PVB coated HDG (Fig. 8) for which E intact is initially ∼ −0.3 V vs SHE and E corr falls to ∼ −0.7 V vs SHE in the delaminated region. The delamination front moves from left to right until it reaches a position which is parallel with the edge of the ZM coated portion of the sample. At this point the delamination front continues to advance unimpeded (as shown in Fig. 13). However, E corr values in the delaminated region now start to increase in a similar fashion to in Figs. 11 and 12 and reach ∼−0.5 V vs SHE in the defect area after 20 h. Once the PVB coating has delaminated and an electrolyte layer has ingressed, a lateral diffusion of electrolyte ions becomes possible. That is to say, ions (such as OH − and H + ), dissolved in the electrolyte layer, can diffuse in a direction normal to the direction of the delamination. Furthermore, because of the Grotthus mechanism, 32 the diffusion of OH − /H + is expected to be relatively fast. 
Consequently, pH values in the delaminated region will tend to become uniformized in the direction normal to delamination. This will in turn produce a uniformization of underfilm E corr . Figure 14 shows x del plotted as a function of the associated t del . Cathodic delamination was therefore suppressed by ∼25-35 nm thick Mg layers, and it would seem reasonable to propose that any error would result in a ∼10 nm difference in the thickness of Mg needed to prevent cathodic delamination. The above findings have a number of implications; i) The similarity of x del vs time kinetics for the HDG Zn and the Zn corridor (internal control) experiments supports the validity of the combinatorial approach. That is to say, there is no evidence that the lateral proximity of a compositionally different surface interferes with local rates of PVB coating delamination. ii) Figures 8 and 14 suggest that E corr values in the delaminated region do not significantly influence the delamination rate on Zn. However, it must be borne in mind that delamination kinetics on Zn are parabolic and therefore controlled by ionic mass transport between the defect and the delamination front rather than any activation-controlled process at the front itself. 7 Conclusion Work has been completed to show that it is possible to produce a Mg/Zn alloy combinatorial library on a single substrate through physical vapour deposition (PVD) of a Mg "wedge", followed by thermal treatment to promote interdiffusion with the HDG Zn substrate. A rapid, parallel characterisation of the alloy library, with respect to resistance to corrosion driven cathodic delamination, can then be completed using SKP. • A combination of SEM, GDOES and XRD was used to show that thermal heat treatment resulted in the formation of a Zn-Mg coating consisting principally of a Mg 2 Zn 11 layer, with the possible superposition of an MgZn 2 layer at higher Mg coating weights. • The presence of surface IMs produced a depression in Volta potential consistent with Mg 2 Zn 11 exhibiting a work function of 3.6 eV. • When Volta potentials are converted into electrochemical potentials (E corr ) by calibration, the local E intact (which, in the context of an intact non conducting PVB coated metal, reflects the open circuit potential of the oxide-covered metal substrate 24 ) values varied with the initial (pre inter-diffusion) Mg layer thickness. Mg free, Zn (HDG) portions of the sample exhibited E intact of ∼ −0.3 V vs SHE. The E intact value remained constant at −1.0 V vs SHE for Mg thickness values ⩾100 nm and ⩽400 nm and decreased to −1.4 V vs SHE for Mg thickness values ⩾400 nm. • A greater variation in E intact values with thickness was observed in the case that the initial Mg layer thickness was <100 nm. The potential decreased almost linearly from −0.3 V vs SHE to −0.6 V vs SHE for thicknesses between 0 nm and 70 nm. • The susceptibility of ZnMg IMs (formed by interdiffusion) to corrosion driven cathodic delamination is strongly dependent upon (pre-interdiffusion) Mg coating weight. A (pre-interdiffusion) thickness of as little as 25 nm is sufficient to arrest actively propagating cathodic delamination on Zn (HDG). • There is no evidence that any "cross-talk," which may occur between regions of dissimilar Mg coating weight, affects the cathodic delamination rate. • The findings are entirely consistent with the theory advanced by Rohwerder et al.
[1][2][3][4]33 That is to say that, as with pure MgZn 2 , the effect of the Mg 2 Zn 11 formed is to reduce the overpotential between the defect and delamination front such that the driving force for cation insertion at the delamination zone is diminished or removed altogether. The implication here is that it is the presence of Mg that results in the resistance to corrosion driven cathodic delamination, as opposed to the presence of a particular Mg intermetallic. • The finding that cathodic delamination is suppressed in cases when a reduction in overpotential between the defect and delamination front is observed, supports the hypothesis that the potential gradient can be adjusted (to zero) by changing the composition of the magnesium-zinc oxide at the surface of coatings. This in turn reduces the driving force for ion migration and galvanic coupling such as to suppress corrosion driven coating delamination. 1 • It is significant that the presence of Mg 2 Zn 11 alone would appear sufficient to strongly inhibit cathodic disbondment and the presence of MgZn 2 (in quantities detectable by glancing angle XRD) is not required.
v3-fos-license
2021-05-12T06:16:53.408Z
2021-05-10T00:00:00.000
234360507
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-021-89170-y.pdf", "pdf_hash": "e9049a02887cc1e822d1b5561a351969fc21b338", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:291", "s2fieldsofstudy": [ "Biology" ], "sha1": "e10d3fde6400eb69bdc03c8a604dab7a2c343703", "year": 2021 }
pes2o/s2orc
Single-cell transcriptome profiling of buffelgrass (Cenchrus ciliaris) eggs unveils apomictic parthenogenesis signatures Apomixis, a type of asexual reproduction in angiosperms, results in progenies that are genetically identical to the mother plant. It is a highly desirable trait in agriculture due to its potential to preserve heterosis of F1 hybrids through subsequent generations. However, no major crops are apomictic. Deciphering mechanisms underlying apomixis becomes one of the alternatives to engineer self-reproducing capability into major crops. Parthenogenesis, a major component of apomixis, commonly described as the ability to initiate embryo formation from the egg cell without fertilization, also can be valuable in plant breeding for doubled haploid production. A deeper understanding of transcriptional differences between parthenogenetic and sexual or non-parthenogenetic eggs can assist with pathway engineering. By conducting laser capture microdissection-based RNA-seq on sexual and parthenogenetic egg cells on the day of anthesis, a de novo transcriptome for the Cenchrus ciliaris egg cells was created, transcriptional profiles that distinguish the parthenogenetic egg from its sexual counterpart were identified, and functional roles for a few transcription factors in promoting natural parthenogenesis were suggested. These transcriptome data expand upon previous gene expression studies and will be a resource for future research on the transcriptome of egg cells in parthenogenetic and sexual genotypes. Apomixis is commonly defined as a reproductive phenomenon in angiosperms where embryos form from maternal cells in the ovule without meiosis and syngamy, resulting in asexual seed formation [1][2][3] . Apomixis is widespread in flowering plants and has now been described in 148 genera as adventitious embryony where the embryo arises from somatic cells in the ovule, 110 genera as aposporous and 68 genera as diplosporous where the embryo develops from the unreduced egg in a nucellar or megaspore mother cell-derived embryo sac, respectively 4 . While apomicts are by nature clonal and can produce seeds of identical genotype to the mother plant, no major crops have this self-reproducing capability. Elucidating molecular mechanisms of apomixis will significantly enhance the plant breeding toolbox for preserving genetic composition of elite hybrid cultivars. Significant strides have been made in deciphering the genetic control of apomixis in a wide range of natural apomicts 3 , although genes underlying the multiple components comprising apomixis are still not fully elucidated. In the natural apomict Pennisetum squamulatum (syn. Cenchrus squamulatus), mapping in a F 1 population derived from a cross between Pennisetum glaucum (syn. C. americanus; pearl millet) x P. squamulatum segregating for apospory and sexuality revealed 12 apospory-linked markers, which defined a contiguous aposporyspecific genomic region (ASGR) 5 . The linkage of many of these markers to apospory was further shown in C. ciliaris 6 . The physical size of the ASGR in P. squamulatum was determined to encompass ~ 50 Mb based on fluorescent in situ hybridization mapping of multiple ASGR-linked and one ASGR-recombinant BAC clones. The physical size of the ASGR in C. ciliaris could not be determined due to a lack of identified ASGR-recombinant BAC clones; however, the co-linearity of shared ASGR-linked BAC clones between the two species suggested that the two apomictic species share a similar physical size [7][8][9] . 
Sequencing of ASGR-linked bacterial artificial chromosome clones from P. squamulatum and C. ciliaris led to the identification of genes within the region including an AP2-domain containing transcription factor (ASGR-BABY BOOM-like, ASGR-BBML) 10,11 . The PsASGR-BBML transgene was shown to induce parthenogenesis in sexual pearl millet as well as in maize and rice, but not in Arabidopsis 12,13 . Comparative genomics and transcriptomics studies of apomictic plants and their sexual relatives or siblings often can help unlock the functional molecular components of apomixis that have not been genetically tractable. However, genomes of most natural apomicts remain unsequenced, and targeting the critical tissues where components of apomixis are expressed for RNA-seq has been biologically and technically challenging 14 . Recent expression studies have focused on and advanced our understanding of the first component in the apomixis pathway, termed apomeiosis, by investigating differentially expressed molecular signatures between sexual and apomictic reproductive tissues. Apomeiosis encompasses the initiation of mitotic divisions leading to the production of unreduced embryo sacs derived either from a somatic cell of the ovule nucellus (apospory) or from a megaspore mother cell (diplospory). Differential expression in whole ovules, each containing one to a few apomeiotic cells, to more targeted cells isolated using laser capture microdissection (LCM) has been investigated [15][16][17][21][22][23] . Gene expression activity within monocots prior to and after fertilization has been interrogated in isolated egg/zygote cells of maize and rice, with an emphasis on zygotic genome activation 24,25 . An important missing component in current expression studies is the transcriptomic comparison between sexual and parthenogenetic egg cells. Parthenogenesis, embryo development of the egg cell without fertilization, is the second component of the apomixis pathway, with transcriptomic analysis providing an approach to expose pathways underlying natural parthenogenesis. Discovery of genes driving parthenogenesis in the natural apomict C. ciliaris has been challenging due to limited genomic resources and technical difficulties in accessing the egg cell expressing parthenogenesis. Dropletbased single cell RNA-seq recently has been applied to acquire gene expression profiles for complex tissues such as Arabidopsis roots and developing ears of maize 26,27 . Yet it remains challenging for the current technology to capture extremely rare transcripts underlying core traits of interest in sparse and spatially-restricted cells like eggs. By conducting laser capture microdissection (LCM)-based RNA-seq on unfertilized sexual and parthenogenetic eggs on the day of anthesis and deep sequencing, combined with de novo transcriptome assembly and computational analyses, we created a de novo transcriptome with sequence information for C. ciliaris eggs. Transcriptional profiles that distinguish the parthenogenetic egg from its sexual counterpart were identified, and suggested functional roles for a few key transcription factors and pathways in promoting natural parthenogenesis. Our transcriptome data complement previous gene expression studies and will be an important resource for research on natural parthenogenesis. Results and discussion Parthenogenetic embryo frequency. 
Twelve percent of unpollinated ovaries from obligately apomictic genotype B-12-9, two days after anthesis, were shown to contain parthenogenetic embryos (Supplementary Fig. S1; Table 1) as evidenced by the presence of a multicellular embryo without endosperm development deduced from retention of the polar nuclei. This frequency of parthenogenesis in unpollinated ovaries of apomictic C. ciliaris is consistent with observations of Veille-Calzada 28 who reported 7-27% proembryos in unpollinated ovaries of different genotypes. No parthenogenetic embryos were observed in the sexual genotype B-2s. These data confirm the apomictic and sexual phenotypes of the plants used in this experiment and the obligate nature of sexuality for B-2s. Egg identification and collection. Serial sections of sexual C. ciliaris genotype B-2s showed a single sexual embryo sac containing the egg apparatus at the micropylar end, two polar nuclei, and antipodals at the chalazal end (Fig. 1A-F). In aposporous genotype B-12-9, an egg apparatus and polar nuclei, but no antipodals, were observed in the aposporous embryo sac nearest to the micropylar end of the ovule (Fig. 1G-L). An additional aposporous embryo sac also was frequently detected. This is consistent with previous descriptions of aposporous embryo sac development in C. ciliaris 29 . De novo egg, ovule, and ovule plus egg transcriptome assemblies. De novo assembly and annotation of the C. ciliaris egg transcriptome was challenging for several reasons. (1) No annotated reference C. ciliaris genome sequence is available to guide transcriptome assembly. (2) The ultra-low input quantity of egg cell RNA required multiple cycles of cDNA amplification for downstream library preparation, potentially introducing PCR-generated nucleotide errors within the sequencing reads resulting in an excess of allelic variants. (3) The inevitable compromised quality of LCM-derived RNA and a poly-A based cDNA synthesis strategy resulting in the loss of 5′ mRNA ends 30 . To partially address these challenges, high-quality, abundant ovule RNA was used for amplification-free cDNA library construction. The resulting reads were used to improve the assembly and annotation of LCM egg reads based on the hypothesis that the egg, as a biological subset of ovule tissues, will be represented in the whole ovule transcriptome with more complete transcripts than in the LCM egg transcriptome. Therefore, three de novo assemblies were constructed using Trinity: ovule only, LCM egg, and LCM egg plus ovule. Statistics, metrics for completeness, and number of unique plant annotations for each assembly were compared (Table 2). As expected, a significantly longer N50 (1747 vs 433 bases) and higher overall alignment rate (96.1% vs 77.6%) were achieved in the ovule assembly compared with the LCM egg, while the combined LCM egg plus ovule assembly had a slightly lower N50 than ovule alone. By subjecting our transcriptome assemblies to BUSCO analysis 31 , the ovule transcriptome was found to be near-complete, the LCM egg transcriptome highly fragmented and the combined LCM egg plus ovule transcriptome slightly less complete than ovule alone. A BLASTN (e-value < 1e−10) search against the NCBI nt database found the greatest number of unique plant hit descriptions (36,861) in the LCM egg plus ovule assembly with slightly fewer (33,826) in the ovule assembly. The fewest unique plant hit descriptions (23,049) were found in the LCM egg assembly.
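As an aside on the N50 values quoted above (1747 vs 433 bases), the sketch below shows how an N50 is computed from a set of contig lengths: contigs are sorted by length and the N50 is the length at which the running total first reaches half of the total assembled bases. The length list is illustrative only.

# Minimal N50 computation from a list of contig lengths.
def n50(contig_lengths):
    """Return the N50 of a collection of contig lengths (0 for an empty input)."""
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length
    return 0

print(n50([5000, 3000, 2000, 1000, 800, 500, 200]))  # -> 3000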
These observations indicate that most LCM egg transcripts are represented in the ovule transcriptome, and that the LCM egg plus ovule assembly not only properly captured the diversity from both individual assemblies but also captured transcripts that failed to be assembled and annotated solely by LCM egg or ovule data alone. For example, CcASGR-BBML reads were detected in the apomictic ovule read data but no contig was present in the ovule assembly, whereas a CcASGR-BBML contig (TRINITY_DN28916_c0_g1_i1) was identified in the LCM egg plus ovule assembly. The number of contigs in the LCM egg assembly and in the LCM egg plus ovule assembly with detectable egg cell expression (hit count > 0 across all LCM egg libraries) was examined (Table 3). The LCM egg plus ovule assembly showed an additional 14,831 contigs with LCM egg expression (a 9.8% increase) and a 26.8% increase in the number of annotations identified. These results support the combination of reads generated from the LCM egg and ovule to improve both assembly and annotation of egg cell sequences. The LCM egg plus ovule assembly was used for the downstream analyses. Cell-type specific expression patterns. The top 50 most abundantly expressed LCM egg transcripts accounted for 12.7% of total LCM egg reads and included constitutively expressed transcripts from mitochondrion, chloroplast, and ribosomal protein genes (Supplementary Table S2). Among the 165,998 transcripts expressed in the LCM egg, 91,730 (55.26%) had hits in the NCBI nt database (e-value < 1e−10), and 71,631 of those (78.1%) to a close relative, Setaria italica (foxtail millet), consistent with the known evolutionary relationship between these species [32][33][34] . To further check the fidelity of our transcriptome data, the expression of egg cell-specific gene EC1 (egg cell 1) 35 , synergid predominant gene MYB98 36 and a previously experimentally verified parthenogenesis gene ASGR-BBM-Like 12 was examined. Three potential EC1 orthologs (EC1.2-like, EC1.3 and EC1.4) were expressed in all parthenogenetic and sexual LCM egg libraries (Fig. 2). RNA in situ hybridization experiments using a ssRNA probe designed from the CcEC1.3 transcript (TRINITY_DN34886_c2_g1_i5) further confirmed its egg cell expression specificity in both sexual and apomictic ovary sections (Fig. 3A,B). MYB98, identified as one of the top five synergid-enriched transcripts in rice [Log 2 (synergid/egg cell) = 7.23 37 ], was detected at extremely low levels in the LCM egg reads, showing that egg cells likely were preferentially captured during LCM or that synergids in the egg apparatus from apomicts differed in their gene expression pattern from those in sexuals (Fig. 2). CcASGR-BBM-like expression was detected in all parthenogenetic LCM egg libraries, and was completely absent from the sexual LCM egg libraries, indicating correctly captured eggs with or without parthenogenesis expression. RNA in situ hybridization experiments with an ASGR-BBML ssRNA probe also confirmed expression specifically in the parthenogenetic egg cell (Fig. 3C,D). The expression patterns of EC1, MYB98-like and CcASGR-BBM-like genes in our data, based on TPM and RNA in situs, correlate with previously published expression patterns 12,37 and support the highly enriched egg-cell specificity of our LCM egg reads. Differentially expressed embryonic transcription factors reveal distinctions between parthenogenetic and sexual eggs.
The expression profiles across all samples were clustered through principal component analyses with excellent correlation among biological replicates clustering tightly according to genotype and tissue type (Fig. 4). Pairwise comparisons of parthenogenetic LCM egg reads to sexual LCM egg reads were performed to identify differentially expressed genes (DEGs) as those with log 2 FC > 2, false discovery rate (FDR) < 0.05 using DESeq2 38 . Of the 4,625 differentially expressed Trinity transcripts (Supplementary Table S3), 2,571 were up-regulated in the parthenogenetic egg, illustrated in a heat map that includes all the differentially expressed transcripts in parthenogenetic eggs relative to sexual eggs (Fig. 5). Several embryogenesis-related transcription factors were significantly expressed in the parthenogenetic eggs compared to sexual eggs (Fig. 6). Parthenogenetic up-regulated DEGs were further subjected to gene ontology enrichment analyses which identified 175 over-represented GO terms (Supplementary Table S4) including terms such as embryo development (GO:0009790), reproductive process (GO:0022414), transcription factor activity and transcription factor binding (GO:0000989). In a similar enrichment of GO-terms for up-regulated genes in a comparison of parthenogenetic eggs from Boechera and sexual eggs from Arabidopsis 39 , the only category in common with our results was for regulation of transcription, DNA-templated (GO:0006355). This difference could be attributed to developmental differences between the apomictic eudicot Boechera and the apomictic monocot Cenchrus where parthenogenesis in Boechera is repressed in the absence of central cell fertilization 40 . To assess major regulatory distinctions between parthenogenetic and sexual egg transcripts, LCM egg plus ovule transcripts were subjected to a BLASTX (e-value < 1e-10) search against the plantTFDB (Plant Transcription Factor Database 41 ). The search identified 72 potential transcription factors spanning 27 families de novo expressed in parthenogenetic eggs and 21 potential transcription factors spanning 11 families de novo expressed in sexual eggs (Fig. 7; Supplementary Table S5). The AP2 and WOX TF families were among 16 TF families that were exclusively de novo expressed in parthenogenetic eggs. These gene families have been shown to play major roles in embryogenesis and embryo development in monocots and eudicots 42,43 . Using microarrays, a fertilization-induced WOX gene was identified in rice zygotes 2-3 h post-pollination 44 . From RNA-seq data, a comparison of transcription factor families up-regulated in maize zygotes 12 h after pollination compared with maize eggs showed considerable overlap with those up-regulated in C. ciliaris parthenogenetic versus sexual eggs, namely AP2, bHLH, bZIP, C3H, GRAS, Homeobox (WOX), MADS, MYB, NAC, Trihelix, WRKY, YABBY, and ZF-HD 25 . At least for transcriptional regulation, this supports our hypothesis that parthenogenetic eggs resemble early stage zygotes that have initiated the maternal to zygotic transition 25,45 . As expected, an ASGR-BABY BOOM-like (ASGR-BBML) gene was de novo expressed in parthenogenetic eggs.
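Before continuing with the ASGR-BBML discussion below, a brief aside on the DEG selection rule stated above (log2 fold change > 2 at FDR < 0.05), applied to a results table exported from DESeq2 as sketched below. The column names follow DESeq2's defaults ("log2FoldChange", "padj"), but the file name is hypothetical and the symmetric threshold used for down-regulated transcripts is an assumption.

# Filtering an exported DESeq2 results table by the stated thresholds.
import pandas as pd

res = pd.read_csv("deseq2_parthenogenetic_vs_sexual.csv", index_col=0)  # hypothetical export

up_in_parthenogenetic = res[(res["log2FoldChange"] > 2) & (res["padj"] < 0.05)]
down_in_parthenogenetic = res[(res["log2FoldChange"] < -2) & (res["padj"] < 0.05)]

print(len(up_in_parthenogenetic), "up-regulated;", len(down_in_parthenogenetic), "down-regulated")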
As expected, an ASGR-BABY BOOM-like (ASGR-BBML) gene was de novo expressed in parthenogenetic eggs. Expression of this apomixis-locus derived gene induces parthenogenesis in unfertilized eggs of sexual pearl millet, rice and maize in the absence of pollination 12,13. Ectopic overexpression of Brassica BBM induces somatic embryogenesis in Arabidopsis 46, and the rice BBM1 (LOC_Os11g19060.1) transgene, expressed from an egg-cell-specific promoter, induces embryogenesis in rice eggs without fertilization 47. Native OsBBM1 expression in rice was detected in zygotes at 2.5 h after pollination (HAP) (corresponding to karyogamy) in a male-origin-specific manner, and ectopic expression in egg cells under an egg-cell-specific promoter induced parthenogenesis. Differential gene expression analysis also provided evidence for de novo expression of a C. ciliaris BBM2-like gene (TRINITY_DN31800_c0_g1_i1) in unfertilized parthenogenetic LCM eggs. The parthenogenesis gene CcASGR-BBML was de novo expressed at very low levels (average TPM = 3.7) in parthenogenetic eggs, while the CcBBM2-like gene was also de novo expressed in parthenogenetic eggs with relatively abundant expression. Based on the observation that BBM has a transcriptional auto-activation nature 48, we speculate that CcBBM2 expression may be due to CcASGR-BBML activation and CcBBM2 auto-activation. The parthenogenesis pathway may be initiated by CcASGR-BBML and proceed through its activation of CcBBM2, together with CcBBM2 auto-activation, to further promote cell proliferation and embryogenesis. C. ciliaris WUSCHEL-related homeobox 7 (WOX7) genes (TRINITY_DN42296_c3_g1_i3 and TRINITY_DN42296_c3_g1_i2) showed de novo and up-regulated expression in parthenogenetic eggs, respectively. Overexpression of maize BBM (orthologous to the C. ciliaris BBM2-like gene) and WUSCHEL2 in somatic tissues of maize, sorghum, rice and sugarcane greatly increases embryogenic potential, thereby enhancing transformation 49. It is possible that the de novo-expressed and up-regulated WUSCHEL from parthenogenetic eggs works in concert with BBM2 to promote parthenogenesis. It is noteworthy that BBM and WUSCHEL were de novo expressed in rice zygotes 2.5 h after pollination, at karyogamy 24, suggesting a role in promoting zygotic embryogenesis. WUSCHEL is thought to be required for maintaining meristem identity 50, and its overexpression can induce organogenesis and somatic embryogenesis in shoot and root tissues 42,51. Several genes overlapped between our DEGs and WUSCHEL-target genes 52 (Supplementary Table S6), but none of them plays a central role in embryogenesis, suggesting that some of the WUSCHEL-mediated transcriptional change may be ancillary to regulating parthenogenesis or that another set of embryogenesis-related targets is specific to CcWOX7. Besides CcBBM2 and CcWOX7, the embryonic factor CcAINTEGUMENTA-LIKE 5 (AIL5) gene (TRINITY_DN44016_c2_g3_i3 and TRINITY_DN44016_c2_g3_i4) showed de novo and up-regulated expression in parthenogenetic LCM eggs. Ectopic expression of Arabidopsis AIL5 can induce somatic embryo formation in Arabidopsis 53. The identification in parthenogenetic eggs of up-regulated and de novo expressed embryonic transcription factors that previously have been shown to function in parthenogenesis, somatic embryogenesis, or zygotic embryogenesis demonstrated the utility of our data and confirmed that embryogenic potential is one of the major functional distinctions between parthenogenetic and sexual eggs. Transcriptional change in parthenogenetic eggs may confer the core parthenogenesis pathway.
Based on prior research showing that the transcription factor ASGR-BBML is the only experimentally verified apomict-derived parthenogenesis gene 12,13, and that BBM transcription factors are known to function with other proteins to control cell proliferation and somatic embryogenesis 48,54,55, we hypothesize that the core natural parthenogenesis pathway initiated by ASGR-BBML may advance through interactions with other TFs followed by up- or down-regulation of their target genes. Candidate BBM-target genes that were directly activated by BBM expression in Arabidopsis seedlings, and those with DNA sites bound by BBM in Arabidopsis somatic embryos identified through chromatin immunoprecipitation sequencing (ChIP-seq) 48,54,55, were examined. Overlap between these previously reported genes and our differentially expressed genes was found. Putative orthologs were identified through BLASTN description annotation (NCBI nt database) similarities and/or sequence similarities through BLASTX searches using full-length contigs (their corresponding best hits in the NCBI nt database) against Araport11_genes.201606.pep (1e−5) (Supplementary Table S7). These included BBM and AIL5 described above. In addition, potential orthologs (TRINITY_DN34773_c0_g1_i2, nuclear transcription factor Y subunit A-1, and TRINITY_DN30545_c0_g1_i1, nuclear transcription factor Y subunit A-10) of CcNUCLEAR TRANSCRIPTION FACTOR Y SUBUNIT A-9, similar to another BBM target gene thought to play a role in inducing embryogenesis when overexpressed in Arabidopsis 56, were up-regulated in parthenogenetic eggs, suggesting that BBM may also interact with NF-YA to control embryogenesis. Apart from the parthenogenetic egg up-regulated transcripts with similarity to BBM-target genes that correspond to genes with known function to induce embryogenesis, those that may play roles in transcription, signaling, protein-protein interaction and cytoskeleton organization functioning in embryogenesis also were identified; for example, two members of the well-studied ACTIN DEPOLYMERIZING FACTOR (ADF) gene family were among them. Histone modification and chromatin remodeling factors are differentially expressed in parthenogenetic vs sexual eggs. The chromosomal regions associated with apomixis in C. ciliaris are largely heterochromatic 8, suggesting potential for epigenetic regulation. Comparative small RNA-seq between sexual and apomictic spikelets of Paspalum notatum provided evidence that RNA-dependent regulation of auxin pathways may be important for the expression of apomixis 20. A comparative transcriptomics study between sexual and apomictic ovule nucellus at a pre-meiotic developmental stage in Hypericum perforatum L. showed that RNA-mediated DNA methylation, histone and chromatin modifications were associated with aposporous apomixis expression 19. Moreover, a methylation status analysis of the apomixis-specific region in Paspalum spp. suggests a possible epigenetic control of parthenogenesis, where the factors controlling repression of parthenogenesis might be inactivated in apomictic Paspalum by DNA methylation 58. Chromatin states are mainly determined via two mechanisms: covalent modifications of histones and DNA methylation 59. Major histone modification events can be categorized into acetylation, methylation, phosphorylation and ubiquitination 60, which affect chromatin states resulting in activation or repression of gene expression 61-63.
In plants, studies have shown that histone acetylation/deacetylation and histone methylation play a fundamental role in regulating plant development. Among DEGs between parthenogenetic and sexual eggs, multiple histone deacetylase genes (TRINITY_DN41443_c1_g1_i1, TRINITY_DN39020_c3_g1_i2, TRINITY_DN37516_c2_g2_i6, and TRINITY_DN42464_c1_g1_i7) were exclusively up-regulated in parthenogenetic eggs, while a single putative MYST histone acetyltransferase gene (TRINITY_DN38730_c1_g1_i7) was up-regulated in sexual eggs. Hypoacetylation is known to be associated with chromatin condensation and transcriptional suppression 63, which suggests that some genomic regions in parthenogenetic eggs are hypoacetylated and genes within those regions may be transcriptionally silent. Alternatively, evidence is emerging in plants for a balance of acetylation/deacetylation for gene activation 64. The short fruit phenotype in cucumber results from a mutation in Histone deacetylase complex1 (HDC1; SF2 in cucumber), a gene whose product interacts with multiple histone deacetylases to enhance expression of cell cycle, DNA replication, and chromatin assembly genes and promote cell division, but also to repress expression of suppressors of auxin, gibberellin, and cytokinin biosynthesis and responses. Limited histone 3 acetylation in gene bodies and properly acetylated promoters and enhancers were shown to promote transcriptional elongation, at least in mammalian cells 65. Another potential role for histone deacetylases is to allow methylation of the H3K9 residue 66,67 and interaction with DNA methyltransferase to repress gene expression 68. Multiple histone methylation factors were differentially expressed, most being up-regulated in parthenogenetic eggs. Among these were Cc histone-lysine N-methyltransferase ATXR4 (TRINITY_DN34478_c0_g1_i3), Cc histone-lysine N-methyltransferase TRX1 (TRINITY_DN46041_c2_g1_i2), CcH3 lysine-9 specific SUVH1 (TRINITY_DN42267_c0_g1_i1), and CcH3 lysine-9 specific SUVH4 (KRYPTONITE) (TRINITY_DN46013_c1_g1_i6). SUVH4 encodes a protein that methylates H3K9, is required for the maintenance of cytosine methylation in the CpNpG context, and represses retrotransposon activity in Arabidopsis 69. Another parthenogenetic up-regulated histone-lysine N-methyltransferase gene, CcH3 lysine-9 specific SUVH1, shows similarity with a gene in Arabidopsis that is associated with heterochromatic H3K9 dimethylation and may play a role in heterochromatin gene silencing 70. Apart from histone modification, DNA methylation is also involved in determining chromatin states. One of the best characterized classes of DNA methylation genes in Arabidopsis, DOMAINS REARRANGED METHYLTRANSFERASE2 (DRM2), responsible for de novo DNA methylation in all sequence contexts and mediation of transgene silencing 71, was represented by one CcDRM2 gene (TRINITY_DN36171_c0_g1_i3) de novo expressed in parthenogenetic eggs. Another CcDRM2 gene (TRINITY_DN46376_c0_g4_i1) was up-regulated in sexual eggs. The de novo expression of one CcDRM2 gene in parthenogenetic eggs and the up-regulation of another in sexual eggs implies that increased de novo DNA methylation may be common to both parthenogenetic eggs with a parthenogenetic fate and sexual eggs with an embryogenesis fate upon fertilization. This is consistent with the observation that embryogenesis is characterized by increased de novo DNA methylation 72.
However, it is not known if these two CcDRM2 genes function somehow to differentiate sexual and parthenogenetic egg cell fate through de novo DNA methylation. Hypoacetylation and H3K9 methylation activities suggest enhancement of repressive chromatin states in parthenogenetic eggs even though they are transitioning to an active stage of cell division. Specific chromatin regions may be repressed in parthenogenetic eggs while others are derepressed in order to release totipotency and achieve competency to initiate parthenogenesis. Day 0 unpollinated parthenogenetic eggs are biologically active in terms of having the capacity to form embryos two days after anthesis, and are transcriptionally active as indicated by GO enrichment analyses, reflecting the active side of chromatin states. Several Jumonji-like proteins were up-regulated in parthenogenetic eggs, including JMJ25-like (TRINITY_DN44176_c1_g1_i3; TRINITY_DN46670_c1_g4_i1), which demethylates H3K9, regulating gene expression through epigenetic modifications 73,74. Histone 3 lysine 9 methylation homeostasis may be regulated by the interplay of Jumonji and SUVH proteins. Histone 3 lysine 4 methylation, positively associated with actively transcribed genes 75, was indicated by a COMPASS-like H3K4 histone methylase component WDR5B gene and an H3K4 histone-lysine N-methyltransferase CcTRX1 gene, both up-regulated in parthenogenetic eggs. Similar genes were shown to activate flowering under long day conditions 76,77. A rice ortholog was shown to promote flowering by interacting with a DNA-binding C2H2 zinc finger protein, SIP1, to activate EARLY HEADING DATE 1, a B-type response regulator 78. Although the histone-lysine N-methyltransferase ATXR4 was up-regulated in parthenogenetic eggs, no direct evidence is available to infer its function. In Arabidopsis, ATX1, ATX2, ATXR3, and ATXR7 positively regulate FLC via H3K4 methylation 79-82, while ATXR5 and ATXR6 control heterochromatin condensation and heterochromatic element silencing via methylation of H3K27 83. Taken together, the differential expression of histone deacetylases and histone methylases, as well as a de novo expressed DNA methyltransferase, is likely to both repress and activate a spectrum of genes that characterize parthenogenetic eggs assuming a parthenogenesis fate. Cell cycle. The idealized cell cycle comprises four successive phases: G1 (Gap 1), S (Synthesis), G2 (Gap 2) and M (Mitosis), through which parthenogenetic egg cells must traverse as they become competent for cell division. The expression profiles of some key cell-cycle regulators were examined in an attempt to identify the cell cycle stages of Day 0 parthenogenetic and sexual egg cells. A CcBREAST CANCER SUSCEPTIBILITY 1 (BRCA1) transcript (TRINITY_DN29461_c0_g1_i1) was up-regulated in parthenogenetic eggs. This gene plays a role in the G2/M checkpoint, particularly upon detection of DNA damage, limiting the proliferation of cells containing replication defects 84,85. Similarly, another gene, CcSUPPRESSOR OF GAMMA RESPONSE 1 (SOG1) (TRINITY_DN39490_c0_g1_i2), which responds to DNA damage signals and whose phosphorylated form controls several cell cycle and DNA damage repair genes 86, was up-regulated in parthenogenetic eggs. BRCA1 has been identified as a target of SOG1, but was uniquely up-regulated in parthenogenetic eggs compared with other common targets.
Since the expression of BRCA1 was barely detectable in the sexual eggs, the parthenogenetic egg might have reached the G2/M checkpoint while the sexual egg remained at a stage prior to G2. Aside from being a potential cell cycle marker, it is tempting to speculate that an apomict undergoing many cycles of clonal reproduction through seeds may have adapted gene expression to ensure DNA repair in somatic cell-derived eggs destined to undergo parthenogenesis. Auxin and calcium. Auxin plays a role in virtually every aspect of plant growth and development, including early embryogenesis 87 and gamete specification in the female gametophyte 88. In relation to apomixis, auxin treatment of the inflorescence of apomictic Poa pratensis was used to rapidly and reliably test whether the egg cell was competent for parthenogenesis, showing that exogenously supplied auxin promoted the expression of apomictic parthenogenesis 89. Among DEGs, an auxin receptor CcTRANSPORT INHIBITOR RESPONSE 1 (TIR1) gene (TRINITY_DN40331_c0_g1_i1) 90-92 was up-regulated in parthenogenetic eggs, indicating that the parthenogenetic egg may differ from the sexual egg in auxin perception. Interestingly, the auxin negative regulator CcAUX/IAA genes (IAA17, TRINITY_DN37013_c0_g1_i2; IAA2, TRINITY_DN44579_c0_g3_i1; IAA31, TRINITY_DN39140_c0_g1_i3; IAA29, TRINITY_DN32631_c0_g1_i2) were exclusively up-regulated in parthenogenetic eggs. AUX/IAAs are thought to interact with SCF TIR1, leading to ubiquitination and degradation through the 26S proteasome in the presence of high auxin concentrations, derepressing auxin response factors (ARFs) to allow ARFs to transcriptionally activate or repress downstream auxin responsive genes 93,94. Interestingly, 10 BTB/POZ-domain transcripts were up-regulated in parthenogenetic eggs versus only one in sexual eggs, suggesting an active proteasomal ubiquitin-mediated degradation pathway in parthenogenetic eggs. At least one of these BTB/POZ-domain transcripts contains an ankyrin repeat domain similar to BLADE ON PETIOLE (BOP) lateral organ boundary genes 95. The ankyrin repeat interacts with basic Leu repeat transcription factors 96, for which six transcripts were up-regulated in parthenogenetic eggs. These up-regulated genes suggest a regulatory network in parthenogenetic eggs involving a dynamic transcriptional and protein turnover response to hormones shaping growth and development. Induction of auxin pathway genes in maize zygotes 12 h after pollination was also observed 25. Crosstalk between auxin and ethylene signaling pathways leading to cooperative regulation of developmental processes has been documented 97. Calcium ionophore is reported to stimulate parthenogenesis in mouse oocytes 98, but calcium alone is not sufficient to induce parthenogenesis in plants. In plants, Ca2+ changes in egg cells are widely thought to be associated with successful fertilization and egg activation 99-101. In our data, intracellular calcium receptor protein genes, calmodulin (TRINITY_DN44663_c0_g1_i2; TRINITY_DN33744_c3_g1_i1; TRINITY_DN34327_c1_g1_i3) 102, and their effector calcium/calmodulin-dependent protein kinases (TRINITY_DN44558_c1_g2_i1; TRINITY_DN40595_c1_g3_i1) 103 were exclusively up-regulated in parthenogenetic eggs compared to the sexual eggs, suggesting the presence of an internal Ca2+ increase and a calcium-triggered signaling pathway in parthenogenetic eggs. This may reflect some aspects of parthenogenetic egg activation, but its correlation to parthenogenesis is still unknown.
With the differentially expressed genes and their annotations generated in this study, the potential core pathways diagnostic of natural parthenogenesis in an apomictic grass species, and pathways that may play an ancillary role in promoting parthenogenesis, were discussed based on sequence similarities to previously identified genes and pathways. However, to experimentally test our hypotheses, chromatin immunoprecipitation sequencing (ChIP-seq) on transcription factor-bound DNA would be essential to study target genes or protein-DNA interactions. Yeast two-hybrid experiments may be needed to experimentally investigate physical interactions between proteins encoded by differentially expressed genes or other embryogenic factors identified in this study. Bisulfite- or ATAC (Assay for Transposase-Accessible Chromatin)-sequencing could potentially be used to ultimately check the chromatin state differences between sexual and parthenogenetic egg cells, but both would be challenging at the single cell level in plants. Progress recently has been made in applying single nucleus Assay for Transposase Accessible Chromatin sequencing (sNucATAC-seq) technologies to decipher cell-type-specific patterns of chromatin accessibility in Arabidopsis roots 104. As an initial step, this study has provided numerous sequence and computational analyses for sexual and parthenogenetic eggs, which are valuable for developing testable hypotheses to further explore natural parthenogenesis in grasses. Materials and methods Plant material and floret collection. The C. ciliaris plants used as source materials were vegetatively propagated tillers of the sexual genotype B-2s and the natural apomictic genotype B-12-9 105. Plants were grown in the greenhouse (24-30 °C) for head collection from July to September in 2016. Heads were bagged prior to stigma exsertion, and stigmas were manually removed with tweezers upon appearance. The heads remained bagged until the day of anther exsertion (anthesis), at which time florets were collected from 8:00-11:00 am with fresh anthers half or fully exserted, immediately fixed in Farmer's fixative (ethanol:glacial acetic acid, 3:1) 106 and stored at 4 °C overnight. Fixed florets were then transferred to and stored in 70% ethanol (DEPC-treated water) at 4 °C. Ovary clearing and microscopic observation. For ovary clearing, stigma-free heads were collected on the day of (Day 0) and two days after (Day 2) anthesis and immediately fixed in Farmer's fixative. Ovaries were dissected from florets and dehydrated in a graded ethanol series for two hours in each step as follows: 70% ethanol, 85% ethanol, and 100% ethanol, followed by another 100% ethanol incubation overnight. Ovaries were transferred to a methyl salicylate series for 2 h in each step as follows: ethanol:methyl salicylate (3:1), ethanol:methyl salicylate (1:1), ethanol:methyl salicylate (1:3), followed by a 100% methyl salicylate incubation overnight 107. Cleared ovaries were observed under a DIC (differential interference contrast) microscope (Supplementary Fig. S1). Ovule collection, RNA extraction, cDNA synthesis, amplification, library construction and sequencing. Intact ovaries were dissected from fixed florets on ice using fine tweezers to avoid physical damage. Ovules were subsequently isolated by pulling apart the bifurcated stigma to separate the ovary wall from the ovaries. Fifty ovules per sample were collected. Ovule RNA was extracted using the RNeasy Plant Mini Kit (QIAGEN) following the manufacturer's protocol.
The quantity and quality of RNA were checked using the Qubit 2.0 Fluorometer RNA assay (Invitrogen) and an Agilent 2100 Bioanalyzer with an RNA 6000 Nano kit (Agilent Technologies), respectively. For each sample, 500 ng of total RNA was used for poly-A capture, cDNA synthesis and amplification using the KAPA Stranded RNA-seq Kit. Libraries were quality checked with the Qubit 2.0 Fluorometer dsDNA high sensitivity assay (Invitrogen) and a Fragment Analyzer Automated CE System (Agilent Technologies), and were sequenced on an Illumina NextSeq (300 cycles) PE150 Mid Output flow cell on which three biological replicates of the two genotypes were pooled. Tissue preparation for laser capture microdissection. Intact ovaries were dissected on ice from fixed florets using fine tweezers to avoid physical damage and were dehydrated in a graded ethanol series on ice with gentle shaking for 20 min in each step as follows: 80% ethanol, 90% ethanol, and three changes of 100% ethanol. Ovaries were stored overnight in 100% ethanol at 4 °C, and transferred the next day to a xylene series with gentle shaking for 20 min in each step as follows: ethanol:xylene (3:1), ethanol:xylene (1:1), ethanol:xylene (1:3), and three changes of 100% xylene. Xylene-cleared ovaries were transferred to a Paraplast series in an incubator (54 °C) with shaking for 9 h in each step as follows: a xylene and Paraplast (1:1) mixture and seven changes of 100% Paraplast. Paraplast-infiltrated ovaries were embedded in Paraplast blocks using a tissue embedding center, and 8-µm sections were cut on a rotary microtome (Leica RM2145, Germany). Sections were floated on methanol and mounted on metal frame PET foil slides (Leica) that had been UV-irradiated (DNA transfer lamp, Fotodyne) for 30 min. Slides were heated on a slide warmer at 42 °C overnight to air-dry and stretch the Paraplast ribbons. Prior to egg identification and laser capture microdissection, PET foil slides with ovary sections were de-paraffinized in xylene with two changes of 5 min each with gentle shaking and air-dried for 1 h in a fume hood. A few ovary sections were stained with Safranin and FastGreen to check tissue quality, and the egg cell was identified under a microscope (Fig. 1). Unstained sections were used for LCM. Laser capture microdissection and egg cell collection. A Leica LMD6000 laser microdissection system was used to capture the egg cell from tissue sections prepared as above. Before loading capped PCR tubes into the collection device, 8 μl of RNAlater was pipetted into the lid of the cap. Microdissections were then performed by drawing a line or circle around the egg along which the laser beam cut (Supplementary Fig. S2), with the laser settings as follows: Power 45, Aperture 4, Speed 9 and Specimen balance 15. After LCM, the egg sections fell into the lid by gravity, were preserved by the RNAlater, and were stored at −80 °C until use. Around 1000 egg sections (from 500 ovules) per biological replicate (4 B-2s and 4 B-12-9) were collected. Egg RNA extraction, cDNA synthesis, amplification, library construction and sequencing. Data preprocessing. Both ovule and egg raw reads were cleaned by removing adapter sequences, overrepresented technical sequences detected by FASTQC 108, and sequences of poor quality using Trimmomatic 109. For LCM egg sequencing data, 5-7 bases from the 5′ and 3′ ends of the trimmed reads were further trimmed using TrimGalore 110 to avoid potential sequence bias.
Comprehensive rRNA removal was done using SortMeRNA 111. To further identify and remove potential biological contaminants, a de novo assembly was constructed with cleaned reads from the eight LCM egg libraries using Trinity 112,113. Trinity contigs were first annotated by a BLASTN 114 search against the NCBI nt database (e-value cutoff of 1e−10). Using these annotations, a custom non-plant contamination database was constructed by extracting the aligned portion of the subject sequences from the contigs that hit (< 1e−10) bacterial, fungal and animal sequences stored in the NCBI nt database. The non-plant contamination database was indexed as the reference to clean each library until no significant amount of non-plant hits was seen in the final assembly. All cleaned reads were used for downstream analyses. Trinotate annotation and gene ontology enrichment analyses. Trinity transcripts were annotated with Trinotate through a BLASTX 114 search (e-value = 1e−5) against a comprehensive protein database comprised of the Swiss-Prot 115 and UniRef90 protein databases. Putative coding regions within each Trinity transcript were predicted using TransDecoder (http://transdecoder.github.io), and the predicted coding sequences were further annotated through BLASTP (e-value = 1e−5) alignment against the comprehensive protein database mentioned above and searched for protein domains using HMMER (http://hmmer.org/) and Pfam 116. SignalP 117 was used to predict potential signal peptides in the transcripts. All results were integrated by Trinotate, stored in an SQLite database, and then reported as a tab-delimited excel file (Supplementary Table S8). The functional enrichment analyses were done on the significantly differentially expressed transcript set against the whole expressed transcript set using Trinotate-assigned GO annotations and GOseq 118, and enriched GO terms of parthenogenetic up-regulated transcripts were reported as a tab-delimited file. BUSCO completeness analyses. The completeness of the transcriptome assembly was examined by subjecting the Trinity.fasta file to BUSCO 31 analyses using the command run_busco -i Trinity.fasta -l liliopsida_odb10 -m tran. Transcript abundance estimation and differential expression analysis. Kallisto was used for transcript abundance estimation. RNA in situ. Tissue for RNA in situs was prepared and sectioned according to the LCM protocol. Sections were floated on methanol and mounted on microscope slides (Fisherbrand Probe On Plus). Slides were heated on a slide warmer at 42 °C overnight. Slides were de-paraffinized in xylene for 10 min twice, followed by a hydration series: 2 times 100% ethanol, 95% ethanol, 70% ethanol, 50% ethanol, 30% ethanol, and 2 times DEPC-treated water (2 min each). Rehydrated slides were incubated in 0.2 M HCl for 10 min, and then washed in DEPC-treated water, 2 times 2× SSC (saline sodium citrate), then DEPC-treated water, 5 min each. Slides were then treated with a mixture of 100 mM Tris pH 8.0, 50 mM EDTA pH 8.0 and 1 μg/mL proteinase K at 37 °C for 20 min, followed by a wash in PBS (phosphate buffered saline) for 2 min. Slides were incubated in 2 mg/mL glycine in PBS for 2 min to block proteinase K, and washed in PBS for 30 s twice. Slides were fixed in 4% formaldehyde in PBS for 10 min and washed in PBS for 5 min twice, then dehydrated as follows: 2 times DEPC-treated water, 30% ethanol, 50% ethanol, 70% ethanol, 95% ethanol and 2 times 100% ethanol (2 min each). Slides were dried for 15 min in the fume hood. Hybridization.
Hybridization buffer was made with each of the following components at a final concentration of: 50% HiDi formamide (Applied Biosystems), 100 μg/ml yeast tRNA (Invitrogen), 0.75% blocking reagent (Roche), 2.5% dextran sulfate (Millipore), 2.5 mM EDTA, and DEPC-treated water to make up the total volume. Probe (0.1 μl) was added to 10 μl of hybridization buffer and heated at 80 °C for 3 min. 190 μl of hybridization buffer was added to the 10 μl pre-denatured probe-hybridization buffer mixture and kept at 55 °C until use. For each slide with pretreated ovary sections, 200 μl of the pre-denatured probe-hybridization mixture was added and sealed with another microscope slide on top. The slide pairs were incubated in a humidified environment at 50 °C overnight. Data availability The data that support the findings of this study are available within the paper and its supplementary materials published online.
v3-fos-license
2021-07-26T00:05:44.079Z
2021-06-12T00:00:00.000
236236836
{ "extfieldsofstudy": [ "Geology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/ace/2021/9640521.pdf", "pdf_hash": "fc1949e0f757760a459930fa99def5d261ca0202", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:292", "s2fieldsofstudy": [ "Engineering" ], "sha1": "1df94a321717671513026234855da3bba61a9e72", "year": 2021 }
pes2o/s2orc
Life-Cycle Seismic Fragility Assessment of Existing RC Bridges Subject to Chloride-Induced Corrosion in Marine Environment
Bridges in a marine environment have been suffering from chloride attack for a long period of time. Because different sections of piers may be exposed to different conditions, chloride-induced corrosion not only affects the scale of the deterioration process but also significantly modifies over time the damage propagation mechanisms and the seismic damage distribution. In order to investigate the seismic damage of existing RC bridges subject to spatial chloride-induced corrosion in a marine environment, the Duracrete model is applied to determine the corrosion initiation time of reinforcing steels under different exposure conditions, and the degradation models of reinforcing steels, confined concrete, and unconfined concrete are obtained based on previous investigations. According to the seismic fragility assessment method, a damage assessment approach for existing RC bridges subject to spatial chloride-induced corrosion in a marine environment is presented. Moreover, a case study of a bridge under two kinds of water regions investigated the influence of spatial chloride-induced corrosion on the seismic damage of piers and other components. The results show that spatial chloride-induced corrosion may result in the section at the low water level becoming more vulnerable than the adjacent sections and in the alteration of the seismic damage distribution of piers. The corrosion of a pier will increase its own seismic damage probability, whereas it will result in a reduction of the seismic damage probability of other components. Moreover, the alteration of the seismic damage distribution of piers will amplify this effect. Because spatial chloride-induced corrosion of piers may alter the yield sequence of cross sections, it then affects the seismic performance assessment of piers. A method to determine the evolution probability of the yield sequence of corroded piers is proposed at last. From the results, the evolution probability of the yield sequence of piers in the longitudinal direction depends on the relationship between the height of piers and the submerged zone. Moreover, the height of piers, the submerged zone, and the tidal zone have a common influence on the evolution of the yield sequence of piers in the transversal direction. Introduction In the past decades, many coastal bridges have been built in different countries with long coastlines to meet the growing requirements of fast transport and economic development. Overall, most of these bridges are reinforced concrete structures and are located in severe marine environments. Under such environments, chloride-induced corrosion is a major environmental stressor for RC bridges, because it may result in the decrease of the effective cross-sectional area of the reinforcing steels and the deterioration of the mechanical properties of reinforcing steels and concrete. Obviously, the performance of coastal bridges is expected to be significantly affected by chloride-induced corrosion. Therefore, it is of interest to investigate the effects of chloride-induced corrosion on the performance of aging RC bridges in marine environments and to improve the performance level of these bridges with the corrosion effects. On the other hand, chloride-induced corrosion may also result in the decrease of the seismic performance of aging RC bridges; thereby, bridges exhibit different seismic damage probabilities as time increases.
In this respect, many studies have focused on the seismic damage assessment of RC bridges with chloride-induced corrosion. Choe et al. [1] developed probabilistic drift and shear force capacity models for corroding reinforced concrete columns to predict the service life and life-cycle cost of the columns. Kumar et al. [2] assessed the seismic damage probability of aging bridges considering cumulative seismic damage and chloride-induced corrosion. Alipour et al. [3] investigated the effects of reinforcement corrosion on the seismic damage probability of aging bridges in California with different structural parameters. Thanapol et al. [4] developed seismic fragility curves of deteriorating piers through field instrumentation of corrosion measurements. Cui et al. [5] applied an improved deterioration model of reinforcing steel to carry out the seismic fragility analysis of reinforced concrete bridges with marine chloride-induced corrosion. Panchireddi and Ghosh [6] proposed an analytical strategy to consider the deterioration of the damaged bridge through updating the pier section properties. Zhang et al. [7] proposed a seismic risk assessment method for corroded RC bridges with shear-critical columns. Crespi [8] presented a procedure for the collapse mechanism evaluation of existing reinforced concrete motorway bridges under horizontal loads. Overall, the previous studies have enriched the knowledge of the seismic damage assessment of aging RC bridges. However, only uniform exposure conditions were considered in these studies when performing the seismic damage assessment of aging RC bridges with corrosion effects. In fact, exposure conditions exhibit significant spatial variation along the pier direction for many coastal bridges, and a nonuniform degradation phenomenon occurs in the corroded piers, resulting in a nonuniform distribution of seismic damage of piers. Obviously, these studies may be inappropriate and/or inadequate to completely investigate the probabilistic seismic damage of aging RC bridges and to reveal the effects of spatial chloride-induced corrosion. On the other hand, knowledge of the plastic hinges of piers will contribute to the ductile seismic design of RC bridges. Recently, Yuan et al. [9] investigated the damage characteristics of coastal bridge piers suffering nonuniform corrosion by shaking table tests. However, the yield characteristics of piers subject to spatial chloride-induced corrosion have not been comprehensively investigated in previous studies, and the evolution mechanism of the yield sequence of corroded piers has not been clarified. In this study, the probabilistic seismic damage assessment of aging RC bridges subject to spatial chloride-induced corrosion in marine environments is presented. Overall, the major objectives of this study are threefold: (1) to develop a probabilistic seismic damage assessment procedure for aging RC bridges suffering spatial chloride-induced corrosion, (2) to reveal the effects of spatial chloride-induced corrosion on the seismic damage characteristics of piers and other components, and (3) to discuss the evolution probability of the yield sequence of piers subject to spatial chloride-induced corrosion. The paper is organized as follows: In Section 2, we describe the corrosion process of reinforcing steels and the degradation properties of various materials under different marine exposure conditions.
Section 3 presents the probabilistic seismic damage assessment procedure for aging RC bridges subject to spatial chloride-induced corrosion. Subsequently, details of the case study bridge are described in Section 4, and the finite element models are developed. In Section 5, the corrosion level and seismic capacity of RC piers in different exposure conditions are investigated. Moreover, the seismic damage of piers and other components is discussed in Section 6. Furthermore, a method to determine the evolution probability of the yield sequence of piers subject to spatial chloride-induced corrosion is proposed in Section 7. A brief summary of the results is presented in Section 8. Chloride-Induced Corrosion Effects Coastal bridges are often exposed to high concentrations of chloride ions. The concentration gradient between the exposed surface and the pore solution of the cement makes the chloride ions penetrate from the external environment through the concrete cover and reach the surface of the reinforcing steels. Moreover, the chloride ions decrease the pH in the concrete and break down the passive film of the reinforcing steels, resulting in the corrosion of reinforcing steels and the damage of concrete. In this section, the corrosion processes of reinforcing steels and the deterioration mechanism of RC members are presented. Corrosion Initiation Time. The corrosion initiation time is an important parameter in the chloride-induced corrosion process of RC members, which can be defined as the time when the chloride ion concentration near the reinforcing steels reaches a threshold concentration C_cr. To calculate the corrosion initiation time, it is necessary to describe the diffusion process of chloride ions and determine the chloride ion concentration at different depths of RC members. In this respect, Duracrete provided a probabilistic model to predict the chloride concentration in the concrete by taking into account the time-dependent characteristics of chloride diffusion, as well as the different types of uncertainties associated with the modelling of these complex processes [10]. The chloride concentration at depth x after time t can be expressed as follows:
C(x, t) = C_cs [1 − erf(x / (2 sqrt(k_e k_t k_c D_0 (t_0/t)^n t)))], (1)
where erf(θ) = (2/√π) ∫_0^θ e^(−t²) dt is the error function; D_0 is the empirical diffusion coefficient; k_e is an environmental coefficient; k_t represents the influence of test methods on determining D_0; k_c is a coefficient that accounts for the influence of curing; t_0 is the reference period for D_0; n is the age factor; and C_cs is the chloride concentration at the concrete surface, which can be represented as
C_cs = A_cs (w/b) + ε_cs, (2)
where w/b is the water binder ratio and A_cs and ε_cs are the model parameters. If the cover depth of the reinforcing steels d_c is known, the corrosion initiation time can be determined as follows:
T_i = [d_c² / (4 k_e k_t k_c D_0 t_0^n) · (erf^(−1)(1 − C_cr/C_cs))^(−2)]^(1/(1−n)). (3)
For many coastal bridges, the bottom of the piers may be submerged in the water, whereas the middle and top of the piers are exposed to chloride dry-wet cycles and the atmospheric environment, respectively. The discrepancies of humidity, temperature, oxygen, and chloride concentration will cause different corrosion initiation times of reinforcing steels in the various marine exposure conditions. Therefore, the corrosion level of reinforcing steels is highly dependent on the type of exposure condition. Overall, four categories of exposure conditions are included in the Duracrete model: (a) submerged zone, (b) tidal zone, (c) splash zone, and (d) atmospheric zone. Table 1 summarizes the statistical parameters for the corrosion coefficients in the Duracrete model.
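For illustration, the sketch below evaluates the corrosion initiation time of equation (3) and propagates input uncertainty with a simple Monte Carlo loop, in the spirit of the probabilistic treatment used later in the paper. All numerical values and distribution choices in the sketch are placeholders for demonstration only; the study draws its statistical parameters from Table 1.

```python
# Hedged sketch of the Duracrete corrosion-initiation-time calculation
# (equation (3)).  Parameter values are illustrative placeholders, not the
# statistical parameters actually used in the study (see Table 1).
import numpy as np
from scipy.special import erfinv

def initiation_time(d_c, D0, ke, kt, kc, t0, n, C_cr, C_cs):
    """Corrosion initiation time T_i in years.

    d_c : cover depth (mm); D0 : reference diffusion coefficient (mm^2/yr);
    ke, kt, kc : environment/test/curing factors; t0 : reference period (yr);
    n : age factor; C_cr, C_cs : critical and surface chloride contents.
    """
    beta = erfinv(1.0 - C_cr / C_cs)
    return (d_c**2 / (4.0 * ke * kt * kc * D0 * t0**n) / beta**2) ** (1.0 / (1.0 - n))

# Simple Monte Carlo over (illustrative) input uncertainty.
rng = np.random.default_rng(0)
samples = initiation_time(
    d_c=50.0,
    D0=rng.lognormal(mean=np.log(200.0), sigma=0.3, size=10_000),
    ke=rng.normal(0.8, 0.1, size=10_000),
    kt=0.85, kc=1.0, t0=0.0767, n=0.35, C_cr=0.5, C_cs=2.5,
)
print(np.percentile(samples, [5, 50, 95]))   # spread of T_i across samples
```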
Corrosion Propagation. Generally, the corrosion of reinforcing steels can be divided into two types: uniform corrosion and pitting corrosion (Figure 1). The former is caused by carbonation, whereas the latter is caused by chloride penetration [11]. Therefore, pitting corrosion is considered in the corrosion analysis in this study. The classical model proposed by Val and Melchers approximately simplified the pit geometry into a quadrilateral form to consider the reduction of reinforcement area [12]. The time-dependent residual cross-sectional area of a reinforcing steel with pitting corrosion, A_res(t), can be represented in terms of Q_cor(t), the time-dependent percentage mass loss of the corroded reinforcing steels; A_0, the initial cross-sectional area of the reinforcing steels; and A_cor,p(t), the time-dependent pitting area of the reinforcing steels, which is calculated from the initial diameter of the reinforcing steels d_s0 and the time-dependent pitting depth P(t). The pitting depth can be expressed in terms of λ(t), the uniform corrosion rate, and R, the pitting factor, which represents the ratio of the maximum pit depth to the average depth considering uniform corrosion. A Gumbel (Extreme Value Type I) distribution can be applied to predict the pitting factor of reinforcing steels [13]. Consequently, the statistical parameters of R can be calculated as μ = μ_0 + ln(A_U/A_0)/α_0 and α = α_0, where μ_0 and α_0 are the location and scale parameters of the Gumbel distribution, respectively; A_0 is taken as the surface area of a reinforcing steel with 125 mm length and 8 mm diameter; and A_U is the surface area of reinforcing steels with other sizes. In theory, the uniform corrosion rate of reinforcing steels λ(t) is related to the corrosion current density. The corrosion current density reduces and approaches a constant level with the development of corrosion. Moreover, cracking of the unconfined concrete leads to easier ingress of chlorides, oxygen, and water, entailing a large continuous increase of the corrosion rate of the reinforcement after crack initiation and subsequent crack growth [14,15]. To fully consider these effects, an improved time-dependent uniform corrosion rate model was proposed by Cui et al. [5] based on the Vu and Stewart model [16], in which w/c is the water cement ratio, and T_cr and T_Wcr are the initial cracking time and the initiation time of severe cracking, respectively. The detailed calculation method is illustrated in [5]. Material Properties. As stated, chloride-induced corrosion will affect the effective sectional area and mechanical properties of reinforcing steels. Moreover, the expansive pressure localized at the interface between the reinforcing steels and the concrete can also result in cracking and spalling of the concrete cover. At the same time, the deterioration of the transverse reinforcement may reduce the lateral confinement of the core concrete, resulting in the decrease of the strength and ultimate strain of the confined concrete. To fully consider the overall performance of corroded RC members, the degradation properties of reinforcing steels, concrete cover, and confined concrete should be determined. Reinforcing Steels. Du et al. [17] proposed a linear strength reduction model as a function of the percentage mass loss Q_cor of corroded reinforcing steels:
f = (1 − β Q_cor) f_0, (13)
where f_0 and f are the strengths of the uncorroded and corroded reinforcing steels, respectively, and β is the coefficient of strength degradation, which is taken as 0.49 for the yield strength and 0.65 for the ultimate strength.
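A short sketch of two of the relations just described is given below: rescaling the Gumbel pitting factor R to a bar of arbitrary surface area, and the linear strength reduction of corroded reinforcement. The values of μ_0, α_0, the bar dimensions, and the mass-loss level are illustrative assumptions rather than the calibrated values used in the study, and Q_cor is assumed here to be expressed as a mass-loss fraction.

```python
# Hedged sketch: Gumbel rescaling of the pitting factor R and the linear
# strength reduction of corroded reinforcement.  All numbers are placeholders.
import numpy as np

def gumbel_params_for_bar(mu0, alpha0, A_U, A_0):
    """Location/scale-type parameters of R for surface area A_U, rescaled from
    the reference area A_0 (125 mm long, 8 mm diameter bar)."""
    mu = mu0 + np.log(A_U / A_0) / alpha0
    return mu, alpha0

def sample_pitting_factor(mu, alpha, size, rng):
    # Gumbel (Extreme Value Type I) with location mu and scale 1/alpha.
    return rng.gumbel(loc=mu, scale=1.0 / alpha, size=size)

def corroded_strength(f0, Q_cor, beta):
    """Linear reduction f = (1 - beta * Q_cor) * f0, Q_cor as a fraction."""
    return (1.0 - beta * Q_cor) * f0

rng = np.random.default_rng(1)
ref_area = np.pi * 8.0 * 125.0        # reference bar surface area, mm^2
bar_area = np.pi * 20.0 * 1000.0      # illustrative 20 mm bar, 1 m long
mu, alpha = gumbel_params_for_bar(mu0=5.6, alpha0=1.2, A_U=bar_area, A_0=ref_area)
R = sample_pitting_factor(mu, alpha, size=10_000, rng=rng)
fy = corroded_strength(f0=335.0, Q_cor=0.15, beta=0.49)   # 15% mass loss
print(R.mean(), fy)
```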
Concrete Cover. The reduction in the concrete cover strength f_c can be calculated as follows [18]:
f_c = f_c0 / (1 + K ε_1/ε_0), (14)
where f_c0 is the peak compressive strength of the undamaged concrete; K is a coefficient related to the roughness and diameter of the reinforcing steels, which can be taken as 0.1 for medium-diameter ribbed reinforcing steels [19]; ε_0 is the strain at peak stress in compression; and ε_1 is the average tensile strain of the cracked concrete perpendicular to the direction of stress, which can be calculated as follows:
ε_1 = n_bars κ_w (ΔA_s − ΔA_s0) / b_0, (15)
where b_0 is the width of the pristine cross section; n_bars is the number of longitudinal reinforcing bars in the compressed regions; κ_w is an empirical coefficient, which is taken as 0.0575 mm−1; ΔA_s is the area loss of the reinforcing steels; and ΔA_s0 is the critical area loss of reinforcing steels for cracking initiation, which can be evaluated as [20]
ΔA_s0 = A_s − A_s [1 − (δ/d_s0)(7.53 + 9.32 d_c/d_s0) × 10^(−3)]², (16)
where δ is the pitting concentration factor, which is taken as 4 to 8, and A_s is the area of the pristine cross section. Confined Concrete. The Mander stress-strain relationship is utilized to simulate the behaviour of the confined concrete after corrosion [21]. For a circular cross section, the confined strength f_cc′ and ultimate strain ε_cu of the core concrete are estimated as
f_cc′ = f_c0′ (−1.254 + 2.254 sqrt(1 + 7.94 f_l′/f_c0′) − 2 f_l′/f_c0′), with f_l′ = 0.5 K_e ρ_h f_yh, (17)
ε_cu = 0.004 + 1.4 ρ_h f_yh ε_u / f_cc′, (18)
where K_e is the effective confinement coefficient of the section; ρ_h is the residual volumetric ratio of the corroded transverse reinforcement; and f_yh and ε_u are the yield strength and ultimate strain of the corroded transverse reinforcement, respectively. Time-Dependent Fragility Method In this study, the analytical seismic fragility is applied to quantify the seismic damage probability of bridges. Fragility functions describe the conditional probability of a component or structure exceeding a specific damage state (DS) for a given ground motion intensity measure (IM) [22]. Considering the time-dependent effect, the damage probability of aging bridges at t years after construction can be described as follows:
P[DS | IM, t] = P[S_D(t) ≥ S_C|ds(t) | IM], (19)
where S_D(t) is the time-dependent structural seismic demand for the specified IM and S_C|ds(t) is the time-dependent structural seismic capacity corresponding to the given DS. For a specific service time, S_D(t) and S_C|ds(t) can be assumed to follow lognormal distributions. Therefore, the time-dependent seismic fragility functions take the following form:
P[DS | IM, t] = Φ(μ(t)/σ(t)), (20)
where μ(t) and σ(t) are the median estimate and standard deviation of ln[S_D(t)/S_C|ds(t)], respectively, and Φ(·) is the standard normal cumulative distribution function. Generally, μ(t) can be predicted by a power model using the least-squares method as follows:
μ(t) = a(t) + b(t) ln(IM), (21)
where a(t) and b(t) are the time-dependent regression coefficients. Moreover, the standard deviation σ(t) is determined as follows:
σ(t) = sqrt( Σ_i [y_i(t) − μ_i(t)]² / (N − 2) ), (22)
where y_i(t) and μ_i(t) are the actual and predicted values of ln[S_D(t)/S_C|ds(t)], respectively, and N − 2 represents the degrees of freedom of the simulations when the log-linear model is adopted in the probabilistic seismic demand analysis. Combining the above corrosion analysis method and the time-dependent seismic fragility method, we can perform the probabilistic seismic damage assessment of aging RC bridges subject to spatial chloride-induced corrosion.
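The sketch below illustrates how the fragility formulation in equations (20)-(22) can be evaluated in practice: regress ln(S_D/S_C|ds) on ln(IM), estimate the dispersion with N − 2 degrees of freedom, and evaluate the resulting fragility curve. The input arrays are synthetic placeholders standing in for the nonlinear time-history results of one component at one point in time, not data from the case study.

```python
# Compact sketch of the time-dependent fragility calculation in
# equations (20)-(22), using synthetic placeholder data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
im = rng.uniform(0.05, 1.5, size=100)                              # e.g. SA of each record (g)
ln_ratio = -1.2 + 1.5 * np.log(im) + rng.normal(0, 0.4, size=100)  # ln(S_D / S_C|ds)

# Least-squares fit of mu(t) = a(t) + b(t) * ln(IM)   -- equation (21)
b, a = np.polyfit(np.log(im), ln_ratio, deg=1)

# Dispersion with N - 2 degrees of freedom            -- equation (22)
resid = ln_ratio - (a + b * np.log(im))
sigma = np.sqrt(np.sum(resid**2) / (len(im) - 2))

# Fragility curve P[DS | IM, t] = Phi(mu(t)/sigma(t)) -- equation (20)
im_grid = np.linspace(0.05, 1.5, 50)
fragility = norm.cdf((a + b * np.log(im_grid)) / sigma)
print(fragility[::10])
```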
Overall, Figure 2 summarizes the analysis procedure, and the critical steps are as follows: (1) Corrosion Analysis. For the given details of the piers (e.g., the thickness of the concrete cover, the diameter of the piers, the arrangement of reinforcing steels, the water binder ratio of the concrete, the properties of materials, etc.), the corrosion initiation time of the reinforcing steels under different exposure conditions can be calculated (equations (2) and (3)). Subsequently, the time-dependent percentage mass loss of reinforcing steels is determined by using the pitting corrosion model (equations (4)-(10)) and the time-dependent uniform corrosion rate model (equations (11) and (12)). On this basis, the time-dependent properties of the reinforcing steels, concrete cover, and confined concrete in the corroded piers are determined (equations (13)-(18)). (2) Bridge Model Updated. The exposure conditions of each part of the piers should first be determined according to the layout of the bridge and the hydrological data (e.g., high water level, low water level, height of marine splash, etc.). Subsequently, the finite element model of the bridge in the pristine condition is developed. Moreover, the elements of the piers should be divided reasonably to ensure that each element of the piers is located in the same exposure condition. For a specified time, the degradation properties of the various materials are obtained from step (1) and are associated with each part of the piers. (3) Time-Dependent Seismic Fragility Analysis. Nonlinear time history and nonlinear static analyses are performed to obtain the seismic demand and seismic capacity of the components, respectively. By comparing the seismic demand and seismic capacity of the components, the seismic demand-capacity ratio can be determined, and the median estimate and standard deviation in the seismic fragility function are calculated by using regression fitting (equations (21) and (22)). By repeating step (2) and step (3), the time-dependent seismic fragility functions can be developed (equation (20)). Bridge Description. To investigate the seismic damage of aging bridges, a four-span continuous RC bridge is taken as the case study, as shown in Figure 3. The longitudinal reinforcement ratio is 1.67%. Moreover, rebars with a diameter of 12 mm and a yield stress of 335 MPa are used as circular stirrups with a spacing of 80 mm (corresponding to a volumetric ratio of 0.6%). The thickness of the concrete cover is 50 mm. Four rubber bearings are installed at the top of each bent, and four PTFE elastomeric bearings are located on the top of each abutment. In this study, two analysis cases are considered: (a) the bridge located in shallow water and (b) the bridge located in deep water. The high water level and low water level of the two analysis cases are presented in Figures 3(b) and 3(c). The height of marine splash is assumed to be 1 m. Finite Element Modelling. The finite element model is developed in OpenSees (Open System for Earthquake Engineering Simulation), the PEER Center's finite element platform [23]. Figure 3(a) shows the three-dimensional finite element model of the bridge. Overall, the girder is modelled by linear elastic beam-column elements, and nonlinear beam-column elements with fiber cross sections are used to simulate the piers. The elements of the piers are divided based on the different exposure conditions. The pier fibers use Concrete04 and Steel02 for the concrete and longitudinal reinforcement, respectively. Zero-length elements with the elastic and elastic PP materials are used to simulate the rubber bearings and the PTFE bearings, respectively. Shear keys are simulated in parallel with the hysteretic and elastic PP gap materials.
Furthermore, the interaction effects of the abutments and backfill soil are considered by using the hyperbolic gap material. The expansion joints at the deck ends are modelled through gap elements. In order to consider the effects of spatial chloride-induced corrosion, five groups of time-dependent finite element models of the bridge at different times after construction (i.e., pristine, 20, 40, 60, 80, and 100 years) are developed according to the above modelling approach. The Monte Carlo approach is used to fully consider the uncertainties in the development of the seismic fragility curves. Based on the finite element model, a modal analysis of the bridge in the pristine condition is performed to determine the fundamental periods. The result shows that the fundamental periods of the model in the longitudinal and transverse directions are 1.67 s and 1.35 s, respectively. Ground Motions. To fully consider the uncertainties of ground motions, a broad range of intensities should be included in a reasonable ground motion suite. In this respect, 100 ground motions are selected to perform the nonlinear time history analyses [24]. The selected ground motions include different source-to-site distances and magnitudes (Figure 4(a)): small magnitude and small epicentre distance (SMSR), small magnitude and large epicentre distance (SMLR), large magnitude and small epicentre distance (LMSR), large magnitude and large epicentre distance (LMLR), and near field (NF). Moreover, the spectral acceleration at the geometric mean of the periods with 5% damping, SA_GM, is chosen as the intensity measure in this study [25]. The linear acceleration spectra and the distribution of SA_GM of the 100 ground motions are presented in Figure 4(b). Corrosion Process of Reinforcing Steels. To consider the uncertainties during the corrosion process, 10000 samples are randomly generated using the Monte Carlo simulation method. Figure 5 illustrates the probability density of the corrosion initiation time of the transverse and longitudinal reinforcement. Overall, significant dispersions of the corrosion initiation time can be observed owing to the uncertainty of the chloride ion diffusion process and the outside environment. The corrosion initiation time of reinforcing steels can be well described by lognormal distributions. Given that the cover distance to the outside environment differs between the two, the transverse reinforcement presents a relatively smaller corrosion initiation time than the longitudinal reinforcement. Moreover, the reinforcing steels exposed in the tidal zone are the most likely to be corroded, followed by those in the splash zone, atmospheric zone, and submerged zone. Furthermore, it should be noted that the general thickness of concrete cover (i.e., 50 mm) is unlikely to effectively prevent the corrosion of reinforcing steels of bridges in a marine environment during the lifetime. Based on the corrosion initiation time, the time-dependent corrosion level of reinforcing steels can be determined. Figure 6 presents the distribution and mean value of the percentage mass loss of the corroded transverse and longitudinal reinforcement at different times. As expected, the corrosion level of reinforcing steels also exhibits significant dispersions. From the mean of the percentage mass loss, we can observe that the corrosion level of the transverse reinforcement is obviously larger than that of the longitudinal reinforcement due to its smaller diameter and shorter corrosion initiation time.
Similar to the corrosion initiation time, the corrosion of reinforcing steels in the tidal zone is more serious than that in the other exposure conditions. On the other hand, a nonlinear relationship between time and the mean percentage mass loss of reinforcing steels can be observed. In particular, the increase rate of the percentage mass loss is relatively small in the initial years. One reason is that most of the samples will not be corroded in the initial years, and another reason is that the influence of the corrosion depth on the pitting corrosion area is relatively slight when the corrosion level is low. Meanwhile, the percentage mass loss increases remarkably as time increases, and the increase rate keeps an approximately constant value. Moreover, the increase rate of the percentage mass loss of the transverse reinforcement decreases when the corrosion level exceeds a threshold value (about 50% percentage mass loss). This is because the sensitivity of the pitting corrosion area to the corrosion depth decreases when the corrosion level is high. Seismic Capacity of Piers. To investigate the influence of chloride-induced corrosion on the seismic capacity of piers, the properties of the degraded materials should be determined. Table 2 shows the time-dependent properties of the reinforcing steels and concrete. Based on the material properties and nonlinear static analysis, the time-dependent moment-curvature relationships of the cross sections of the piers are obtained, as shown in Figure 7. Referring to Figure 7, it is seen that the effects of corrosion on the initial stiffness of the cross section are relatively slight. Meanwhile, the moment capacities and ultimate curvatures of the cross section exhibit a remarkable degradation. This phenomenon is consistent with the findings from some experiments [26,27]. However, an increase in the ultimate curvature of the cross section after corrosion can be observed in some previous studies [28,29]. The main reason is that these studies ignore the degradation effect of the confined concrete on the seismic capacity of corroded piers. In fact, the corrosion of longitudinal reinforcement will decrease the compression area of the cross section of the piers. Had the reduction of the ultimate compression strain of the confined concrete not been considered, the ultimate curvature of corroded piers would have increased slightly rather than decreasing significantly. To further investigate the seismic capacity of corroded piers, the curvatures of the cross sections in the various exposure conditions at the four-level damage states (i.e., slight, moderate, extensive, and complete [30]) are shown in Figure 8. An obvious degradation of the curvature capacities at the four damage states can be observed. As the slight and moderate damage states are defined as the first yield of the longitudinal reinforcement and the fully formed plastic hinge of the section, respectively, the corresponding curvatures highly depend on the properties of the longitudinal reinforcement. In consequence, the curvature capacities of piers at these two damage states exhibit a similar variation trend to the percentage mass loss of the corroded longitudinal reinforcement. Meanwhile, the compressive strength and ultimate strain of the confined concrete play an important role in the curvature capacity of piers at the extensive and complete damage states. Therefore, the variation trend of the curvature capacity of piers at these two damage states is similar to that of the percentage mass loss of the corroded transverse reinforcement.
Moreover, the curvature capacity of the piers in the tidal zone is significantly lower than that in the other exposure conditions, as expected. In particular, the maximum reduction ratios of the curvature capacity in the various exposure conditions at the extensive damage state are 49%, 25%, 28%, and 32%, respectively. Fragility Analysis of Piers. According to the damage assessment procedure, the time-dependent seismic fragility functions of the bridge can be obtained. Figure 9 presents the time-dependent fragility surfaces of the three piers in the longitudinal direction when the bridge is located in shallow water. Due to the stiffness discrepancy of the piers, the fragility surfaces of the various piers exhibit significant differences. With the increase of service time, the fragility surfaces of each pier at the four damage states present upward trends. This is particularly evident in the fragility surfaces at the extensive and complete damage states. For example, for an SA value of 0.3 g, the probabilities of pier 1 exceeding the four damage states increase by 12%, 20%, 73%, and 195%, respectively. This indicates that corrosion aggravates the seismic damage of the piers. Moreover, the different degrees of variation reduce the gap between the fragility surfaces at the different damage states. This reveals that the ductility level of the piers will significantly decrease, making it much easier for the damage state of corroded piers to progress from a low level to a high level during earthquakes. On the other hand, it can be observed from Figure 9 that the damage probabilities of the piers at the various damage states demonstrate nonlinear variation trends with the increase of time. Specifically, the damage probabilities of the piers at 20 years are close to those at the pristine condition because the degradation of the piers mainly occurs 30 to 40 years after construction. Moreover, the damage probabilities of the piers at the slight and moderate damage states steadily increase after 40 years. Meanwhile, the increase rates of the damage probabilities at the extensive and complete damage states reduce after 80 years. Overall, this behaviour is similar to the variation trends of the curvature capacities (see Figure 8). It can be inferred that the seismic capacities of the piers strongly affect their seismic fragility. In order to investigate the effects of spatial chloride-induced corrosion on the damage distribution of the piers, Figure 10 shows the seismic fragility contour maps of pier 1 at the pristine condition and 100 years after construction. Owing to the inertia force distributions of the piers during earthquakes, the damage probability distribution of the piers in the longitudinal direction at the pristine condition follows an approximately linear triangular pattern, whereas the damage in the transverse direction concentrates at the two ends of the piers and decreases from the two ends towards the middle of the piers. However, a jagged damage probability distribution of the piers can be observed in some cases after spatial chloride-induced corrosion. In particular, the sections at the low and/or high water levels may become more vulnerable than the adjacent sections in both directions. The reason is that the seismic capacities of the sections exposed in the tidal zone, between the low water level and the high water level, exhibit more significant degradation. Therefore, the spatial chloride-induced corrosion not only increases the seismic damage probability of the piers but also may alter their damage probability distribution.
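The fragility surfaces discussed above are, in essence, exceedance probabilities conditioned on the intensity measure and the service time. The fitted parameters of the study are not reproduced in this extract, so the sketch below only illustrates the common lognormal fragility form P(D >= ds | SA) = Phi((ln SA - ln theta)/beta), with hypothetical medians (theta) and dispersions (beta) for two points in time.

# Illustrative sketch (Python): lognormal fragility curves at two service times.
# The medians (theta, in g) and dispersions (beta) are hypothetical placeholders.
import numpy as np
from scipy.stats import norm

def fragility(sa, theta, beta):
    # P(damage state reached or exceeded | SA): lognormal CDF in SA.
    return norm.cdf((np.log(sa) - np.log(theta)) / beta)

sa_grid = np.array([0.1, 0.3, 0.6, 1.0])             # intensity measure values (g)
cases = {
    "pristine, extensive damage":  (0.80, 0.55),     # (median theta in g, dispersion beta)
    "100 years, extensive damage": (0.55, 0.60),
}
for label, (theta, beta) in cases.items():
    probs = fragility(sa_grid, theta, beta)
    formatted = ", ".join(f"{s:.1f} g -> {p:.2f}" for s, p in zip(sa_grid, probs))
    print(f"{label}: {formatted}")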
To clearly illustrate the damage distribution of the various piers, Figure 11 presents the distribution of the median SA (corresponding to a 50% damage probability) for exceeding the moderate damage state of each pier at the pristine condition and 100 years after construction. A visible alteration of the damage distribution in both directions can be observed for the pier located in shallow water. Meanwhile, the damage distribution of the pier in the transverse direction is also altered when it is located in deep water. Moreover, a shift of the inflection points in the piers can be found after the alteration of the damage distribution. It should be noted that the probability of the section at the low water level exceeding the moderate damage state may surpass that of the sections at the two ends of the piers after corrosion. It can be inferred that plastic hinges will form earlier at the section at the low water level than at the two end sections. In other words, the spatial chloride-induced corrosion may alter the yield sequence of the piers during earthquakes. On the other hand, the alteration of the damage distribution is the most significant in pier 1, followed by pier 3 and pier 2, when the bridge is located in shallow water. Meanwhile, an opposite trend can be found for the bridge located in deep water. This indicates that the height of the piers and the water level jointly affect the damage distribution of the piers. A further discussion in this regard is presented in Section 7. Fragility Analysis of Other Components. The preceding analysis mainly focused on the effects of corrosion on the seismic damage of the piers. In fact, the degradation of the piers will affect the dynamic characteristics and seismic response of the whole bridge. To fully investigate the seismic damage of the aging bridge, the seismic fragility functions of the shear keys, rubber bearings, and PTFE sliding bearings are developed. Table 3 shows the seismic capacities of these components at the different damage states. Figure 12 shows the time-dependent fragility curves of the components at the moderate and complete damage states when the bridge is located in shallow water. Due to the stiffness discrepancy between the various piers and the abutments, the damage of the components at the abutments is always more serious than that of the components at the piers. Moreover, the components at pier 1 are the most vulnerable, followed by the components at pier 3 and pier 2. On the other hand, a decrease in the damage probability of the various components with increasing time can be observed from the figure, which is opposite to the trend for the piers. When SA = 0.2 g, the probabilities of the shear keys at the abutment, the PTFE sliding bearings at the abutment, and the rubber bearings at pier 1 exceeding moderate damage reduce by 24.1%, 35.1%, and 29.0%, respectively. The main reason is that the corrosion of the piers decreases the inertia forces transmitted to these components, whose degradation is relatively slighter than that of the piers. Additionally, Table 4 lists the median SAs of some critical components. By comparison with the seismic fragility of the piers in Figure 11(a), we can observe that the PTFE sliding bearing at the abutment is the most vulnerable component at the slight damage state at the pristine condition. Meanwhile, pier 1 tends to dominate the vulnerability of the bridge at 100 years after construction. This indicates that the opposite degradation trends between the piers and the other components may change the most vulnerable component in the whole bridge. Figure 13 presents the fragility curves of the various components at the extensive damage state.
The difference between the fragility curves in deep water and shallow water indicates that the water level affects the damage probability of the other components. For the components at the abutment and pier 1, the damage probability in shallow water is relatively smaller than that in deep water. On the contrary, the damage probability of the components at pier 3 in shallow water is relatively larger than that in deep water. Combined with the previous analysis, it seems that the alteration of the damage distribution of the piers aggravates the variation of the damage probability of the other components. Evolution Probability of Yield Sequence of Corroded Piers. Modern RC piers are generally designed to dissipate energy during strong earthquakes by permitting the controlled formation of plastic hinges. In this respect, a detailed design of the plastic hinges is necessary to ensure that the piers provide sufficient energy dissipation. Meanwhile, the displacement ductility capacity of the piers should be determined according to the distribution of the plastic hinges. Therefore, the plastic hinges of the piers should be predetermined during the ductile seismic design. In general, the plastic hinges are expected to form at the bottom and/or top of the piers. However, as mentioned, the spatial chloride-induced corrosion may alter the yield sequence of corroded piers. In this case, it is important to determine the yield sequence of the corroded piers during the ductile seismic design. In this section, we propose a method to determine the evolution probability of the yield sequence of piers. Based on the results in Section 6, the plastic hinges of corroded piers may appear at four sections: (1) the section at the bottom of the pier, (2) the section at the top of the pier, (3) the section at the low water level, and (4) the section at the high water level. In theory, the yield sequence of the piers can be determined by comparing the curvature demand during earthquakes and the yield curvature capacity (the curvature capacity corresponding to the moderate damage state), as shown in Figure 14. Theoretically, the double-column pier can be regarded as a cantilever column in the longitudinal direction. Therefore, the distribution of the longitudinal curvature demand of the piers before yielding can be simplified to a linear relationship (Figure 14(a)). On the contrary, the framing effects between the columns and the bents result in the distribution of the curvature demand of the piers in the transverse direction presenting double-triangle curves before the pier yields (Figure 14(b)) [31]. On the other hand, the yield curvature capacity of the piers presents a stepped distribution due to the nonuniform degradation. By comparing the slope of the curvature demand distribution and the curvature capacity distribution, the evolution probability of the yield sequence of the piers can be described by equation (23), where α is the ratio between the distance from the inflection points to the bottom of the piers and the height of the piers, which can be assumed to be 1 and 0.5 in the longitudinal and transverse directions, respectively; λs is the ratio between the depth of the submerged zone and the height of the piers; λt is the ratio between the depth of the tidal zone and the height of the piers; H is the height of the pier; and φ1, φ2, and φ3 are the yield curvatures of section 1 (bottom section), section 2 (top section), and section 3 (section at the low or high water level), respectively. Thus, equation (23) can be rewritten as equation (26), where α31 is the yield curvature ratio between sections 1 and 3, and α32 is the yield curvature ratio between sections 2 and 3.
Subsequently, the Monte Carlo simulation method is applied to obtain the evolution probability of the yield sequence based on equation (26). Figure 15(a) shows the influence of λs on the evolution probability of the yield sequence of corroded piers in the longitudinal direction. From the figure, it can be seen that the evolution probability of the yield sequence rapidly decreases with the increase in λs. For the case-study bridge, the evolution probabilities of pier 1 in shallow water and deep water are 66.4% and 0.1%, respectively. In addition, the evolution probabilities of pier 1 and pier 2 in shallow water are 66.4% and 0.3%, respectively. This indicates that the yield sequence of corroded piers in the longitudinal direction is more likely to evolve in shallow water than in deep water. Moreover, the yield sequence of a short pier is more likely to evolve than that of a tall pier in the same bridge. On the other hand, Figure 15(b) presents the influences of λs and λt on the evolution probability of the yield sequence of corroded piers in the transverse direction. For a constant λt, the evolution probability first decreases and then increases with the growth of λs. For example, the evolution probability of the piers reduces from 49.0% to 9.9% for λs in the range from 0.1 to 0.3 when λt is equal to 0.3. Next, the evolution probability increases to 33.2% as λs reaches 0.8. It can be inferred that the evolution of the yield sequence in the transverse direction can readily occur in both shallow water and deep water. Moreover, a higher λt is more likely to cause the evolution of the yield sequence of corroded piers in the transverse direction. Conclusion. This study assessed the seismic damage of aging RC bridges subject to spatial chloride-induced corrosion in marine environments. Moreover, a method is proposed to determine the evolution probability of the yield sequence of corroded piers, and the influencing factors are further investigated. Generally, the following conclusions can be obtained: (1) The corrosion level of the reinforcing steels in the tidal zone is the most serious, followed by the splash zone, atmospheric zone, and submerged zone. Moreover, the transverse reinforcement experiences more remarkable corrosion than the longitudinal reinforcement. The chloride-induced corrosion will significantly decrease the moment capacity and curvature ductility of the piers. Meanwhile, the influence of corrosion on the initial stiffness of the piers is relatively slight. (2) The seismic damage probability presents a nonlinearly increasing trend over time when the piers suffer spatial chloride-induced corrosion. Moreover, a corroded pier can easily progress from low damage states to high damage states during earthquakes because of its poor ductility level. Furthermore, the nonuniform degradation along the pier may result in the sections at the low water level and/or high water level becoming more vulnerable than the adjacent sections, entailing an alteration of the damage distribution of corroded piers or even of their yield sequence in some cases. (3) The spatial chloride-induced corrosion of the piers will decrease the seismic response of the other components, resulting in a reduction of the seismic damage probability of the various components. Moreover, the alteration of the seismic damage distribution of the piers will aggravate the variation of the damage probability of the components.
It should be noted that the opposite degradation trends between the piers and the other components may change the most vulnerable component in the whole bridge. (4) The evolution of the yield sequence of corroded piers depends on the relationship among the height of the piers and the depths of the submerged zone and tidal zone. A lower ratio between the depth of the submerged zone and the height of the piers will increase the evolution probabilities of the yield sequence of corroded piers in the longitudinal and transverse directions. Moreover, the yield sequence of corroded piers in the transverse direction is also more likely to evolve at a higher ratio between the depth of the submerged zone and the height of the piers. Meanwhile, a lower ratio between the depth of the tidal zone and the height of the piers will mitigate the evolution of the yield sequence of the piers. Data Availability. The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest. The authors declare that they have no conflicts of interest.
v3-fos-license
2022-06-25T15:11:45.490Z
2022-06-23T00:00:00.000
250009723
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2022.931231/pdf", "pdf_hash": "27399e11b18176c8050ec5c9719f40547687fa14", "pdf_src": "Frontier", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:293", "s2fieldsofstudy": [ "Medicine" ], "sha1": "3a4a1ca891616f5e61fd9869e4744ea3774f3929", "year": 2022 }
pes2o/s2orc
Changes in Vertebral Marrow Fat Fraction Using 3D Fat Analysis & Calculation Technique Imaging Sequence in Aromatase Inhibitor-Treated Breast Cancer Women Aromatase inhibitor (AI) is a cornerstone drug for postmenopausal women with estrogen receptor-positive early-stage breast cancer. Fat-bone interactions within the bone marrow milieu are growing areas of scientific interest. Although AI treatment could lead to deterioration of the skeleton, the association between AI medication and subsequent marrow adiposity remains elusive. A total of 40 postmenopausal, early-staged, and hormone receptor-positive breast cancer patients who underwent treatment with adjuvant AIs and 40 matched controls were included. Marrow proton density fat fraction (PDFF) at the L1−L4 vertebral bodies using 3D Fat Analysis & Calculation Technique imaging (FACT) sequence at 3.0T, bone mineral density (BMD) by dual-energy X-ray absorptiometry, and serum bone turnover biomarkers were determined at baseline and at 6 and 12 months. We found that, in comparison to baseline, an increase of type I collagen cross-linked telopeptide was detected at 12 months (P <0.05). From baseline to 12 months, the PDFF measured using FACT was greatly increased. At 12 months, the median percent change of PDFF (4.9% vs. 0.9%, P <0.05) was significantly different between the AI treatments and controls. The same trend was observed for the marrow PDFF at 6 months relative to the respective values at baseline. Although BMD values were significantly reduced after 12 months in AI-treated women, changes in BMD vs. baseline condition were not significantly different between the AI-treated and control groups [Δ BMD −1.6% to −1.8% vs. −0.3% to −0.6%, respectively, P > 0.05]. In the AI-treated group, Δ PDFF was associated with Δ BMD at the lumbar spine (r = −0.585, P < 0.001), but not in the controls. Taken together, over a 12-month period, spinal marrow fat content assessed with FACT sequence significantly increased in postmenopausal women with hormone-receptor-positive breast cancer receiving AI treatment. INTRODUCTION Aromatase inhibitors (AIs) are widely recommended for use by postmenopausal women who have estrogen receptor-positive early-stage breast cancer. Treatment with AIs provides benefits to breast cancer patients in terms of improved disease-free survival and overall survival (1). However, AI-induced deterioration of bone loss and its management with bisphosphonates is still unclear. In addition, the optimal duration of AI therapy for early breast cancer remains elusive. Extended use of adjuvant endocrine therapy and persistent deterioration of the skeleton from recent findings emphasized the need to assess bone loss and fracture risk in women with hormone-receptor-positive, early-stage breast cancer initiated on AIs (1,2). Bone mineral density (BMD) evaluation by dualenergy X-ray absorptiometry (DXA) is actually limited. Accuracy of DXA measurements is influenced by degenerative changes in the spine or aortic mineralization and by the variable proportion of fat in overlying soft tissue since it uses a two-dimensional projectional measurement (3). The use of bone quality assessment by means of a based-DXA trabecular bone score may contribute to identifying those with a higher risk of fracture independent of bone density (4,5). 
The use of other imaging techniques, such as high resolution peripheral quantitative computed tomography by capturing more and different information on the properties of bone microstructure, have potential implications for clinical practice in the future (6). Adipocytes in the bone marrow are highly plastic, and have a distinctive characteristic to secrete an extensive number of cytokines and adipokines such as resistin, leptin, and C-C Motif Chemokine Ligand 2 (CCL2) that exert local and endocrine functions. Additionally, bone marrow adipose tissue has been proposed to have mixed brown and white fat characteristics (7,8). Both animal and human data supported a clinical association between marrow adipose tissue content and integrity of skeleton (9,10). The proton density fat fraction (PDFF) as a biomarker for osteopenia and osteoporosis enables discrimination of low bone mass from healthy controls (9,11). Accumulating evidence also highlights the importance of interactions between marrow adipocytes and tumor cells (12,13). Although a previous study reported that AI-treated patients maintained vertebral marrow PDFF values with a relatively small sample size, prospective changes of marrow fat content in postmenopausal women with breast cancer at completion of AI treatment remain poorly understood. Therefore, the current study was designed to evaluate the prospective changes in spinal marrow fat content and bone mass in postmenopausal women with early-staged breast cancer after completing AI treatment using chemical shift encoding-based water-fat magnetic resonance imaging (MRI) at 3.0T. Participants This study was performed in accordance with the ethical standards described in the 1964 Declaration of Helsinki and its later amendments. This study was approved by the Institutional Review Board of China-Japan Union Hospital of Jilin University, and all participants provided informed consent. In this prospective, observational study, we recruited 40 postmenopausal women (age, 51.7-73years) with hormone-receptor-positive early-staged breast cancer (including carcinoma in situ and stage I−II breast cancers) who were scheduled to receive treatment with adjuvant AIs (i.e., letrozole, anastrozole, and exemestane) between May 2018 and January 2022. Participants were excluded if they had (1): history of lumbar spinal surgery, known or suspected bone metastases, irradiation and/or chemotherapy, other malignancies, distant metastasis, chronic diseases such as rheumatoid arthritis, diabetes mellitus, liver and kidney dysfunction, severe cardiac, hematological, psycho, and nervous system diseases; 2) use of medications known to interfere with fat/bone metabolism such as glucocorticoids, bisphosphonates, denosumab, teriparatide, strontium ranelate, anticoagulants, anticonvulsants, alcohol abuse; (3) bone mineral density or other missing data. A healthy control group (n = 40; age, 51.5-74years) of age-matched postmenopausal women was also recruited from the community. At enrollment, all participants completed self-administered questionnaires about demographics, medical history, general risk factors, family history of breast cancer as well as lifestyle factors (e.g., alcohol consumption, current tobacco smoking, and physical activity). Physical activity was assessed with the International Physical Activity Questionnaire short form, with data reported as Metabolic Equivalent of Task hours per week (14). According to standard procedures, body weight and height were measured at baseline. 
Body mass index (BMI) was calculated as the weight in kilograms divided by the square of the height in meters. All participants were scheduled to undergo chemical shift encoding-based water-fat MRI, DXA, and serum bone turnover marker analysis at baseline condition, and at 6 and 12 months after receiving endocrine therapy. The study flow chart is presented in Figure 1. Biochemical Evaluation Fasting blood samples were collected after overnight fasting and between 7 a.m. and 9 a.m. on the DXA evaluation day. Biochemical evaluation included 25-hydroxyvitamin D, type I collagen cross-linked telopeptide (CTX-I), N-terminal propeptide of type 1 procollagen (P1NP) and osteocalcin. 25hydroxyvitamin D was measured by immunoassay. Serum CTX-I, P1NP, and osteocalcin were measured by chemiluminescence (ECLIA) in the analyzer Tesmi-F3999 (Tellgen Super Multiplex Immunoassay System, Shanghai, China). MRI Acquisition and Analyses MRI of the lumbar spine was performed on a 3.0 T full-body MRI unit (uMR 780, United Imaging Healthcare, Shanghai, China) to quantify marrow proton density fat fraction (PDFF) at the L1-4 vertebral bodies. Subjects were positioned head-first in the magnet bore in a prone position. Standard clinical MRI protocols, including T1-weighted imaging and T2-weighted imaging (sagittal acquisition), were performed with a built-in 12-channel posterior coil. For chemical shift encoding-based water-fat separation at the level of the lumbar spine, a sagittal prescribed 3D Fat Analysis & Calculation Technique (FACT) sequence allowing PDFF quantification, was then acquired with the following parameters: TR= 7.2 ms; six echoes with TE1/DTE = 1.21/1.1 ms; flip angle, 3°( low spin flip angle excitation to minimize T1 saturation) (11,15); slice thickness, 3 mm; interslice gap, 0 mm; acquisition matrix size, 256 × 192; field of view, 380 × 380 mm; 1 average; scan time, 17 seconds. FACT sequence images were transferred to a commercially available workstation (uWS-MR Advanced Postprocess Workstation, United Imaging Healthcare, Shanghai, China). One musculoskeletal radiologist with 5 years' experience quantitatively analyzed PDFF mappings obtained with FACT sequence ( Figure 2). The coefficient of variation was 3.07% for the repeatability of PDFF measurement. Evaluation of BMD Areal BMD values at the lumbar spine (L1-L4), femoral neck, and total hip were assessed using dual energy X-ray absorptiometry (DXA, Hologic Discovery). DXA scans were performed by a certified operator. Precision coefficients were 1.17% for the femoral neck, 1.09% for the total hip, and 1.29% for the lumbar spine. Both MRI and DXA examinations were performed on the same day. Statistical Analysis The sample size calculation was performed using G*Power software v3.1, taking into consideration the effect of aromatase inhibitor on fat fraction percentage (16). The effect size of 0.60 showed that with a significance level of 95% and statistical power of 80% (power 1−b = 0.80), the minimum number of participants required was 24. Data are presented as mean ± standard deviation (SD), median (interquartile range, IQR) or n (%) as appropriate. Normality was evaluated by the Shapiro-Wilk test. Student's t-test or Mann-Whitney test was performed to compare quantitative variables and Fisher's exact or chi-square test for qualitative analyses between groups. The marrow MRI PDFF, DXA BMD, and serum levels of bone turnover biomarkers at baseline and at 6 and 12 months were assessed using the paired t test or Wilcoxon rank-sum test. 
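The a priori sample-size computation reported above (effect size 0.60, significance level 0.05, power 0.80, minimum n = 24) can be reproduced outside G*Power. The text does not state which test family was entered, so the sketch below assumes a two-sided one-sample/paired t-test, which yields approximately 24 participants with these inputs.

# Illustrative sketch (Python): reproducing the a priori sample-size calculation.
# The choice of a one-sample/paired t-test is an assumption, not stated in the text.
import math
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.60, alpha=0.05, power=0.80,
                             alternative="two-sided")
print(f"required sample size: {n:.1f} (rounded up: {math.ceil(n)})")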
Statistical analyses were performed using SPSS software version 20.0 for Windows (SPSS Inc, Chicago, IL, USA). All statistical tests were two sided, and significance was set at P <0.05. Baseline Characteristics of Study Population A total of 34 postmenopausal women with early breast cancer receiving AI treatment and 35 healthy controls completed the study. As shown in Figure 1, 11 participants were excluded from the final analysis: two participants because of initial bisphosphonate therapy while being treated by AIs, two with renal dysfunction and thyroid disease, six with discontinued intervention or lost to follow-up, and the other one because of image artifacts. Over a 12-month period, none of the patients reported any new skeletal-related events. The demographic and clinical characteristics of all participants are shown in Table 1. At baseline, no significant differences except for marrow PDFF were observed between the breast cancer women treated with AIs and control groups. Breast cancer patients had higher marrow PDFF than that of the controls. Changes in Marrow PDFF and BMD The spinal marrow PDFF, BMD values at the femoral neck, total hip, and lumbar spine from the hormone-receptor-positive early breast cancer patients receiving AI treatment and healthy controls at baseline condition and at 6 and 12 months are shown in Figure 3. For the AIs and control groups, changes in marrow PDFF and BMD are shown in Table 2. Marrow PDFF at the 6-month follow-up visit (60.8 ± 5.5%) increased significantly compared to PDFF at the initial visit (59.0 ± 6.3%, P < 0.001) in the breast cancer patients receiving AIs, but not in the controls (53.7 ± 5.3% vs 53.4 ± 5.9%, P >0.05). Relative to the respective values at baseline, the marrow PDFF value at 6 and 12 months markedly increased by a median of 3.1% and 4.9% (all P <0.001) in the AIs group, respectively, but not in the controls (0.6% and 0.9%, all P >0.05, respectively), In the breast cancer patients receiving AIs, femoral neck BMD (0.863 ± 0.009 g/cm 2 vs. 0.877 ± 0.007 g/cm 2 ), total hip BMD (0.945 ± 0.009 g/cm 2 vs. 0.961 ± 0.011g/cm 2 ), and lumbar spine BMD (1.033 ± 0.014 g/cm 2 vs. 1.052 ± 0.012 g/cm 2 , all P <0.05) were decreased at the 12month follow-up visit compared to the initial visit. In contrast, no significant difference was found in the DXA BMD values at the femoral neck, total hip, and lumbar spine, with a median of -0.7%, −0.8%, and −1.0% (all P >0.05), respectively, between baseline condition and at 6 months. Changes in Serum Biomarkers At baseline condition, serum biomarkers including 25(OH)D, CTX-I, P1NP, and osteocalcin levels were not significantly different in the breast cancer patients treated with AIs compared with the controls ( Table 1). Similar results were observed at 6 months. CTX-I level was significantly increased after 12 months in comparison to baseline values in the AItreated group, and significant differences were found between the AIs and control groups at 12 months ( Table 2). No significant differences in the 25(OH)D, P1NP, and osteocalcin levels were observed at different timepoints. Data are presented as mean ± SD, median (IQR) or n (%) as appropriate. AIs, aromatase inhibitors; BMD bone mineral density; BMI, body mass index; CTX-I, type I collagen cross-linked telopeptide; IQR, interquartile range Q1-Q3; P1NP, N-terminal propeptide of type 1 procollagen; PDFF, proton density fat fraction; SD, standard deviation. a P < 0.05 by independent-sample t-test. 
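The within-group comparisons of PDFF and BMD summarised above (paired t-test when the changes are approximately normal, Wilcoxon signed-rank test otherwise) follow a standard workflow; the sketch below uses SciPy with hypothetical PDFF values rather than the study data.

# Illustrative sketch (Python): normality check, then paired t-test or Wilcoxon test.
# The PDFF values below are hypothetical placeholders, not the study measurements.
import numpy as np
from scipy import stats

pdff_baseline = np.array([58.1, 60.3, 55.7, 62.0, 59.4, 61.2, 57.8, 60.9])
pdff_12months = np.array([61.0, 63.2, 58.1, 65.5, 62.0, 63.9, 60.2, 63.1])
diff = pdff_12months - pdff_baseline

_, p_normal = stats.shapiro(diff)            # Shapiro-Wilk on the paired differences
if p_normal > 0.05:
    stat, p = stats.ttest_rel(pdff_12months, pdff_baseline)
    test = "paired t-test"
else:
    stat, p = stats.wilcoxon(pdff_12months, pdff_baseline)
    test = "Wilcoxon signed-rank test"
print(f"{test}: statistic = {stat:.3f}, p = {p:.4f}")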
Relationships Among Marrow PDFF, BMD, and Serum Biomarkers In the breast cancer patients receiving AIs group, a significantly negative relationship was found between change of marrow PDFF and change of lumbar spine BMD values (r = −0.585, P < 0.001) at 12 months relative to the respective values at baseline, but not in the controls group. Spinal marrow PDFF variation over time was not significantly related with changes of BMD at the femoral neck and total hip in both the AI-treated breast cancer patients and healthy controls. In the AIs group, D bone turnover biomarkers at 6 months and 12 months versus baseline condition was not associated with D spinal marrow PDFF or D BMD at the femoral neck, total hip, and lumbar spine. DISCUSSION In this prospective observational study, we performed DXA scans, MR FACT sequence, and serological tests to clarify changes in spinal marrow PDFF, BMD at the femoral neck, total hip and lumbar spine, and bone turnover biomarker levels in postmenopausal women with estrogen receptor-positive earlystage breast cancer receiving AIs. We found that vertebral marrow PDFF was significantly increased at 6 and 12 months post-AI treatment onset. We also showed that BMD values at the total hip, femoral neck, and lumbar spine were decreased at the 12-month follow-up visit compared to the initial visit. Changes in marrow PDFF and D lumbar spine BMD values were negatively associated in the AIs group. Bone marrow adipose tissue is now recognized as an endocrine organ. Accumulating evidence indicates that bone marrow fat plays a complex role in bone health, energy metabolism, and hematological diseases like aplastic anemia, multiple myeloma, and leukemia (9,17). A previous study demonstrated that breast cancer patients had higher marrow fat content in comparison with the age-matched controls. Expansion of marrow fat may be an independent risk factor for postmenopausal breast cancer and clinicopathological characteristics of breast cancer (14). In this present work, as compared with the healthy controls, the hormone-receptorpositive early breast cancer patients receiving AIs showed fat expansion within the bone marrow. The level of serum b-CTX is used as the reference marker for bone resorption, and P1NP can be measured as one of bone formation biomarkers. During bone formation as well as bone resorption, osteocalcin can be released into the circulation. Several studies indicated that P1NP and b-CTX are the most efficient biomarkers to predict the BMD changes (18). As expected, serum b-CTX markedly elevated at 12 months after AI treatment. Similar to our results, Catalano et al. found that b-CTX levels increased significantly after 9 and 18 months in comparison to baseline values in the AI-treated group (19,20). In contrast to ours and other studies (19), no significant change was found in serum b-CTX from baseline condition to 12 months in postmenopausal women with early breast cancer at lower and moderate risk of fragility fracture who received AIs (21). AIs are in widespread use for hormone-receptor-positive breast cancer patients. Several clinical trials have reported AIrelated bone loss and fracture risk in both premenopausal and postmenopausal women (4 19, 20, 22). In clinical practice, BMD was used to assess bone strength and risk of fracture. 
However, in some pathologic conditions such as diabetes mellitus patients, there is an apparent contradiction of elevated bone mass associated with a higher fracture (5), which may be due to poor bone quality assessment with BMD measurement. Seeking imaging methods other than BMD to evaluate bone strength and risk of fracture is of important implication, such as marrow fat fraction, an indirect measure of bone quality (23,24). The use of chemical shift-encoded MRI or magnetic resonance spectroscopy and bone quality by means of PDFF could additionally help to identify those with bone deterioration or higher risk of fracture independent of BMD (11,24). Bone marrow fat tissue composition and quantification may play an important role in bone pathophysiology, but has not been thoroughly studied in AI users. A recent study with a relatively small sample size (n = 8) done by Dieckmeyer et al. (16) showed that over a 12-month period, vertebral bone marrow PDFF was increased by 3.1% in subjects receiving AIs, however it was not significant (P = 0.52). Additionally, there was no significant association between PDFF and BMD for the AI treatment group at baseline or follow-up. In our current study with a large sample size and including a group of age-matched healthy controls, we observed that over a 12-month period spinal marrow PDFF was significantly increased in postmenopausal women treated with AIs. Ex vivo, estradiol may induce osteogenesis and suppress adipogenesis differentiation of bone marrow mesenchymal stromal cells (25). In vivo, estradiol deficiency leads to the increase in bone marrow adipocyte size and number, particularly in postmenopausal osteoporotic women (26). Since treatments with AIs decrease already low endogenous postmenopausal estradiol levels, we found that the PDFF at the lumbar spine was increased by a median of 3.1% at 6 months and 4.9% at 12 months (all P < 0.05), respectively. Change of marrow PDFF was associated with change of lumbar spine BMD values at 12 months relative to the respective values at baseline. Thus, marrow PDFF assessed with FACT sequence may be used as a useful early response indicator. We acknowledge that our study has some limitations. First, the sample size was relatively small, which did not allow to analyze the effects of different AIs (i.e., letrozole, anastrozole, and exemestane) on marrow fat content. This was a singlecenter study which limits the generalizability of our results. Second, many of the AI-treated breast cancer patients are postmenopausal women who not infrequently have history of multidrug use. Possible interactions between different drugs may affect the bone-fat metabolism that could not be specifically excluded. Third, although we examined both the marrow fat content and BMD, we did not explore their relationships with risk of fractures. Finally, the observation period of AI treatment is typically 5 -10 years (1), evaluating longitudinal effects over a longer period of time may help to further elucidate the longerterm effects of AIs on vertebral marrow PDFF. In conclusion, over a 12-month period, spinal marrow proton density fat fraction as measured by FACT sequence significantly increased in postmenopausal women with early breast cancer receiving AI treatment. Our results demonstrated that healthcare professionals for postmenopausal women who received AIs must pay attention to marrow fat content measurements during and after hormone-receptor-positive early breast cancer treatment. 
DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary materials. Further inquiries can be directed to the corresponding author. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Institutional Review Board of China-Japan Union Hospital of Jilin University. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS Study design: TW, LL; Study conduct: YZ; Data collection: QH; Data analysis: TW, YZ; Data interpretation: TW, QH; Drafting manuscript: TW, YZ, QH, LL. All authors contributed to the article and approved the submitted version.
v3-fos-license
2020-10-29T09:06:15.849Z
2020-10-26T00:00:00.000
229002781
{ "extfieldsofstudy": [ "Environmental Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2071-1050/12/21/8875/pdf", "pdf_hash": "4b12ec94e1a91ad7fb09c102b70c73ba6537be04", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:295", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "9611d8c63c6fd002a7a2c34227cc0c74aeace4d3", "year": 2020 }
pes2o/s2orc
Understanding N 2 O Emissions in African Ecosystems: Assessments from a Semi-Arid Savanna Grassland in Senegal and Sub-Tropical Agricultural Fields in Kenya : This study is based on the analysis of field-measured nitrous oxide (N 2 O) emissions from a Sahelian semi-arid grassland site in Senegal (Dahra), tropical humid agricultural plots in Kenya (Mbita region) and simulations using a 1D model designed for semi arid ecosystems in Dahra. This study aims at improving present knowledge and inventories of N 2 O emissions from the African continent. N 2 O emissions were larger at the agricultural sites in the Mbita region (range: 0.0 ± 0.0 to 42.1 ± 10.7 ngN m − 2 s − 1 ) than at the Dahra site (range: 0.3 ± 0 to 7.4 ± 6.5 ngN m − 2 s − 1 ). Soil water and nitrate (NO 3 − ) contents appeared to be the most important drivers of N 2 O emissions in Dahra at the seasonal scale in both regions. The seasonal pattern of modelled N 2 O emissions is well represented, though the model performed better during the rainy season than between the rainy and dry seasons. This study highlighted that the water-filled pore space threshold recognised as a trigger for N 2 O emissions should be reconsidered for semi-arid ecosystems. Based on both measurements and simulated results, an annual N 2 O budget was estimated for African savanna / grassland and agricultural land ranging between 0.17–0.26 and 1.15–1.20 TgN per year, respectively. Introduction Nitrous oxide (N 2 O) is a powerful and long-lived greenhouse gas (GHG) with a high global warming potential [1,2], and it contributes to stratospheric ozone (O 3 ) depletion [3]. Atmospheric N 2 O concentrations have increased since around 1960 mainly due to intensive use of synthetic nitrogen (N) fertilisers, thus leading to enhanced N 2 O emissions from soils [4,5]. The formation of N 2 O in soils is due to multiple biological and physical-chemical processes such as nitrification, denitrification, nitrifier-denitrification, chemo-denitrification, chemical decomposition of hydroxylamine and co-denitrification with nitric oxide (NO) [2]. Nitrification and denitrification are considered the major processes of N 2 O production in soils [6,7]. Nitrification occurs in aerobic conditions and leads to the oxidation of ammonium (NH 4 + ) into nitrate (NO 3 − ), with N 2 O as a by-product of this reaction, while denitrification is an anaerobic process that reduces NO 3 − and can lead to N 2 O production in function of the environmental conditions, with N 2 being the final product if denitrification is complete. The most important factors that modulate N 2 O production magnitude in soils are soil water content, NH 4 + , NO 3 − , organic matter, oxygen availability, temperature and pH [8][9][10]. These key drivers are influenced by anthropogenic actions such as agricultural management [11], e.g., crop species, tillage [12], quantity and form of N input [13], soil compaction [14] and irrigation [15]. Climate characteristics, meteorological variability (temperature, rainfall, drought) and atmospheric N deposition modulate the intensity at which the key drivers affect N 2 O production and associated N 2 O emissions. The study of N 2 O emission processes and key drivers has primarily been focused on temperate areas. In contrast, N 2 O emissions in Sub-Saharan Africa (SSA) remain relatively poorly understood, with only a limited number of studies, the need for further investigations are needed as this region has considerable impact on the global GHG budget [11,[16][17][18]. 
Restrictions leading to the scarcity of in-field observations are partly related to the difficulty of implementing measurement field campaigns in remote locations with little infrastructure. This is particularly the case for the savanna ecosystem, which represents more than 40% of the total area in Africa [19,20]. Soil water content of savannas in semi-arid climates is considered to be too low to trigger denitrification and N 2 O emissions. However, Zaady et al. (2013) suggested that denitrification can occur at lower water content in dry ecosystems, where microbial activity can be very strong following rainfall events and large enough to deplete O 2 concentrations in soil and allow denitrification activity to increase [21]. Their study also showed that the potential of denitrification increases when a site's average annual rainfall decreases, indicating that denitrification can be an important component even in arid areas with low Water-Filled Pore Space (WFPS). This feature could be a derivative of the 'Birch' effect corresponding to a sudden pulse-like event of rapidly increasing N 2 O emissions from soils under seasonally dry climates in response to rewetting after a long dry season [22]. This N 2 O pulse is induced by a quick mineralisation of C and N of dead organic matter (microbes, animals, plants) that has accumulated during the dry season after rewetting and results in a pulse in microbial activity, causing emissions, exceeding the ones from a permanently moist soil [23], of mineralised N available for nitrifiers and denitrifiers. Farming systems in SSA are 80% composed of smallholder farms (farm size <10 ha) with low N application as organic and/or synthetic fertiliser [24,25] and thus completely different from the highly intensified larger agricultural production system found in the temperate zones in developing countries. In some countries such as Burkina Faso, farmers receive support from governments or aid organisations for the use of mineral N fertiliser in order to boost crop production [26]. N 2 O emissions from the African agricultural sector are considered to represent approximately 6% of the global anthropogenic N 2 O emissions [27]. However, agricultural activity in Africa has quickly developed over the last two decades, involving an increasing use of synthetic N, while remaining low compared to other regions of the world [26]. Projections for the period 2000-2050 based on the four Millennium Ecosystem Assessment (MEA) scenarios coupled with the spatially explicit Integrated Model to Assess the Global Environment (IMAGE) [28] predict that an increase in N use and land-use change (conversion of forest/grassland into agricultural field) is expected to cause a significant rise in N 2 O emissions from the African agricultural sector by 2050 compared to 2010, corresponding to an increase of 0.5 to 0.8 TgN yr −1 [27,29]. This increase in soil N availability can therefore lead to substantially higher N 2 O emissions than those in low-N farming systems [5]. In addition, a recent study predicted an increase in N 2 O emissions from agriculture in SSA if currently existing yield gaps are being closed and that it might result in a doubling of the current anthropogenic N 2 O emissions [30]. Moreover, the current acceleration of savannas' conversion to cropland may increase N 2 O emissions from these lands in the future [11,31]. 
N 2 O emission reduction is considered as a way to mitigate climate change, in particular in the agricultural sector [32], but assessing the weight of N 2 O emissions from African crops in the global N 2 O emission burden is very difficult. Kim et al. (2015) made a synthesis of available data on GHG emissions (CO 2 , CH 4 and N 2 O) from natural ecosystems and agricultural lands in SSA and reported considerable research gaps for the continent, an observation shared by Pelster et al. (2017) and Leitner et al. (2020) [25,29,30]. This study aims at improving the understanding of N 2 O emissions for two typical ecosystems found in SSA: a grassland located in Senegal characterised by a semi-arid climate (Dahra) and an agricultural area in the Lake Victoria basin in Kenya with an equatorial climate (Mbita). The objectives of this paper are to quantify and compare these two SSA ecosystems in terms of N 2 O emissions by assessing the impacts of (i) hydro-meteorological conditions at seasonal and daily scales and (ii) land-use intensity. Field measurements were conducted to estimate the N 2 O emissions in both regions. Moreover, a modelling approach was applied to simulate N 2 O emissions at the Dahra site with the Sahelian Transpiration Evaporation and Productivity (STEP)-GENeral DEComposition (GENDEC) model [33,34], in which the denitrification module of the DeNitrification-DeComposition (DNDC) model [35] was adapted (named STEP-GENDEC-N 2 O) to propose an annual budget of N 2 O emissions from soils to the atmosphere for the Dahra site. This model was not applied to the Mbita region as it is developed only for Sahelian conditions and because crops and vegetation species present in the plots in Mbita are not included in the model settings. Moreover, as there are eight different plots in the Mbita region with only two daily N 2 O emissions measurements available for each of them, it would have been impossible to validate the model for each plot configuration. Dahra Rangeland Station The Dahra field station has been in operation since 2002 and is managed by the University of Copenhagen. It is part of the Centre de Recherches Zootechniques (CRZ) and is located in Senegal (Ferlo) in West Africa (15 • 24 10 N, 15 • 25 56 W, elevation 40 m). This site is a semi-arid savanna used as a grazed rangeland. The soils are mainly sandy with low N and C content and neutral pH ( Table 1). The region is influenced by the West African Monsoon (cool, wet, southwesterly wind) during the rainy season and the Harmattan (hot, dry, northeasterly wind) during the dry season. The rainy season occurs between July and October. Annual average rainfall is 416 mm as calculated from historical data collected between 1951 and 2013 (Table 1) and was 356 mm in 2013 and 412 mm in 2017. Tree cover is around 3% of the surface, with the most abundant species being the Balanites aegyptiaca, Accacia tortilis and Acacia Senegal [36]. The herbaceous vegetation is dominated by annual C4 grasses (e.g., Dactyloctenium aegyptum, Aristida adscensionis, Cenchrus biflorus and Eragrostis tremula) [36]. Livestock is mainly composed of cattle, sheep and goats. Grazing is permanent and occurs year-round [37]. The site was previously described in Tagesson et al. (2015b) and Delon et al. (2017Delon et al. ( , 2019 [33,38,39]. 
The 2017 field campaign was part of the French national program called Cycle de l'Azote entre la Surface et l'Atmosphère en afriQUE (CASAQUE-Nitrogen cycle between surface and atmosphere in Africa), and results of the 2013 field campaigns were already published in Delon et al. (2017) [39]. Three field campaigns were carried out to quantify N 2 O emissions. The campaigns lasted eight days in July 2013 (11)(12)(13)(14)(15)(16)(17)(18) July 2013), ten days in November 2013 and eight days in September 2017 (21-27 September 2017). Mbita Cropland Region The Mbita region is composed of eight plots with different vegetation, soil characteristics and fertilisation input. Five out of these eight plots are experimental plots located within the International Centre of Insect Physiology and Ecology (ICIPE) close to the Victoria Lake in Kenya and will be referred to as ICIPE from here onwards. Each of the five plots has a size of~12 × 12 m 2 and receives 100 kg N ha −1 yr −1 of either di-ammonium phosphate or calcium ammonium nitrate fertiliser during the two rainy seasons. Crops are regularly watered by overhead irrigation sprinklers. These plots are annually cultivated with maize (2 plots, referred to as Maize 1 and Maize 2), sorghum (1 plot, referred to as Sorghum), napier grass (1 plot, referred to as Napier Grass) and permanent grassland (1 plot, referred to as Grassland). The grassland plot has been a permanent non-cultivated grassland since 2010. The three other plots are cultivated fields located around the city of Kisumu: one with maize in Fort Ternan (referred to as Maize Fort Ternan, with no fertilisation), a 20 years tea plantation in Kaptumo (referred to as Tea Field Kaptumo) with NPK fertilisation of 50 kg N ha −1 yr −1 and a sugar cane plot in Kisumu (referred to as Sugar Cane Kisumu) with no fertilisation (Figure 1). These crops are representative of the main country cultivated crops since they represent 54% of the total cultivated areas [40]. Two field campaigns were carried out to quantify daily N 2 O emissions. The first one lasted nine days in January 2018 (17-25 January 2018) during the dry season and the second one ten days in November 2018 (29 October-07 November 2018) during the rainy season for the purpose of the CASAQUE program. The measurement plots in the Mbita region were located in a region of mixed crops and grassland. All sites in the Mbita region are characterised by an equatorial climate with two distinct rainy seasons throughout the year: one long rainy season from March to May and a shorter one from November to December (five months in total), often referred as long and short rain seasons. The average rainfall and temperature are 1100 mm and 24 • C in ICIPE from 1986 to 2018 (Table 1). In 2018, ICIPE experienced a rainfall of 1070 mm and an average temperature of 20 • C. All four locations (ICIPE, Fort Ternan, Kaptumo and Kisumu) have clay soils with high N and C content (Table 1). Soils in ICIPE, Fort Ternan and Kisumu have a relative neutral pH whereas the soil in Kaptumo has a low (acid) pH (Table 1). Hydro-Meteorological Data In Dahra, the available hydro-meteorological variables used for this study are rainfall (mm), air and soil temperature (°C) and soil water content (SWC) (%) at 0.05, 0.10 and 0.30 m depth. These variables were collected every 30 s and averaged and saved every 15 min (or sum for rainfall). All details on sensors and measured variables are given in Tagesson [33,38]. 
Due to technical issues in 2017, rainfall was measured by a manual rain gauge with direct reading. This rain gauge was installed in the frame of the International Network to study Deposition and Atmospheric chemistry in Africa (INDAAF) (https://indaaf.obs-mip.fr/). The gauge is set up one meter above the soil surface according to the World Meteorological Organization (WMO) Hydro-Meteorological Data In Dahra, the available hydro-meteorological variables used for this study are rainfall (mm), air and soil temperature ( • C) and soil water content (SWC) (%) at 0.05, 0.10 and 0.30 m depth. These variables were collected every 30 s and averaged and saved every 15 min (or sum for rainfall). All details on sensors and measured variables are given in Tagesson et al. (2015b) and Delon et al. (2019) [33,38]. Due to technical issues in 2017, rainfall was measured by a manual rain gauge with direct reading. This rain gauge was installed in the frame of the International Network to study Deposition and Atmospheric chemistry in Africa (INDAAF) (https://indaaf.obs-mip.fr/). The gauge is set up one meter above the soil surface according to the World Meteorological Organization (WMO) recommendations. It is graded from 0 to 150 mm, with a minimum resolution of 0.25 mm. Reading of the manual gauge was done every day at 8:00 a.m. by a local partner. In the Mbita region, soil temperatures were not measured during the field campaigns due to probe unavailability. SWC was measured manually with a thetaprobe ML3 (Delta-T Devices Ltd., 1% precision) at an hourly time step during the day. Rainfall was measured using the same protocol as in Dahra during 2017, with a manual rain gauge read daily by a local collaborator. For the purpose of the study, WFPS was calculated as follows and used in the modelling process of N 2 O emissions at the Dahra site: with D a the soil actual density (D a = 2.6 g cm −3 ) and D b the soil bulk density, calculated from the dry weight of soil samples collected in situ (see Section 2.2.3 for details) with a cylinder of known volume (290 cm 3 ) and dried at 40 • C for 24 h. D b = dry soil mass/total volume was 1.50 ± 0.06 g cm −3 for Sahelian soils. The use of WFPS values instead of SWC includes the concept of soil porosity together with soil humidity so it is a way to indicate how soil pores are saturated or not and to compare more objectively denitrification thresholds between ecosystems. For both regions, all hydro-meteorological variables were averaged to daily values for the purpose of this study. N 2 O Chamber Emission Measurements In both regions, N 2 O emissions were measured using a stainless-steel chamber with a base of 0.20 m × 0.40 m and a height of 0.15 m following the static chamber method (non-flow through non-steady state). The chamber was placed on a frame that was inserted 10 cm deep in the soil and sealed by a slot filled with water to keep the chamber airtight. Emission measurements were carried out at least three times a day: in the morning (10:00-12:00 a.m.), around noon (12:00 a.m-02:00 p.m) and in the late afternoon (04:00 p.m-06:00 p.m.). For the field campaigns at Dahra in 2013, samples of the chamber headspace gas were removed with a syringe through a rubber septum at times 0, 15, 30 and 45 min after placing the chamber on the frame. Air samples (in duplicate) were injected from the syringe into 10 mL glass vials that contained 6M NaCl solution capped with high-density butyl stoppers and aluminium seals. 
When injected, the air sample removed almost all the solution from the vials (a small quantity was kept inside), and the vials were kept upside down to ensure airtightness. For the field campaign in Dahra 2017 and the field campaigns in Mbita 2018, air samples from the chamber were collected with a syringe through a rubber septum at times 0, 20, 40 and 60 min after placing the chamber on the frame. Samples (in duplicates) were transferred into 12 mL pre-evacuated glass vials (Exetainer, Labco, UK). Samples from the campaigns in Dahra were analysed by gas chromatography 2-3 weeks after the field campaign at Laboratoire d'Aérologie, (Toulouse, France) while samples from Mbita sites were analysed at the Mazingira Centre (International Livestock Research Institute, Nairobi, Kenya) the week after the field campaign following the analytical protocol described in Zhu et al. (2019) [41]. Analyses from both laboratories were carried out following the same protocol with N 2 O partial pressures determined by Gas Chromatography (GC) with an SRI 8610C gas chromatograph (SRI, Torrance, CA, USA) equipped with an electron capture detector. For every campaign, N 2 O emission calculations were defined from the slope of the linear regression of gas samples concentration over time. The calculation was accepted if the coefficient of determination R 2 estimated from the linear regression was above 0.80 for the 2017 campaign in Dahra and the 2018 campaigns in Mbita. For the data already published from the 2013 campaigns in Dahra, the quality criteria are described in Delon et al. (2017) [39]. Details about the calculation method are given in Assouma et al. (2017) [37]. Soil Characteristics (Texture, pH, N and C Content) At each sampling location of N 2 O emission measurements, samples of soil were collected from 5-10 cm depth to assess biogeochemical characteristics. Samples from Dahra 2017 and Mbita 2018 were collected for determination of texture, ammonium (NH 4 + ) and nitrate (NO 3 − ) concentrations, C/N ratio, total C, total N and pH. Samples were frozen immediately after collection and kept frozen during transportation to France. Samples from Mbita 2018 were further analysed by GALYS Laboratory (http://www.galys-laboratoire.fr/, NF EN ISO/CEI 17025: 2005). Soil texture was determined according to norm NF X 31-107 without decarbonation. Organic carbon and total nitrogen were determined as defined in norm NF ISO 10694. Total soil carbon content was transformed into CO 2 , which was then measured by conductibility. Soil mineral and organic nitrogen content were measured following norm NF ISO 13878: the samples were heated at 1000 • C with O 2 , and the products of combustion or decomposition were reduced in N 2 . N 2 was then measured by thermal conductibility (katharometer). Determination of pH was undertaken following the norm NF ISO 10390 with soil samples stirred with water (ratio 1/5) ( Table 1). Samples from Dahra 2013 were also analysed at GALYS Laboratory following the same protocol [39]. Samples from the Dahra 2017 field campaign were analysed by the Laboratoire des Moyens Analytiques (US IMAGO-LAMA certified ISO9001:2015), Institut pour la Recherche et le Dévelopement (IRD) in Dakar (http://www.imago.ird.fr/moyens-analytiques/dakar.) Organic carbon was determined using the method of Walkey and Black (1934). pH was determined with soil samples stirred with water (1/2.5 w/v). 
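The chamber flux computation described earlier in this subsection (slope of a linear fit of headspace concentration over time, accepted when R2 exceeds 0.80) can be written compactly. The sketch below is not the exact routine of Assouma et al. (2017); it assumes an ideal-gas conversion with the chamber geometry given above (0.20 m x 0.40 m base, 0.15 m height) and uses hypothetical concentration readings and air temperature.

# Illustrative sketch (Python): static-chamber N2O flux from a linear regression.
# Concentrations and air temperature are hypothetical; conversion assumes ideal gas.
import numpy as np
from scipy.stats import linregress

t = np.array([0.0, 20.0, 40.0, 60.0]) * 60.0        # sampling times (s)
c = np.array([330.0, 338.0, 347.0, 354.0])          # N2O mixing ratio (ppb = nmol mol-1)

fit = linregress(t, c)
if fit.rvalue ** 2 < 0.80:
    raise ValueError("regression rejected: R2 below the 0.80 acceptance threshold")

area   = 0.20 * 0.40            # chamber base area (m2)
volume = area * 0.15            # chamber headspace volume (m3)
p, temp_k, r_gas = 101_325.0, 298.15, 8.314
n_air = p * volume / (r_gas * temp_k)               # moles of air enclosed

# slope [nmol N2O per mol air per s] * mol air / m2 = nmol N2O m-2 s-1;
# multiplying by 28 g N per mol N2O (two N atoms) gives ngN m-2 s-1.
flux_ngN = fit.slope * n_air / area * 28.0
print(f"slope = {fit.slope:.4f} ppb s-1, R2 = {fit.rvalue**2:.3f}, "
      f"flux = {flux_ngN:.2f} ngN m-2 s-1")

With these placeholder readings the computed flux is of the order of 1 ngN m-2 s-1, i.e., within the range reported for the Dahra site.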
Total carbon and nitrogen contents were determined by the Dumas method [40] on 0.2 mm ground, 100 mg aliquots according to ISO 10694:1995 for carbon and ISO 13878:1998 for nitrogen using a CHN elemental analyser (Thermo Finnigan Flash EA1112, Milan, Italy). Mineral and organic nitrogen contents were extracted with a 1M KCl solution in a ratio 1/5 (w/v), then quantified by a colorimetric method (Table 1). Additionally, data on soil NO3- availability in July 2012 were taken from Delon et al. (2017). A summary of information on the laboratories involved in soil and N2O analysis is reported in Table A1.

Statistical Methods

The model performance and the relationship between N2O emissions and key drivers were evaluated using the determination coefficient (R2), computed as the square of the Pearson correlation coefficient, and the Root Mean Squared Error (RMSE) of the differences between modelled and measured values. The uncertainty of the annual N2O budgets estimated with STEP-GENDEC-N2O was calculated based on the standard deviation of the error between observed and simulated values over the whole period as follows:

σannual = 365 × σerror,

where σannual is the annual N2O budget uncertainty and σerror is the standard deviation of the error between observed and simulated values. σerror was multiplied by 365 to apply this uncertainty to a whole year, and as few observed data were available, a unique uncertainty was defined for all years.

STEP-GENDEC Model

A modelling approach to simulate N2O emissions from the Dahra site was conducted using the STEP-GENDEC coupled model. STEP is an ecosystem process model developed for Sahelian herbaceous savannas [33,34,42] and was only applied to the Sahelian site of Dahra. This model aims at estimating the temporal variation of the main variables and processes associated with vegetation functioning in Sahelian savannas at the local or regional scale [43]. STEP was coupled with GENDEC, which simulates organic matter decomposition, interactions between litter (C and N transfer), decomposer microorganisms' activities, microbial dynamics and C and N pools [44]. The coupled model was forced with standard meteorological data from site measurements (precipitation, global radiation, air temperature, relative humidity and wind speed). N in the model is split between different pools representing dead organic matter, living microbial biomass and soil N content [42]. Soil temperature is simulated by the model from air temperature [45] and SWC is calculated following the tipping-bucket approach [46]. More details on equations and initial parameters specific to the Dahra site are available in Delon et al. (2019) [33].

N2O Emission Module in STEP-GENDEC

A module of N2O production and emission by nitrification and denitrification was coupled to STEP-GENDEC from DNDC's equations adapted from Yuexin Liu (1996) and Li et al. (2000) to create the STEP-GENDEC-N2O model [9,35]. The entire module and adapted equations are available in Appendix D. As STEP-GENDEC simulates the soil NH4+ and NO3- pools, N2O production in the module depends on soil NO3- content, SWC, soil temperature, pH, clay content, total soil carbon and total soil microbial carbon mass. A standard microbial C:N ratio of 10 was chosen for the site based on measurements (Table A2). This value is consistent with values reported in studies on various ecosystems, soils and climates, reporting C:N ratios ranging from 4.5 to 16.5, depending on the season [47,48].
The DNDC model was developed and tested on sites located in temperate climate conditions, where the processes of N2O production and emission operate under different conditions than in semi-arid climates. Therefore, an adaptation of the module and its parameters was necessary. The model STEP-GENDEC-N2O was otherwise applied with the default settings of Yuexin Liu (1996), except for the adaptations described in Section 4.1.

Mbita Region Measurements

Results from the Mbita region are presented by plot type (instead of as time series), as a different plot was monitored each day during the campaigns. There were no rain events the week before the January campaign, whereas 30 mm of rain were recorded during the week prior to the November campaign (Figure 2). During the Mbita campaigns, N2O emissions varied from 0.1 ± 0.3 to 7.4 ± 2.0 and from 0.0 ± 0.0 to 14.0 ± 6.0 ngN m-2 s-1 during the dry and rainy season, respectively, except at the Sugar Cane Kisumu field, where the highest N2O emissions were measured, with 19.0 ± 3.0 and 42.0 ± 11.0 ngN m-2 s-1, respectively, for the dry and rainy season (Figure 3). This plot had the maximum clay content (42%) and minimum sand content (20%) compared to the other sites in the Mbita region, which were characterised by soil clay and sand contents ranging from 30 to 38% and from 41 to 46%, respectively. The lowest N2O emissions were measured at the Maize Fort Ternan plot with 0.4 ± 0.7 and 0.0 ± 0.0 ngN m-2 s-1 in January and November, respectively. SWC varied from 9 to 25% and from 19 to 44% during the dry and rainy seasons, respectively, and was systematically higher in November (rainy season) than in January (dry season) for all plots (Figure 3). N2O emissions were on average larger in November than in January, with 11.3 ± 5.9 and 5.1 ± 1.5 ngN m-2 s-1, respectively. The Mbita soil NH4+ and NO3- contents between 0 and 20 cm were generally higher in November than in January (Table A2). Among the different plots, soil NH4+ content varied from 1.5 ± 0.2 mgN kgsoil-1 (Sorghum) to 4.8 ± 0.5 mgN kgsoil-1 (Tea Field Kaptumo) during the January campaign, with an average of 2.9 ± 1.2 mgN kgsoil-1 for all plots, and from 1.6 mgN kgsoil-1 (Tea Field Kaptumo) to 11.3 mgN kgsoil-1 (Maize Fort Ternan) during the November campaign, with an average of 5.9 mgN kgsoil-1 for all plots. Soil NO3- content ranged from 0.2 ± 0.04 mgN kgsoil-1 (Napier Grass) to 3.7 ± 1.7 mgN kgsoil-1 (Maize Fort Ternan) during the January campaign, with an average of 1.5 ± 0.7 mgN kgsoil-1, and from 1.0 mgN kgsoil-1 (Napier Grass) to 17.4 mgN kgsoil-1 (Grassland) during the November campaign, with an average of 4.7 ± 5.3 mgN kgsoil-1 for all plots. No correlation or trend was found between N2O emissions and soil NH4+ content. Furthermore, no significant relationships were found at the daily scale, neither between N2O emissions and SWC nor between N2O emissions and soil NO3- content (R² = 0.00 and 0.04, respectively).

Dahra Site Measurements

Cumulative rainfalls of 24, 0.3 and 12 mm were observed during the weeks preceding the campaigns of July 2013, October 2013 and September 2017, respectively, in Dahra. SWC was low throughout all the campaigns, ranging from 3.3 to 9.8%, with the lowest values measured during November 2013 and the highest values measured during September 2017 (Figure 4).
N2O emissions in Dahra ranged from 0.3 ± 0.0 to 7.4 ± 6.5 ngN m-2 s-1, with September 2017 showing the lowest emissions and the highest SWC (Figure 4). Soil NH4+ and NO3- contents were also low compared to Mbita soils (Table A2). A linear relationship between soil temperature at 5 cm depth and N2O emissions was established, with an R² of 0.44 and a p-value of 0.06, showing that N2O emissions increased with soil temperature in Dahra (Figure 5) when data from all Dahra campaigns were combined. However, no significant relation was found between SWC at 5 cm depth and N2O emissions (R² = 0.00) on a daily scale. Not enough data on soil NO3- and NH4+ content (n = 7 and 8, respectively) were available to fit a relationship with N2O emissions.

Modelling N2O Emissions at the Dahra Site

Modifications of some initial parameters to adapt the denitrification module of DNDC to the semi-arid conditions of the Dahra site were necessary, as the default settings did not correctly reproduce the observed N2O emissions. (1) The WFPS threshold in DNDC above which denitrification can operate is fixed at 40%, a value never reached in Dahra. A set of different WFPS thresholds from 0 to 40% was tested to find the most appropriate value, and a 9% threshold value was kept for the simulations as it gave the lowest RMSE and highest R2. (2) After these tests, the influence of the CON and SYN variables (see Appendix D) was decreased to fit the simulation results to the observed N2O emissions. CON and SYN represent processes acting on denitrification co-products; therefore, the reduction applied to these values in the present study decreases the amplitude of the conversion and synthesis processes and allows the parameterisation to be adapted to semi-arid climate conditions.

Soil Water Content Modelling

Before simulating the N2O emissions from the Dahra site, a validation of the soil water content estimation was undertaken. Strong relationships were found between SWC measured at 5 cm depth and simulated in the 0-2 cm layer (Figure 6a,c) and between SWC measured at 10 cm depth and simulated in the 2-30 cm layer (Figure 6b,d), with R2 values of 0.65 and 0.67 and RMSEs of 1.61 and 1.67%, respectively. Differences between simulated and measured SWC at 5 cm depth are mainly due to the difference between the depths where SWC was estimated: layers 0-2 cm/2-30 cm in the simulation, depths 5/10 cm in the measurements. Measurement depths and model layers differ because modifying the partition of soil layers in the hydrological module would have impacted the SWC validation. At both depths, SWC temporal dynamics and ranges are comparable between measurements and simulations. The thresholds observed in the simulations (Figure 6a,b) are explained by the way STEP calculates SWC.
Indeed, the model follows the tipping-bucket approach: when the field capacity is reached in a layer, the water in excess is transferred to the next layer and SWC is limited to the field capacity value [33].

Simulated N2O emissions were compared with the chamber observations (Figure 8). In general, the relationship between the simulation and the observations is weak, with an R² of 0.36 and an RMSE of 2.5 ngN m-2 s-1, mostly caused by the differences found in November 2013; if the results from this campaign were removed from the dataset, the R² would rise significantly to 0.68 (p-value = 0.006). The low N2O emissions simulated during November 2013 are due to simulated WFPS under 9%, which does not allow denitrification to happen. Simulated N2O emissions follow a clear seasonal pattern, with the most notable N2O emissions occurring during the rainy season and the lowest during the dry season. The rise of simulated N2O emissions during the rainy season is due to WFPS values above the 9% threshold required to trigger denitrification, combined with a rise of soil NO3- content, which results from increasing soil biological activity, microbial growth and an increase of organic matter decomposition.
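The tipping-bucket water redistribution described above can be illustrated with a minimal sketch; the two-layer setup, field capacities and rain amount below are placeholder values for illustration, not the actual STEP parameterisation.

    def tipping_bucket(rain_mm, swc, field_capacity, layer_depth_mm):
        # Distribute incoming water over soil layers, top to bottom.
        # swc and field_capacity are volumetric fractions (0..1) per layer,
        # layer_depth_mm gives the layer thicknesses in mm. Water exceeding a
        # layer's field capacity tips into the layer below; whatever exceeds
        # the last layer is returned as drainage.
        water = rain_mm
        for i, depth in enumerate(layer_depth_mm):
            capacity_mm = field_capacity[i] * depth     # maximum storable water
            stored_mm = swc[i] * depth                  # currently stored water
            room_mm = max(capacity_mm - stored_mm, 0.0)
            added = min(water, room_mm)
            swc[i] = (stored_mm + added) / depth        # SWC capped at field capacity
            water -= added                              # excess passed to the next layer
        return swc, water                               # leftover water = drainage

    # Example with a 0-2 cm and a 2-30 cm layer (depths in mm), placeholder values
    swc, drainage = tipping_bucket(12.0, [0.05, 0.08], [0.12, 0.15], [20.0, 280.0])

The sketch only reproduces the bookkeeping of the scheme; in STEP the field capacities, layer structure and evapotranspiration terms are site-specific [33,46].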
Based on the STEP-GENDEC-N2O simulation, N2O emissions occurring during the rainy season (between 1 July and 31 October) represent between 81 and 97% of the total annual N2O budget depending on the year (Table 2), with mean values of 0.27 and 0.03 kgN ha-1 yr-1 for the rainy and dry season, respectively. An annual N2O budget uncertainty of 0.04 kgN ha-1 yr-1 was calculated for the simulation estimates based on the methodology described in Section 2.3.

Annual N2O Budget Calculation in Mbita and Dahra

Savanna/grassland ecosystems in Africa would emit 0.2 ± 0.03 TgN of N2O per year based on the mean annual N2O budget of 0.3 ± 0.04 kgN ha-1 yr-1 (Table 2) simulated by STEP-GENDEC-N2O, if the Dahra site is considered representative of these ecosystems, which cover a surface of 640 Mha in Africa [11]. An annual N2O budget of the Mbita region may be extrapolated from our measurements if the Mbita sites are considered representative of Kenya's agricultural plots (the types of crops cultivated on the experimental plots in the Mbita region represent more than 50% of the crops cultivated in Kenya). To do so, the average value measured in November (11.3 ± 4.7 ngN m-2 s-1) was applied to the rainy seasons (5 months, 153 days), whereas the one measured in January (5.1 ± 1.2 ngN m-2 s-1) was applied to the dry seasons (7 months, 212 days). This resulted in a rough N2O annual budget estimate of 2.4 ± 0.9 kgN ha-1 yr-1. As there is no information on the land use types in the region, it was impossible to refine this estimation by weighting the calculation according to the area covered by each crop. Based on this calculation, African agricultural lands would therefore emit 1.2 ± 0.4 TgN of N2O per year, considering that the Mbita region is representative of croplands in Africa, which cover a surface of 480 Mha [11].

The Dahra results can also be compared with previous measurements and simulations reported for dryland sites [26,51]. These studies showed an emission maximum at the beginning of the rainy season, which is consistent with our simulation results, with peaks reaching a maximum of 15 ngN m-2 s-1; in contrast, no N2O emission peak was observed in our measurements after a rain event, probably due to a too-short campaign duration, as the rainfall impact on N2O emissions may be delayed [52,53]. Low N2O emissions at Dahra compared to Mbita sites can be explained by contrasting climate and soil properties: Dahra has a mean annual rainfall three times lower than Mbita and lower soil N, C, clay, NH4+ and NO3- content (Table 1).
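The budget extrapolation above can be checked directly from the reported mean fluxes, season lengths and areas; the short sketch below is only a worked verification of those numbers, not new data.

    NG_TO_KG = 1e-12            # 1 ngN = 1e-12 kgN
    SECONDS_PER_DAY = 86400
    M2_PER_HA = 1e4

    def seasonal_budget_kgN_per_ha(flux_ngN_m2_s, n_days):
        # Integrate a mean flux (ngN m-2 s-1) over a season, in kgN ha-1
        return flux_ngN_m2_s * NG_TO_KG * SECONDS_PER_DAY * n_days * M2_PER_HA

    # Mbita: 11.3 ngN m-2 s-1 over 153 rainy days + 5.1 ngN m-2 s-1 over 212 dry days
    mbita_annual = (seasonal_budget_kgN_per_ha(11.3, 153)
                    + seasonal_budget_kgN_per_ha(5.1, 212))   # ~2.4 kgN ha-1 yr-1

    # Scaling to continental areas (1 TgN = 1e9 kgN)
    croplands_TgN = mbita_annual * 480e6 / 1e9                # ~1.2 TgN yr-1
    savanna_TgN = 0.3 * 640e6 / 1e9                           # ~0.2 TgN yr-1
    print(round(mbita_annual, 2), round(croplands_TgN, 2), round(savanna_TgN, 2))

Running the sketch reproduces the rounded values of 2.4 kgN ha-1 yr-1, 1.2 TgN yr-1 and 0.2 TgN yr-1 quoted in the text.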
Key Drivers of N2O Emissions

SWC seems to be an important driver affecting N2O emissions at the seasonal scale according to this study, even though SWC shows no correlation with emissions at a daily time-scale in either region. Indeed, emissions during the rainy season were larger than those measured during the dry season (Figure 3), especially in Mbita. However, this observation cannot be strictly applied to the September 2017 campaign at Dahra, where the average measured SWC was the largest but the average N2O emissions were the lowest (Figure 4). The absence of correlation between SWC and N2O emissions at a daily time-scale was also reported by Aronson et al. (2019) for a semi-arid coastal grassland in California (R2 < 0.05) [54]. A low correlation between SWC and daily N2O emissions was likewise found by Huang et al. and at Sorø (Denmark) [56,57]. The relationship found between soil temperature and N2O emissions at Dahra (Figure 5) indicates that this parameter could also play an important role in N2O emissions from savanna ecosystems, although with low reliability (0.05 < p ≤ 0.1). In Mbita, no conclusion can be drawn from the measurements, as soil temperature was not measured in January 2018. However, soil temperature is not likely to explain N2O emissions in Mbita, as it does not vary much during the year, as shown by a climatological study of soil temperature, with an average of 29.3 ± 2.5 °C at 10 cm soil depth (Bakayoko, pers. com.). No convergence on the role of temperature on N2O emissions is found in the literature, because the role of temperature in denitrification and nitrification processes is difficult to assess and gives contradictory results depending on the study [19,58-60]. In the Mbita region, the highest N2O emissions were measured in the Sugar Cane Kisumu field during both campaigns, despite the fact that this plot received no fertilisation input. This could be due to this plot having the highest clay and lowest sand contents among the measured sites. This observation is consistent with the studies of Bouwman et al. (2002) and Rochette et al. (2018), who found that a soil with a fine texture like clay, which favours anaerobic soil conditions, emits more N2O than soils with medium and coarse textures [61,62]. This result shows that basing the estimation of N2O emissions only on N input, as is the case for the IPCC Tier 1 calculation method, can lead to significant uncertainties [63]. The poor correlations found between N2O emissions and the key drivers at a daily time-scale can be explained by the non-linear relationships between N2O emissions and their drivers, as reported in several studies [2,64], especially in climates with distinct dry and rainy seasons characterised by pronounced changes in SWC.

N2O Emissions from Semi-Arid Sites

At the Dahra site, N2O emissions occur and can be larger than those measured in the Mbita region, even with the very low SWC in Dahra (Table A2). Observed and simulated WFPS in Dahra reached a maximum value of 30%, which is, in theory, considered to be too low to trigger denitrification, or to be the minimum WFPS from which denitrification may begin to appear [51]. The STEP-GENDEC-N2O simulation results are thus consistent with the 'Birch' effect [22] leading to peaks of N2O emissions at the beginning of the rainy season. Pelster et al. (2017) also measured N2O emissions ranging from 0 to 27 ngN m-2 s-1 at their study sites, even though WFPS never exceeded 40% [25].
Indeed, some studies showed that N2O emissions from natural and agricultural tropical ecosystems in SSA have an important seasonal trend, with a peak of emission at the beginning of the rainy season and just after the dry season that could be due to the 'Birch' effect [22,26,65]. Emission pulses during this transition period were also reported by Pelster et al. [25,26]. The N2O emissions measured at the Dahra site under WFPS values far below the generally recognised theoretical denitrification threshold could also be explained by variations in the denitrifier community, which can change with climate, soil and vegetation type [66,67]. As highlighted by Butterbach-Bahl et al. (2013) [2], denitrification can be classified as a microbiologically 'broad process', which can be conducted by a wide array of microbes.

DNDC Denitrification Module Adaptation to Semi-Arid Conditions

The WFPS threshold was set to 9% in the model (Section 4.1) to allow N2O emissions in semi-arid conditions, contrary to the original 40% set by Yuexin Liu (1996) [35]. Even if this threshold appears too low to trigger a denitrification process in theory, it seems that other kinds of N2O production and emission processes take place in such dry ecosystems, which supports this modification. Zheng et al. (2019) also measured N2O emissions ranging from 0 to 14 ngN m-2 s-1 at two Tanzanian croplands, even though the WFPS was always below 40% [4]. As shown in Section 4.2, the pulse effect of N2O emissions after rainfall and the increase of denitrification potential with climate aridity [21,22] support the modifications of the code to be able to simulate N2O emissions for semi-arid regions. The modifications of the amplitude of the conversion and synthesis factors (Section 2.4.2) are further supported by the occurrence of denitrification under different conditions than usually encountered in more humid areas.

The model underestimated the observed emissions during the November 2013 campaign, at the transition between the rainy and the dry seasons (Figure 8). This poor performance of the model during the transition has already been highlighted by Delon et al. (2019), who showed that STEP-GENDEC-NOFlux underestimated the CO2 and NO emissions during November 2013 due to a too-steep decrease in simulated SWC [33], suppressing the possibility for microbial processes to participate in respiration and nitrification in the soil. However, this transition period is short, and the low SWC during the dry season is well represented by the model the rest of the time. Our results, in accordance with those reported by Delon et al. (2019) [33], show that N2O production and emissions in semi-arid ecosystems do not react as in temperate ecosystems. Despite the November 2013 specificity, the simulation is of the same order of magnitude as the in situ measurements, with N2O emissions ranging from 0.1 to 5.2 and from 0.3 ± 0.0 to 7.4 ± 6.5 ngN m-2 s-1, respectively, over the campaign periods. On average, the mean N2O emissions in 2012-2017 from the model are 0.1 ± 0.7 and 2.3 ± 2.7 ngN m-2 s-1 in the dry and rainy seasons, respectively. As far as the authors know, modelling studies of N2O emissions in African soils, and more specifically in semi-arid soils, are not available in the literature, and this study is the first attempt to simulate an annual cycle of N2O emissions for a site in the Sahelian region.
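As an illustration of how the 9% WFPS threshold discussed above could be selected, a parameter sweep scored by RMSE and R² (the metrics of Section 2.3) can be sketched as follows; run_model is a hypothetical stand-in for a STEP-GENDEC-N2O run, and the real model interface differs.

    import numpy as np

    def rmse(obs, sim):
        return float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2)))

    def r2(obs, sim):
        # Square of the Pearson correlation coefficient, as in Section 2.3
        return float(np.corrcoef(obs, sim)[0, 1] ** 2)

    def annual_budget_uncertainty(obs, sim):
        # sigma_annual = 365 * standard deviation of the daily model-observation error
        return 365.0 * float(np.std(np.asarray(sim) - np.asarray(obs)))

    def select_wfps_threshold(obs_fluxes, run_model, thresholds=range(0, 41)):
        # Pick the denitrification WFPS threshold giving the lowest RMSE,
        # ties broken by the highest R2, against observed N2O fluxes.
        scores = []
        for t in thresholds:
            sim = run_model(wfps_threshold=t)   # hypothetical model call
            scores.append((rmse(obs_fluxes, sim), -r2(obs_fluxes, sim), t))
        return min(scores)[2]

The sketch only formalises the selection criterion quoted in Section 4.1 (lowest RMSE and highest R2 over the tested 0-40% range); it does not reproduce the model itself.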
Regional Scale N2O Budget Estimation

The N2O annual budgets reported here must be used with caution, as these calculations involve large uncertainties caused by the low number of measurements and the difficulty of the STEP-GENDEC-N2O model in representing emissions correctly at the transition between rainy and dry seasons at the Dahra site. Furthermore, the measurements used in this study consider low- or non-fertilised plots representative of smallholder farms and are not necessarily representative of the larger croplands that can also be found in the Lake Victoria catchment. These budgets may therefore be underestimated. However, the N2O annual budgets simulated by STEP-GENDEC-N2O at Dahra are in the same range as previous estimates of N2O emissions (Table A3) [68]. As presented in the results section, the N2O budget at the Dahra site is 0.3 ± 0.04 kgN ha-1 yr-1, considered here as representative of savanna/grassland ecosystems, and the N2O budget in the Mbita region is 2.4 ± 0.9 kgN ha-1 yr-1, considered here as representative of cropland in Africa. However, in the Mbita region, missing data from the long rainy season period cause large uncertainties in the N2O budget estimation, as the rainfall magnitude is more than three times higher during this period compared to the short rainy season (Figure 2). Therefore, if the positive effect of SWC on N2O emissions identified in this study is also relevant for the long rainy season, the annual N2O budget suggested in this study could be underestimated. These values are still of the same order of magnitude as those estimated by Kim et al. (2016), who calculated N2O annual budgets for savanna/grassland ecosystems and cropland in Africa of 0.4 ± 0.1 and 2.5 ± 0.9 kgN ha-1 yr-1, respectively (Table A3) [13]. Their estimations are based on the compilation of multiple studies on the continent. In their study, savanna/grassland ecosystems produced the lowest emissions when compared to the other ecosystem types, forest/plantation/woodland and agroforestry, which showed higher annual N2O budgets (Table A3) [27-29]. Therefore, the results of the present study (1.2 ± 0.4 TgN yr-1) are above those projected for 2050. Moreover, according to EDGAR (Emissions Database for Global Atmospheric Research) estimations of direct N2O emissions from managed soil in Africa for the year 2015 [69], based on the IPCC Tier 1 methodology [63], N2O emissions from the agricultural sector in Africa were 0.5 TgN in 2015. This estimation is less than half of the one calculated in this study and is equivalent to the IMAGE simulation for 2010. Two options may be suggested: (1) the budgets calculated here and by Kim et al. (2016) [11] may not be representative of the whole continent but rather, in the case of this study, of the agricultural lands in the equatorial belt, or (2) the two modelling approaches underestimate N2O emissions. Nonetheless, the EDGAR and IMAGE simulations of N2O emissions from agricultural soils in Africa remain approximately twice those from savanna/grassland ecosystems calculated in this study and by Kim et al. (2016) [11], which follows the same tendency as our results.

Conclusions

This paper provides new in situ measurements and modelling results from two contrasting ecosystems and contributes to expanding the currently insufficient database on N2O emissions for the African continent.
Experimental results are reported for cropland N2O emissions in Kenya and grassland N2O emissions in Senegal, along with simulated N2O emissions from savanna ecosystems. Measurements and simulations showed that N2O emissions present a seasonal variation and highlighted the importance of the rainy season for the emission amplitude in both regions. Indeed, SWC proved to be an important driver at the seasonal scale, with larger emissions during the rainy season for the Mbita-region sites and at the beginning of the rainy season for the Dahra site. No direct correlation on a day-to-day scale was found in either region, due to the highly non-linear character of N2O emissions and temporal lags between emissions and the underlying production processes in the soil. Modelling proved to be a useful tool to deepen our understanding of N2O emission processes at the Dahra site, highlighting the pulse effect at the beginning of the rainy season. The results presented in this study confirm that N2O emissions from savannas are lower than those from croplands in SSA, despite their larger areas, with 0.2 ± 0.1 TgN-N2O for savannas and 1.2 ± 0.4 TgN-N2O for croplands.

Appendix D. N2O Production and Emission by Denitrification

In the denitrification equations, CONi is the conversion coefficient of NO3- to NO2- (i = 1), NO2- to N2O (i = 2) and N2O to N2 (i = 3), and SYNi is the synthesis of NO3- (i = 1), NO2- (i = 2) and N2O (i = 3) by denitrifiers. Gi is the relative growth rate of denitrifier i (unitless), Yi is the maximum growth yield of i (unitless), Mi is the maintenance coefficient of i, fpH(i) is the reduction factor of CONi due to pH (unitless), Cdenit is the total carbon pool of denitrifiers (kgC ha-1 d-1), fTdenit = 2^((Tsoil - 45)/10) is the temperature reduction factor of denitrification (unitless), Pdenit_new = Gdenit · Cdenit is the newly formed microbial biomass of the denitrifier pool (kgC ha-1 d-1) and Gdenit = Σi Gi is the denitrifier growth rate (unitless). The N2O produced by this process of denitrification is called N2Odenit.
• N2O Emission Calculation: the N2O emission is then calculated from the N2O produced by nitrification and denitrification.
Strategies for array data retrieval from a relational back-end based on access patterns Multidimensional numeric arrays are often serialized to binary formats for efficient storage and processing. These representations can be stored as binary objects in existing relational database management systems. To minimize data transfer overhead when arrays are large and only parts of arrays are accessed, it is favorable to split these arrays into separately stored chunks. We process queries expressed in an extended graph query language SPARQL, treating arrays as node values and having syntax for specifying array projection, element and range selection operations as part of a query. When a query selects parts of one or more arrays, only the relevant chunks of each array should be retrieved from the relational database. The retrieval is made by automatically generated SQL queries. We evaluate different strategies for partitioning the array content, and for generating the SQL queries that retrieve it on demand. For this purpose, we present a mini-benchmark, featuring a number of typical array access patterns. We draw some actionable conclusions from the performance numbers. Introduction Many scientific and engineering applications involve storage and processing of massive numeric data in the form of multidimensional arrays. These include satellite imagery, climate studies, geosciences, and generally any spatial and spatiotemporal simulations and instrumental measurements [1], computational fluid dynamics and finite element analysis. The need for efficient representation of numeric arrays has been driving the scientific users away from the existing DBMS solutions [2], towards more specialized file-based data representations. The need for integrated and extensible array storage and processing framework supporting queriable metadata are our main motivations for the development of Scientific SPARQL: an extension of W3C SPARQL language [3] with array functionality. Within the Scientific SPARQL project [4][5][6][7][8], we define syntax and semantics for queries combining both RDF-structured metadata and multidimensional numeric arrays which are linked as values in the RDF graph. Different storage options are explored: our prior publications cover storing arrays in binary files [6], and a specialized array database [7]. In this work we focus on storing the arrays in a relational DBMS back-end. Scientific SPARQL Database Manager (SSDM) implements the query language, in-memory and external array storage, along with the extensibility interfaces. The software and documentation are available at the project homepage [9]. The contributions presented in this work are the following: • A framework for processing array data retrieval queries, which allows adaptive pattern discovery, pre-fetching of chunks from external storage; • A mini-benchmark featuring the typical and diverse array access patterns; • Evaluation of different array data retrieval strategies under different array data partitioning options and access patterns, and the conclusions drawn regarding the workload-aware partitioning options, suggestions for building array processing infrastructures, and estimates of certain trade-offs. In this work we consider an SSDM configuration where array data is partitioned into BLOBs in a back-end relational database. 
An SSDM server provides scalable processing of SciSPARQL queries which can be expressed both in terms of metadata conditions (pattern matching, filters) and functions over the numeric data (filters, aggregation). SciSPARQL queries allow specifying array projection, element and range selection operations over the arrays, thus defining (typically, sparse) access patterns to the dense multidimensional arrays. The techniques presented here are language-independent, and can be applied to the processing of any array query language which has these basic array operations. Typical queries which benefit from these optimizations are characterized by (1) accessing relatively small portions of the arrays, and (2) accessing array elements based on subscript expressions or condition over subscripts, rather than the element values. The rest of this paper is structured as follows: Sect. 2 summarizes the related work, in Sect. 3 we begin with an example, and then give an overview of the array query processsing technique, namely, array proxy resolution. Section 4 lists different ways (strategies) to generate SQL queries that will be sent to the DBMS back-end to retrieve the chunks, and Sect. 5 describes our Sequence Pattern Detector algorithm (SPD) used in one of these strategies. Section 6 offers the evaluation of different array storage and array data retrieval strategies using a mini-benchmark, featuring a number of typical array access patterns expressed as parameterized SciSPARQL queries. Section 7 concludes the results and suggests the way they can be applied in array storage and query processing solutions. Background and related work There are several systems and models of integrating arrays into other database paradigms, in order ot allow array queries which utilize metadata context. There are two major approaches: first is normalizing the arrays in terms of the host data model, as represented by SciQL [10], along with its predecessor RAM [11], where array is an extension of the relation concept. Data Cube Vocabulary [12] suggests a way to represent multidimensional statistical data in terms of an RDF graph, which can be handled by any RDF store. The second approach is incorporation of arrays as value types-this includes PostgreSQL [13], recent development of ASQL [2] on top of Rasdaman [14], as well as extensions to MS SQL Server based on BLOBs and UDFs, e.g. [15]. We follow the second approach in the context of Semantic Web, offering separate sets of query language features for navigating graphs and arrays. Surprisingly, many scientific applications involving array computations and storage do not employ any DBMS infrastructures, and hence cannot formulate array queries. Specialized file formats (e.g. NetCDF [16]) or hierarchical databases (e.g. ROOT [17]) are still prevalent in many domains. Parallel processing frameworks are also being extended to optimize the handling array data in files-see e.g. SciHadoop [18]. Storing arrays in files has its own benefits, e.g. eliminating the need for data ingestion, as shown by comparison of SAGA to SciDB [19]. The SAGA system takes a step to bridge the gap between fileresident arrays and optimizable queries. SciSPARQL also incorporates the option of file-based array storage, as presented in the context of its tight integration into Matlab [6]. 
Still, in the present technological context we believe that utilizing a state-of-the-art relational DBMS to store massive array data promises better scalability, thanks to cluster and cloud deployment of these solutions, and mature partitioning and query parallelization techniques. Regarding the storage approaches, in this work we explore two basic partitioning techniques: simple chunking of the linearized form of the array (which, we believe, is a starting reference point for any ad-hoc solution), and more advanced multidimensional tiling used e.g. in Rasdaman [20,21], ArrayStore [22], RIOT [23], and SciDB [19], which helps preserve access locality to some extent. We do not implement a language for user-defined tiling, as this concept has already been explored in Rasdaman [20]. While cleverly designed tiling increases the chances of an access pattern becoming regular, it still has to be designed manually and beforehand, with the expected workload in mind. With the SPD algorithm we are able to discover such regularity during query execution. In this work we only study sparse array access to dense stored arrays, and we use a relational DBMS back-end to store the chunks, in contrast to the stand-alone index data structures employed by ROOT [23] and ArrayStore [22]. Apart from utilizing a DBMS for scalability, this does not make much difference in finding the optimal way to access the array data. As the SAGA evaluation [24] has shown, even in the absence of SQL-based back-end integration, sequential access to chunks provides a substantial performance boost over random access.

The problem of retrieving array content

Scientific SPARQL, as many other array processing languages (Matlab, SciPy) and query languages [14,19,10,2] do, allows the specification of subarrays by supplying subscripts or ranges independently for different array dimensions. We distinguish the projection operations that reduce the array dimensionality, like ?A[i], selecting an (n-1)-dimensional slice from an n-dimensional array bound to ?A (or a single element if ?A is 1-dimensional), and range selection operations like ?A[lo:stride:hi]. All array subscripts are 1-based, and the hi subscript is included in the range. Any of lo, stride, or hi can be omitted, defaulting to index 1, a stride of 1, and the array size in that dimension, respectively. Let us consider a SciSPARQL query Q1 selecting equally spaced elements from a single column of a matrix, which is found as a value of the :result property of the :Sim1 node. We assume the dataset includes an RDF with Arrays triple containing a 10 × 10 array as its value; as shown in Fig. 1a, the subset retrieved by Q1 is hatched. In our relational back-end this array is stored in 20 linear chunks, containing 5 elements each (chunk ids shown on the picture). Figure 1b shows a variant of the same dataset, where the array is stored in 25 2 × 2 non-overlapping square tiles. The example (a) is used through the rest of this section, and we compare the two storage approaches in Sect. 6.

In our setting the RDF metadata triples have considerably smaller volume than the 'real' array data, so they can be cached in main memory to speed up matching and joining the triple patterns. Our problem in focus is querying the big ArrayChunks(arrayid, chunkid, chunk) table in the relational back-end, in order to extract data from the array. In general, we would like to (1) minimize the number of SQL queries (round-trips) to ArrayChunks, and (2) minimize the amount of irrelevant data retrieved.
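As a concrete illustration of this mapping, the sketch below computes which chunks or tiles of the 10 × 10 example array have to be fetched when every third element of one column is selected. Chunk and tile ids are assumed to be assigned in row-major order starting from 0, and the particular column and stride are illustrative choices rather than the exact parameters of Q1.

    def chunks_linear(rows, col, ncols=10, chunk_len=5):
        # Row-major linear chunking: element (r, c), 1-based, lands in
        # chunk floor(((r-1)*ncols + (c-1)) / chunk_len)
        return sorted({((r - 1) * ncols + (col - 1)) // chunk_len for r in rows})

    def tiles_square(rows, col, tile=2, tiles_per_row=5):
        # 2 x 2 square tiling: element (r, c) lands in
        # tile floor((r-1)/tile) * tiles_per_row + floor((c-1)/tile)
        return sorted({((r - 1) // tile) * tiles_per_row + (col - 1) // tile for r in rows})

    selected_rows = range(1, 11, 3)                 # rows 1, 4, 7, 10 of one column
    print(chunks_linear(selected_rows, col=3))      # [0, 6, 12, 18] -> 4 linear chunks
    print(tiles_square(selected_rows, col=3))       # [1, 6, 16, 21] -> 4 square tiles

Only the listed chunks need to be requested from ArrayChunks; how the corresponding SQL queries are formulated is the subject of the following sections.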
Array query processing overview

There are a number of steps to be performed before the back-end is queried for the actual array content:
• Identifying the set of array elements that are going to be accessed while processing an array query. Such sets of elements are described with bags of array proxy objects, which represent derived arrays or single elements stored in an external system. We refer to the process of turning array proxies into in-memory arrays as Array Proxy Resolution (APR).
• The array proxies accumulate array dereference and transposition operations. An enumerable set of array proxies can be generated using free index variables, as shown by QT4 in Table 2.
• Identifying fragments of the derived array to be retrieved that are contiguous in the linearized representation of the original array, in order to save on the number of data-transfer operations.
• Identifying the array chunks that need to be retrieved and formulating data transfer operations for each chunk. Buffering these chunk ids and data transfer operations.
• Formulating SQL queries to the back-end RDBMS, as explained in the next section.
• If the formulated SQL query is prediction-based (e.g. generated with the SPD strategy, as described below), switching between the phases of (I) simulation, i.e. translating elements/fragments to chunk ids, and buffering, (II) performing the buffered operations, and (III) performing the further (unbuffered) operations, as long as the prediction-based query yields the relevant chunks. This includes taking care of false-positives and false-negatives.
As the input of this process is a stream of array proxies generated during SciSPARQL query execution, the output is the stream of corresponding in-memory arrays. Essential parts of this process are described in our previous works [4,5].

Strategies for formulating SQL queries during APR

There are a number of possible strategies to translate sets of chunk ids in the buffer to SQL queries retrieving the relevant chunks. The basic ones we are about to study are:
• NAIVE: send a separate SQL query for each chunk id. This proves to be unacceptably slow for realistic data volumes, due to interface and query processing overheads.
• IN (single): send a single SQL query with all the chunk ids in one IN list. This would work well until the SQL query size limit is reached.
• IN (buffered): an obvious workaround is to buffer the chunk ids (and the description of the associated data copying to be performed), and send a series of queries containing limited-size IN lists.
• SPD (sequence pattern detection): sending a query whose WHERE clause expresses a cyclic pattern over the chunk ids, for example a condition of the form mod(chunkid - 2, 4) = 0. Such a pattern is described by its origin (2 in the example above), its divisor (4 in the example above), storing the total periodicity of repetitions, and the modulus list (consisting of a single 0 in the example above), containing the repeated offsets. The size or complexity of a pattern is the length of its modulus list. Section 5 describes our algorithm for detecting such patterns.
In most cases the SPD strategy will allow us to send a single query retrieving all desired chunks. If the pattern was too complex to be inferred from the buffer (e.g. there was no cyclic pattern at all), some extra chunks might also be retrieved. Still, there are two problems with a straightforward application of SPD: (1) in cases when there actually is a cyclic pattern it is unnecessary to identify all the relevant chunk ids first, as a small sample list of chunk ids is normally enough; and (2) in case of an acyclic (random) access, like query QT6 defined in Sect.
6, the detected pattern might be as long as the list of chunk ids, thus making it a similar problem as for IN (single). Hence two versions of SPD:
• SPD (buffered): solving the two above problems by computing a small sample sequence of the needed chunk ids, and then formulating and sending an SQL query with the detected pattern. If the pattern covers all the chunks to be retrieved, the single SQL query does all the work. Otherwise (on the first false-negative, or when the false-positives limit is reached), the SQL query is stopped and the buffering process is restarted. In the worst case (when there is no cyclic pattern) it will work similarly to IN (buffered); otherwise, fewer queries will be needed to return the same set of chunks.
• SPD-IN (buffered): the difference between IN and SPD-generated SQL queries is that in IN, the chunkid values are explicitly bound to a list, which allows most RDBMSs to utilize the (arrayid, chunkid) composite index directly. As we have discovered in our experiments, neither MS SQL Server nor MySQL utilizes an index when processing a query with a mod condition. However, by comparing the pattern size (i.e. the length of the modulus list) to the number of distinct chunk ids in the buffer, we can easily identify whether a realistic pattern was really discovered, or whether we should generate an IN query instead. We currently use the following rules to switch between the IN and SPD buffer-to-SQL query translations: (A) if the pattern size is not less than half the number of distinct chunk ids, then the cycle is not completely repeated, and is probably not detected at all; (B) if the sample size is less than the buffer limit, then we have already buffered the last chunk ids for the query, so there is no advantage in using SPD either.

Sequence pattern detector (SPD) algorithm

Once the buffer is filled with chunk ids, an SQL query needs to be generated based on the buffer contents. An IN query is simple to generate, and the list of chunk ids does not even need to be sorted (the RDBMS performs this sorting if using a clustered index). In order to generate an SPD query, we first extract and sort the list of distinct chunk ids from the buffer. The following algorithm operates on an increasing sequence of numbers, in our case the sorted chunk ids. Since we are detecting a cyclic pattern, we are not interested in the absolute values of the ids in the sequence; we only store the first id as the point of origin, and the input values to the algorithm are the positive deltas between the subsequent chunk ids. Each input is processed as a separate step, as shown in Fig. 2. The state of the algorithm is stored in the history and pattern lists (both initialized empty), and the next pointer into the pattern list (initialized to an invalid pointer which will fail any comparison operation). The general idea is that each input either conforms to the existing pattern or not. In the latter case the second guess for the pattern is the history of all inputs. The input either conforms to that new pattern, or the new pattern (which is now equal to history) is extended with the new input. In either case, the input is appended to history, and count is incremented. The resulting pattern will have the form:

(x - x0) mod d ∈ {0, m1, …, mn-1},

where x is the chunk id value to retrieve, x0 is the first chunk id value generated (i.e. the 'reference point'), d is the divisor, and m1, …, mn-1 is the modulus list. The generated pattern is the sequence of offsets P = <p1, …, pn>.
We will compute the divisor as the total offset in the pattern, and each element in the modulus list as a partial sum of offsets:

d = p1 + p2 + … + pn,    mk = p1 + … + pk for k = 1, …, n - 1.

In the next section we compare this strategy of formulating an SQL query with the more straightforward approach of sending IN lists that was presented in Sect. 4.

Comparing the storage and retrieval strategies

For the evaluation of the different storage approaches and query processing strategies we use synthetic data and query templates for the different access patterns, where parameters control the selectivity. The synthetic arrays are populated with random values, as data access performance is independent of these. For simplicity and ease of validation, we use two-dimensional square arrays throughout our experiments. More complex access patterns may arise when answering similar queries against arrays of higher dimensionality. Still, as shown below, the two-dimensional case already provides a wide spectrum of access patterns, sufficient to evaluate and compare our array storage alternatives and query processing strategies. We use parameterized SciSPARQL queries, which are listed in Table 2, for our experiments. The queries involve typical access patterns, such as accessing elements from one or several rows, one or several columns, in diagonal bands, randomly, or in random clusters. The efficiency of query processing can thus be evaluated as a function of parameters from four different categories: data properties, data storage options, query properties, and query processing options, as summarized in Table 1. A plus sign indicates that multiple choices were compared during an experiment, and a dot sign corresponds to a fixed choice. The structure of the data remains the same throughout the experiments. Namely, it is the dataset shown in Fig. 1, containing a single 100,000 × 100,000 array of integer (4-byte) elements, with a total size of ~40 GB. The logical nesting order is also fixed as row-major; changing it would effectively swap the row query QT1 and the column query QT2 while having no impact on the other query types from Table 2. The rest of the axes are explored during our experiments, as Table 1 indicates. Experiment 1 compares the performance of different query processing strategies (including different buffer sizes), as introduced in Sect. 4, for different kinds of queries. For each kind of query, cases of different selectivity are compared under either data partitioning approach. Experiment 2 explores the influence of chunk size on the query performance. There is obviously a trade-off between retrieving too much irrelevant data (when the chunks are big) and forcing the back-end to perform too many lookups in a chunk table (when the chunks are small). For both experiments, the selectivity is shown both as the number of array elements accessed and the number of relevant chunks retrieved. Our expectation that the latter quantity has a higher impact on overall query response time is confirmed. The experiments were run with our SciSPARQL prototype and the back-end MS SQL Server 2008 R2, deployed on the same HP Compaq 8100 workstation with an Intel processor.

Query generator

Similarly to the examples above, in each query template we identify an array-valued triple directly by its subject and property, thus including a single SPARQL triple pattern of the form <subject> <property> ?A in each query.
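Before turning to the query templates, the buffer-to-SQL translation of Sects. 4 and 5 can be summarized in the sketch below. It is an illustration of the idea rather than the actual SSDM implementation; only the ArrayChunks(arrayid, chunkid, chunk) table name is taken from the text.

    def detect_pattern(chunk_ids):
        # Detect a cyclic pattern of deltas in a sorted list of distinct chunk ids.
        ids = sorted(set(chunk_ids))
        origin, history, pattern, nxt = ids[0], [], [], -1   # nxt = -1 is the invalid pointer
        for delta in (b - a for a, b in zip(ids, ids[1:])):
            if 0 <= nxt < len(pattern) and pattern[nxt] == delta:
                nxt = (nxt + 1) % len(pattern)               # input conforms to the current pattern
            else:
                pattern = list(history)                      # second guess: the whole history
                if pattern and pattern[0] == delta:
                    nxt = 1 % len(pattern)                   # conforms to the new pattern
                else:
                    pattern.append(delta)                    # otherwise extend the new pattern
                    nxt = 0
            history.append(delta)
        return origin, pattern                               # pattern: repeating list of offsets

    def buffer_to_sql(array_id, chunk_ids, max_in=4096):
        # Translate buffered chunk ids into an SPD-style or an IN-style query (SPD-IN rule).
        origin, pattern = detect_pattern(chunk_ids)
        divisor = sum(pattern)
        moduli, acc = {0}, 0
        for p in pattern[:-1]:
            acc += p
            moduli.add(acc)
        distinct = sorted(set(chunk_ids))
        if divisor == 0 or len(moduli) >= len(distinct) / 2:  # no realistic cycle: fall back to IN
            id_list = ", ".join(str(i) for i in distinct[:max_in])
            return (f"SELECT chunkid, chunk FROM ArrayChunks WHERE arrayid = {array_id} "
                    f"AND chunkid IN ({id_list})")
        mods = ", ".join(str(m) for m in sorted(moduli))
        return (f"SELECT chunkid, chunk FROM ArrayChunks WHERE arrayid = {array_id} "
                f"AND chunkid >= {origin} AND mod(chunkid - {origin}, {divisor}) IN ({mods})")

    print(buffer_to_sql(1, [2, 6, 10, 14, 18]))
    # ... AND chunkid >= 2 AND mod(chunkid - 2, 4) IN (0)

The mod-based form corresponds to the SPD query of Sect. 4, and the IN fallback mimics rule (A) of the SPD-IN strategy.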
The templates listed in Table 2 differ only in the array access involved, with conditions on variables for array subscripts. For the random access patterns, the main parameters are the random array subscripts. Nodes of type :AccessIdx with :i and ?j properties are added into the RDF dataset. Both data and query generators are part of the SciPARQL project, and are available at the project homepage [9]. Experiment 1: Comparing the retrieval strategies We compare the different query processing strategies and the impact of buffer sizes for each query presented in Table 2, with different parameter cases resulting in the varying selectivity (and, in case of QT3, logical locality). Each query and parameter case is run against two stored instances of the dataset, differing in array partitioning method: • Linear chunks The array is stored in row-major order, in chunks of 40,000 bytes (10 chunks per row, 10,000 elements per chunk, 1,000,000 chunks total) using linear partitioning. • Square tiles The array is stored in 100 × 100 tiles, occupying 40,000 bytes each (10,000 elements per tile, 1,000,000 tiles total-same as above) using multidimensional partitioning. We pick the strategies among the buffered variants of SPD, IN, SPD-IN, as described in Sect. 4. The buffer size is also varied for the IN strategy, with values picked among 16, 256, and 4096 distinct chunk ids. The SPD strategy is not affected by the buffer size in our cases-it either discovers the cyclic pattern with the buffer size of 16 or does not. We will refer to the SQL queries generated according to either SPD (buffered) or SPD-IN strategy described in Sect. 4 as SPD queries, and similarly, to the SQL queries generated according to IN (buffered) or SPD-IN strategy as IN queries. The query parameters are chosen manually, to ensure different selectivity (for all query patterns) and absence of data overlap between the parameter cases (for QT1-QT3). The latter is important to minimize the impact of the back-end DBMSside caching of SQL query results. A set of chosen parameter values for a SciSPARQL query will be referred to as a parameter case. Each query for each parameter case is repeated 5 times, and the average time among the last 4 repetitions is used for comparison. Results and immediate conclusions The performance measurements are summarized in Table 3 below. For each query and parameter case, we indicate the amount of array elements accessed, and averaged query response times for different strategies. Chunk/tile access numbers shown with * are slightly greater for SPD and SPD-IN strategies due to false positives, with **-also differ for IN strategies due to advantages of sorting, shown for IN(4096). Here are the immediate observations for each query pattern. QT1: The SPD strategy becomes slightly better than IN(4096) in a situation of extremely good physical locality (retrieving every chunk among the first 0.1% of the chunks in the database). Under more sparse access, IN with a big enough buffer, also sending a single SQL query, is preferable. QT2: Worst case workload for the linear partitioning, given the row-major nesting order. As expected, the performance is roughly the same as for QT1 in the case of multidimensional partitioning, with the same maximum of 1000 square tiles being retrieved (this time, making up for one column of tiles). The long and sparse range of rows, given by QT2 access pattern, thus incurs slower-than-index SPD performance (e.g. slower than the index lookups used by IN strategies). 
In contrast, a short condensed range (as for QT1) is faster-than-index, as a non-selective scan is generally faster than index lookups. QT3: In the case of multidimensional array partitioning and under certain grid densities, this query becomes a worst-case workload, retrieving a single element from every tile. The IN strategy with a large buffer is the best choice in all cases, regardless of the partitioning scheme. QT4: Similarly to query QT2, this one is the worst-case workload for linear chunk partitioning, as the chunk access pattern detected by SPD changes along the diagonal, initiating re-buffering and cyclic phase switching in our framework. The SPD strategy sends only 10 SQL queries (or a single query in case of b = 1000, where it captures the complexity of the whole pattern with a single access pattern), and SPD-IN always chooses SPD. Multidimensional partitioning helps to avoid worst cases for diagonal queries, speeding up the execution by a factor of 55.4 (for the unselective queries). QT5: SPD obviously detects wrong patterns (since there are no patterns to detect), leading to a serious slowdown. However, SPD-IN is able to discard most (but not all) of these patterns as unlikely, almost restoring the default IN performance. Moreover, SPD sends the same number of SQL queries (625) as the IN strategy does (for the buffer size of 16). Since the distribution is uniform, there is practically no difference between chunked and tiled partitioning. QT6: The access coordinates are generated in clusters. For test purposes we generate 3 clusters, with centroids uniformly distributed inside the matrix space. The probability of a sample being assigned to a cluster is uniform. Samples are normally distributed around the centroids with variance 0.01*N-0.03*N (randomly picked for each cluster once). We deliberately use such highly dispersed clusters, as the effects of logical locality already become visible at a certain selectivity threshold. Samples produced outside the NxN rectangle are discarded, thus effectively decreasing the weight of clusters with centroids close to the border. For QT6, the effect of logical locality starts to play a role already when selecting 0.001% of the array elements. At the selectivity of 0.01% the number of chunks to access is just 78% of the number of elements in the case of linear chunks, and 69% in the case of square tiles. We see that the square tiles better preserve the logical query locality, especially for the unselective queries. We expect this effect to be even greater for more compact clusters w.r.t. the tile size, and Experiment 2 below (where we vary the chunk size) supports this idea.

Comparing linear chunks versus square tiles

In this experiment we have gathered empirical support for a common intuition [13,20,21,24,25] that for every data partitioning scheme there is a possible worst-case as well as a best-case workload. These can be summarized by the following table, listing QT1-QT6 as representative access patterns (Table 4). The multidimensional partitioning has its only worst case (when a separate chunk needs to be retrieved for each element) on sparse enough regular grids. Also, as shown by QT6, the multidimensional partitioning is still more advantageous for random access patterns, with even a small degree of locality. Overall, it can be regarded as more robust, though having fewer best-case matches.
Compact enough clusters that can be spanned by a small number of tiles would obviously be a near-best-case access pattern. Comparing SPD versus IN strategies The SPD approach in most cases allows packing the sequence of all relevant chunk ids into a single SQL query, and thus skipping all the subsequent buffering. However, we have discovered that the SPD queries generally do not perform so well in the back-end DBMS as queries with IN lists. The last two cases show very clearly that in the case where there is no pattern, so that we have to send the same amount of SPD and IN queries (for the same buffer size), the difference in query response time is greater than order of magnitude. An obvious explanation to this is index utilization. A query with an IN list involves index lookups for each chunk id in the list, while a query with mod condition, as generated with SPD strategy, is processed straightforwardly as a scan through the whole ArrayChunk table. We believe it could be highly advantageous to implement a simple rewrite on the mod() function. A condition like 'X mod Y Z' with Z and Y known, and X being an attribute in a table (and thus having a finite set of possible bindings), could be easily rewritten to generate a sequence of possible X values on the fly (thus making mod() effectively a multidirectional function [26]). This, however, would require a facility to generate non-materialized sequences in the execution plan. In our prototype generators are used for all bag-valued functions. We have run additional experiments with other RDBMSs, including PostgreSQL, MySQL, Mimer [27], and found that even though some of these support table-valued UDF, only the newer versions (tested 9.4.4) of PostgreSQL are capable of avoiding materialization of complete sequences before use. We see this as an important improvement in the present-day RDBMS architecture. Experiment 2: Varying the chunk size Here we evaluate the trade-off between the need to retrieve many small chunks in one extreme case, and few big chunks (with mostly irrelevant data) in the other extreme case. We chose QT6 as a query with certain degree of spatial locality especially with tiled array partitioning. We also study how well our back-end DBMS handles the requests for numerous small or big binary values, thus using the IN strategy with buffer size set to 4096 distinct chunks. In each case we retrieve 10,000 array elements arranged into three clusters, with variance chosen in range 0.01*N… 0.03*N. Table 5 shows the results for both partitioning cases: even though big chunk size (around 4 megabytes) results in a much smaller amount of chunks retrieved (only 1 SQL query is sent), the overall query time rises super-linearly to 612 s. Besides that, smaller chunks result in slightly better performance in this case, since the amount of 'small' chunks retrieved stays approximately the same for the same sparse random access. Using the square tiles helps to leverage the access locality even better. However, big tiles do not seem to pay off at this level of sparsity: retrieving 206 4-megabyte tiles results in a factor of 81.4 larger binary data retrieval than 9886 1-kilobyte tiles, and contributes to a factor of 9.26 longer query time. This experiment shows that for the given access selectivity (10 −6 of the total number of array elements selected randomly in clusters), small chunks perform better than big chunks, and the choice between linear chunks or square tiles is not important for small chunk/tile sizes. 
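Before moving on to the analytic model, the sketch below illustrates the two query shapes compared above, generated from a buffered set of chunk ids. The ArrayChunk table name is taken from the text; the column names, the parameter placeholder, and the exact SQL emitted by the prototype are assumptions made for illustration only.

```python
def in_list_query(chunk_ids):
    """IN-list strategy: one lookup per id, so the back-end can use its index on chunk_id."""
    ids = ", ".join(str(c) for c in sorted(chunk_ids))
    return f"SELECT chunk_id, data FROM ArrayChunk WHERE array_id = ? AND chunk_id IN ({ids})"

def spd_query(first, step, last):
    """SPD strategy: a detected cyclic pattern (first, first+step, ..., last) is expressed
    with a mod() condition; compact, but typically forces a scan of ArrayChunk."""
    return ("SELECT chunk_id, data FROM ArrayChunk "
            f"WHERE array_id = ? AND chunk_id BETWEEN {first} AND {last} "
            f"AND mod(chunk_id - {first}, {step}) = 0")

# Example: a diagonal-like pattern touching every 17th chunk.
ids = list(range(3, 3 + 17 * 100, 17))
print(in_list_query(ids[:5]))          # index-friendly, but the IN list can get long
print(spd_query(ids[0], 17, ids[-1]))  # one compact query, but scan-prone in current RDBMSs
```

The mod() rewrite proposed above would, in effect, turn the second form back into the first by generating the matching id sequence on the fly instead of scanning the whole table.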
However, there is apparently a significant overhead in retrieving Analytically, we would model the query response time as a function T (s) of chunk size s: T (s)= P(s)N(s) where P(s) is the cost of transferring one chunk (given a fixed total number of SQL calls), and N(s) is the amount of relevant chunks to be retrieved. We expect P(s) to be linear after some 'efficient chunk size' threshold, while N(s) should experience a steep fall, corresponding to the logical locality of the query, which is saturated at some point. While the quantitative properties of P(s) depend largely on the underlying DBMS, the middleware, and the operating system used (along with hardware configurations), N(s) is statistical, and can be easily computed by simulation, as presented below. As we can see, the linear chunk case clearly exhibits a top 'plateau' for most of our cases, and thus confirms our expectations stated above. This feature is not visible for the square tiles case (Fig. 4), as the square tiles utilize the query locality much better. In order to see the plateau, we have to re-run the simulation with a greater sparsity (so that there is a greater probability of having single selected element per tile retrieved). Figure 5 shows the result of such simulation, with QT6 retrieving this time only 1000 random elements. Amount of distinct chunks as a function of chunk size Another interesting feature on Figs. 3 and 5 is a 'middle plateau' for the (not very) dispersed access patterns. The beginning of such plateau should be considered as one of the sweet spots when choosing the partitioning granularity, where chunk/tile size is adequate to the distribution of access densities. Of course, assuming the statistical properties of the workload are known before the array data is partitioned. Similarly, the earlier observations (Table 5) suggest that there is always a threshold in access density after which the bigger chunks become more favorable. We estimate it pays off to transfer 8 times more gross data from a back-end, if it results in retrieving 8 times lesser amount of chunks. Summary We have presented a framework for answering array queries, which identifies and makes use of the array access patterns found in those queries. We have implemented this approach in our software prototype supporting array query language (SciSPARQL) and storing numeric arrays externally. Buffering array access operations and formulating aggregated queries to the back-end has proven to be essential for performance. We have compared two pure and one hybrid strategy for generating SQL queries to the back-end based on the buffered set of chunk ids to be retrieved. One is putting a long IN list into the query, and the other is creating an expression for a cyclic chunk access pattern discovered. It turned out that even though the second approach allows accessing an entire array with a single SQL query, and skipping further buffering in most cases; it only pays off for very unselective queries, retrieving a large percentage of array's chunks. Apparently, current RDBMS optimization algorithms do not rewrite the kind of conditional expressions we were using, in order to utilize existing indexes. Hence, our general advice is to use long IN lists for the best performance of a contemporary RDBMS as a back-end. We have also investigated two distinct partitioning schemes-linear and multidimensional-used to store large numeric arrays as binary objects in a relational database back-end. 
Our mini-benchmark consists of six distinct parameterized query patterns, and it becomes clear that for each partitioning scheme one can easily define best-case and worst-case queries. For example, a diagonal access pattern works much better with square tiles than with any linear chunking, while the linear chunks in an array stored row-by-row are perfect for single-row queries and worst for single-column queries. As for the chunk size, we have empirically found a proportion when the overhead of transferring more gross data balances out the overhead of retrieving more chunks. The conclusion is that choosing the right partitioning scheme and chunk size is crucial for array query response time, and the choices being made should be workloadaware whenever possible. Though it might not be possible to know the expected workload for long-term storage of scientific data, such knowledge can certainly be inferred for materializations of intermediate results in cascading array computation tasks. As one direction of the future work, a query optimizer that makes choices on materializing intermediate results (e.g. common array subexpressions) should be enabled to choose the storage options based on the downstream operations, guided by the results of this study. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
v3-fos-license
2024-02-08T16:02:48.532Z
2024-02-06T00:00:00.000
267536285
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/journals/neurology/articles/10.3389/fneur.2024.1325548/pdf", "pdf_hash": "2127d343123db7f00912c1fc67c2135b46e95a7a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:297", "s2fieldsofstudy": [ "Medicine" ], "sha1": "94e9200f15c8d8c6aaff1fedf75393c15e6f4cc2", "year": 2024 }
pes2o/s2orc
Motor imagery ability in children and adolescents with cerebral palsy: a systematic review and evidence map Background Cerebral palsy (CP) refers to a group of permanent movement and posture disorders. Motor imagery (MI) therapy is known to provide potential benefits, but data on MI ability in children and adolescents with CP is lacking. Objective A systematic review was performed to explore MI abilities in children and adolescents with CP compared to typically developed (TD) subjects. Methods We searched on PubMed, Web of Science (WOS), EBSCO, Google Scholar, and PEDro including observational studies. Methodological quality was assessed with the modified Newcastle–Ottawa Scale and evidence map was created to synthesize the evidence qualitatively and quantitatively. Results Seven cross-sectional studies were selected, which included 174 patients with CP and 321 TD subjects. Three studies explored explicit MI, two MI–execution synchrony, and four implicit MI domains. Methodological quality ranged from 6 to 8 stars. Moderate evidence supported the absence of differences in vividness between the groups. As there was only limited evidence, establishing a clear direction for the results was not possible, especially for the capacity to generate MI, mental chronometry features, and MI–execution synchrony domains. Moderate evidence supported a lower efficiency in cases for hand recognition, derived from a lower accuracy rate, while reaction time remained similar between the two groups. Moderate evidence indicated that patients with CP and TD controls showed similar features on whole-body recognition. Conclusion Moderate evidence suggests that patients with CP present a reduced ability in hand recognition, which is not observed for whole-body recognition compared to healthy controls. Severe limitations concerning sample size calculations and validity of assessment tools clearly limits establishing a direction of results, especially for explicit MI and MI-Execution synchrony domains. Further research is needed to address these limitations to enhance our comprehension of MI abilities in children, which is crucial for prescribing suitable MI-based therapies in this child population. Introduction Cerebral palsy (CP) is a group of permanent developmental disorders of movement and posture, causing a limitation of activity (1). There are several definitions of CP proposed in the literature, and they are all based on the presence of permanent motor and posture disorders, which are usually accompanied by other cognitive, sensorial, and behavioural disorders, or even with epilepsy (2).These impairments are attributed to non-progressive disorders that occur in the developing fetal or infant brain (2).However, the aetiology of CP, which is critical for its diagnosis, shows that prenatal influences appear to play a more significant role in its manifestation, while perinatal factors contribute to a lesser extent (3). Cerebral palsy has a global prevalence of 2.11 per 1,000 live births (4), exhibiting an increasing trend from 1988 to 2019 (5).The economic costs of CP can reach billions of dollars per patient over a lifetime (6). 
Cerebral palsy causes significant functional limitations, as 61.8% of the patients are found to exhibit difficulties conforming to those between level II and IV of the Gross Motor Function Classification System (GMFCS) and 43.7% have no independent gait (7).Children and adolescents with CP exhibit increased medio-lateral deviations in gait compared with typically developed (TD) individuals, thus presenting with an increased gait performance difficulty (8).Patients with CP not only experience deficits in the execution of movement, but also in movement planning (9).Optimal movement planning facilitates the achievement of efficient movement execution and appears to improve with age (10). The mental representation of movement is a complex process that involves the preparation, planning, and organization of movement (11).This process occurs unconsciously but can be intentionally elicited through various techniques, such as motor imagery (MI), action observation (AO), or visual feedback.MI is a cerebral process of constructing a motor action without the actual execution (12).This process is developed through the involvement of perceptual-sensory mechanisms that facilitate the formulation of motor actions, which involve the working memory (13).MI processes may occur implicitly or explicitly.Although implicit and explicit MI are conceptually, theoretically, and practically distinct, they are found to recruit similar sensorimotor neural networks associated with movement (14). Implicit MI is concerned with motor representations that occur in prospective action judgements and in perceptual decisions regarding behaviour (15).Implicit MI tasks require participants to make judgements by making use of visual stimuli that automatically (and implicitly) activate the mental simulation of actions ( 16).An example of implicit MI is when an individual performs a mental transformation of their own hand to solve a task in an attempt to find congruence with the presented hand (17).Laterality judgement tasks are the most commonly used for assessing implicit MI ability (18). Implicit MI is typically evaluated in terms of accuracy and reaction time with images of different body parts (19).Hand laterality judgement (HLJ) task, a widely employed task, requires the participants to identify whether the presented hand image is the left or right hand.Although controversial evidence exists suggesting that certain individuals can complete these body recognition tasks without employing MI strategies (20,21), it would be useful to explore this ability in children and adolescents with CP, as body recognition tasks can also serve as a therapeutic tool for rehabilitation (22). Explicit MI involves consciously mentally performing an action (23).MI outcome measures have been covered extensively in a recent systematic review (19).These measures include the capacity to generate MI, its vividness, and mental chronometry (time required to imagine) (24)(25)(26).Explicit MI, usually evaluated in terms of mental chronometry, can also be contrasted with execution performance, to identify the synchrony between both the abilities (MI-execution synchrony) (19). 
Considering that MI abilities develop between the ages of 5 and 12 years (27), it would be of interest to analyse this capacity in children and adolescents with CP to determine who would benefit most from MI interventions.Existing evidence supports employing MI and AO therapies to enhance functional abilities in adults with neurological and musculoskeletal disorders (28).Unfortunately, limited evidence exists on the efficacy of MI therapy for CP patients, with only one reported randomized controlled trial (29).Notably, MI therapy did not significantly enhance functional performance compared to conventional physiotherapy in these patients.This stresses the need for further research to clarify the effectiveness of MI as a therapeutic tool.Evaluating MI abilities in children and adolescents with CP may help clinicians to determine the potential benefits of MI-based therapy for improving functional abilities in patients with CP.Indirect evidence from healthy adults suggests that a higher capacity to generate MI correlates with marked functional enhancements after MI interventions (30).These findings should also be contrasted in CP patients. After conducting a preliminary search across several databases, we found no systematic reviews summarizing MI abilities in children and adolescents with CP.To address this knowledge gap, we conducted a systematic review with the aim of evaluating explicit MI, MI-execution synchrony, and implicit MI in children and adolescents with CP in comparison to TD subjects. Methods This systematic review was registered in the International Prospective Register of Systematic Reviews (PROSPERO) under the registration number CRD42022345725.It was conducted following the Preferred Reporting Items for Systematic Reviews and Metaanalysis guidelines recommended by Moher et al. (31). Selection criteria The selection criteria for study inclusion were based on population of interest, control of interest, outcome measures, and study design. Cases The case subjects selected for the studies were children (6-12 years) and/or adolescents (13-18 years) who had been diagnosed with CP.As mentioned earlier, the aetiology of CP appears to be more strongly influenced by prenatal factors than by perinatal features.While stroke, traumatic brain injuries, and other events are considered perinatal factors in CP development, their exclusive presence is not a definitive indicator of the presence of CP.Confirming a CP diagnosis requires the identification of additional signs and symptoms (2).Therefore, this review will focus solely on cases with a confirmed diagnosis of CP. Controls Cases should be compared with a control group of TD children (6-12 years) and/or adolescents (13-18 years). 
Outcome measures The outcome measures of interest included: (1) capacity to generate MI; (2) vividness during MI; and (3) mental chronometry.These outcome measures are categorized as explicit MI capacities and were extracted regardless of MI perspective (first/third person) or modality (kinaesthetic, visual internal, or visual external).MIexecution synchrony outcome measures were also included as an outcome, including the performance over-or underestimation coefficients (ratio or difference between MI and execution time), and the variance analysis of MI and execution time data.The following outcome measures categorized as implicit MI capacities were also included: (4) hand recognition through HLJ task; (5) feet recognition through a foot laterality judgement task; and (6) whole-body recognition tasks.Eligible data could be presented either in terms of accuracy, reaction time, or efficiency indexes. Study design Observational studies were eligible for inclusion. Data sources and searches Systematic searches were performed in PubMed, Web of Science (WOS), EBSCO, Google Scholar, and PEDro databases on 7 July 2022.Additional records were identified through manual searches until 28 September 2022.We conducted an updated systematic search in PubMed on 13 December 2023 and identified additional records through manual searches until the same date. Non-scientific articles, study protocols, and articles without full text were excluded.No restrictions were applied on language.The screening process was performed manually, analyzing the title, abstract, and full text. Search engines, databases, equations, and registries retrieved are presented in the Supplementary material. Data extraction The following information was extracted from the included studies: author(s), publication date, study design, groups examined, sample size, bilateral cerebral palsy (BCP) or unilateral cerebral palsy (UCP), children and/or adolescents, age, and other demographic features.Only outcome measures of interest were extracted and categorized into MI assessment domain, task, and outcome measure.The results were narratively summarized and the performance between cases and controls were noted. Neuroimaging data were not included in the extraction process.The extracted information was presented in both narrative and tabular formats. Methodological quality assessment The methodological quality of cross-sectional studies was evaluated using the Newcastle-Ottawa Scale (NOS), adapted to cross-sectional studies (32).This scale presents a moderate inter-rater reliability (33).The scale consists of seven items divided into three dimensions (selection, comparability, and outcome), with scores ranging from 0 to 10 stars. Two independent reviewers assessed the methodological quality of all the included studies using the same methodology.The level of agreement was analysed using Cohen's Kappa coefficient.Agreement scores were categorized as almost perfect if κ coefficients were in the range 0.81-1.00;substantial if 0.61-0.80;moderate if 0.41-0.6;fair if 0.21-0.4;slight if 0.00-0.20;and < 0.00 as poor (34).Disagreements between reviewers were resolved by consensus and by including a third reviewer. A categorization of methodological quality was established for the included studies following the procedure employed by Elizagaray-García et al. as follows (35): 1. Good quality: 3 or 4 stars in selection, 1 or 2 stars in comparability, and 2 or 3 stars in outcomes.2. 
Moderate quality: 2 stars in selection, 1 or 2 stars in comparability, and 2 or 3 stars in outcomes.3. Poor quality: 0 or 1 star in selection, 0 stars in comparability, and 0 or 1 star in outcomes. Synthesis of evidence The synthesis of evidence was based on an adaptation method proposed by La Touche et al. (36) from the system developed by Van Tulder et al. (37).The levels of evidence were categorized as follows: "No evidence": Absence of observational studies, including crosssectional or longitudinal studies. "Contradictory evidence": Inconsistent findings among multiple studies (cross-sectional and longitudinal observational studies). "Limited evidence": One low-quality case-control study and/or cohort study and/or at least two cross-sectional studies of low quality.For the present study, an additional modification was madeincluding the presence of one or two low-quality and/or one or two moderate-quality cross-sectional studies. "Moderate evidence": Consistent findings from multiple low-quality case-control studies and/or cohort studies and/or crosssectional studies or one high-quality case-control study and/or cohort study.An additional modification was applied in this category for the present study: including the presence of one or two high-quality crosssectional studies. "Strong evidence": Consistent findings among multiple highquality case-control studies and/or cohort studies and/or crosssectional studies (at least three of these studies). Qualitative evidence mapping A qualitative evidence map was developed to visually summarize the obtained results.The following parameters were employed to develop the evidence map: X-axis: This axis was divided into three categories based on the methodological quality assessment method employed by Elizagaray-García et al. (35).Studies were categorized on this axis according to their respective methodological quality ratings.Bubble color: Each bubble was assigned a colour indicating the results of the comparison between patients with CP and TD subjects.Three colours were employed to represent different types of information reported in the studies: (1) patients with CP presented better performance in blue; (2) no performance difference between groups in yellow; and (3) patients with CP presented poorer performance in red. Bubble external pattern: Each bubble included an external pattern indicating the population of CP included in the study.Three categories were used: (1) unilateral cerebral palsy (UCP) with a vertical line; (2) bilateral cerebral palsy (BCP) with a horizontal line; and (3) UCP and BCP with a cross pattern. Quantitative evidence mapping Available quantitative data from the included studies were extracted and presented as a forest plot in order to graphically represent the direction of the different MI abilities between CP and TD subjects.This graphical representation would aid in observing, the direction tendency of the MI abilities, along with qualitative synthesis and evidence map. Available data was extracted from text, tables, and graphics (using WebPlotDigitizer online software1 ).Transformations were performed if needed for transforming the data into mean and SD.Standardized mean differences, with the Hedges' g (38), were calculated and displayed in a forest plot. All these procedures were conducted in R Studio software version 2023.06.0 + 421, employing the R version 4.3.1 (39).Calculations for Hedges' g was performed with the package "metafor" 3.8.2version (40). 
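The original analysis was run in R with the metafor package; as a language-neutral illustration of the same two steps, the sketch below converts a standard error to a standard deviation (the Cochrane Handbook conversion cited later in the text) and computes a bias-corrected standardized mean difference (Hedges' g). The group means, errors, and sample sizes are placeholders, not data from the included studies.

```python
import math

def sd_from_se(se, n):
    """Cochrane Handbook conversion: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))  # pooled SD
    d = (m1 - m2) / sp                     # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)        # small-sample correction factor
    return j * d

# Placeholder numbers only: e.g. accuracy (%) in a hand laterality task, CP vs TD.
g = hedges_g(m1=62.0, sd1=sd_from_se(3.1, 20), n1=20,
             m2=78.0, sd2=sd_from_se(2.4, 25), n2=25)
print(round(g, 2))
```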
Selection process The process of identification, screening, and inclusion of studies is shown in Figure 1. Outcome measures assessed 3.2.2.1 Explicit MI Explicit MI was assessed in terms of capacity to generate MI from kinaesthetic, visual internal, and visual external modalities (29) using the Movement Imagery Questionnaire for Children.Vividness was also evaluated from kinaesthetic and visual external modalities, employing the Vividness of Movement Imagery Questionnaire Revised 2nd version (45).Mental chronometry was also analysed from unilateral UL tasks (42). MI-execution synchrony MI-execution synchrony was explored in terms of performance overestimation on the basis of Delta coefficient for LL tasks (29).Delta values >0 indicate that participants employ less time to imagine than executing, suggesting that they overestimate their real performance.Values <0 suggest that participants underestimate their performance, as they would require greater times to imagine than for executing the task. Delta time time MI time time MI time Execution Execution Additionally, one study explored the variance distribution of MI and execution chronometry across CP and TD subjects, for unilateral UL tasks (42). Efficiency was measured with the inverse efficiency (IE) index: Methodological quality assessment Among the seven cross-sectional studies, four presented good methodological quality (41,(44)(45)(46), two moderate methodological quality (29, 43), and one poor methodological quality (42).They accounted for an overall methodological quality of 7.29 ± 0.76 (6-8 stars).An almost perfect level of inter-rater agreement was observed on the NOS scale adapted for cross-sectional studies (κ = 0.832).The results of the methodological quality analysis are shown in Table 2. Explicit MI -capacity to generate MI "Limited evidence" from one moderate-quality study (29) shows that patients with CP exhibited poorer capacity to generate MI from kinaesthetic, visual internal, and visual external modalities compared to controls. Explicit MI -vividness "Moderate evidence" from one good-quality study (45), indicates that patients with CP and TD present similar MI vividness when kinaesthetic and visual external modalities were evaluated. Explicit MI -mental chronometry "Limited evidence" from one poor-quality study (42) suggests that patients with CP and TD subjects exhibit similar mental chronometry features during unilateral UL tasks, with either the more or less affected UL. MI-execution synchrony -performance overestimation "Limited evidence" from one moderate-quality study (29) shows that patients with CP greatly overestimate their performance compared to TD subjects in LL tasks (timed-up and go test, and 10 meter walk test). MI-execution synchrony -variance distribution of MI and execution chronometry "Limited evidence" from one poor-quality study suggests that patients with CP and TD subjects took similar times for MI and execution of unilateral UL tasks, with either the more or less affected UL (42). Implicit MI -hand recognition: accuracy "Moderate evidence" from two good-quality studies (41,46) demonstrates that patients with CP presented poorer accuracy than TD controls. "Limited evidence" from one moderate-quality study (43) indicates that patients with CP and TD controls exhibited similar accuracy. Implicit MI -hand recognition -reaction time "Moderate evidence" from one good-quality study (41) supports that patients with CP and TD controls had similar reaction times. 
"Limited evidence" from one moderate-quality study (43) shows that patients with CP and TD controls exhibited similar reaction times. Implicit MI -hand recognition -efficiency "Moderate evidence" from one good-quality (44) study found a poorer efficiency in patients with CP compared to TD subjects. Implicit MI -whole-body recognitionefficiency "Moderate evidence" from one good-quality (44) identified similar efficiency values between patients with CP and TD subjects. Study and design Population MI assessment domain Task Outcome measure The qualitative evidence map synthesized the available information regarding methodological quality, outcome measures and measurement tools, sample size, CP population, and comparison results.See Figure 2. Quantitative evidence mapping Authors were only able to extract quantitative data from four (41,(43)(44)(45) out of the seven studies that explored MI abilities between patients with CP and TD subjects.Data from three studies (41,43,44) were transformed from a standard error of the mean (SE) into SD, employing the following formula: SD n SE | u ª ¬ º ¼ proposed in the Cochrane Handbook for Systematic Reviews of Interventions section 6.5.2.2 (47).See Figure 3. Discussion This study aimed to gather and synthesize the evidence of MI abilities in children and adolescents with CP compared to TD subjects. The evidence obtained from seven studies, considered for review in this study, poses significant limitations for drawing clear conclusions regarding explicit MI abilities in CP patients.Three different domains of explicit MI were assessed, such as the capacity to generate MI (29), vividness (45), and mental chronometry (44).There is moderate evidence to show that patients with CP and TD controls display similar vividness during MI.This finding stands out as the most reliable, whereas evidence concerning the capacity to generate MI and mental chronometry features between CP and TD controls is obviously limited. Similarly, the current evidence concerning MI-execution synchrony is also limited (29, 42), clouding the direction of results between patients with CP and TD controls. Conversely, implicit MI yields more straightforward interpretation of the results.A clear trend is observed for hand recognition accuracy (assessed with the HLJ task), with patients with CP displaying a lower accuracy than TD subjects.This observation is supported by moderate evidence (41,46).There is also moderate evidence to show that patients with CP and TD controls exhibit similar reaction time values in the HLJ task (41).Only moderate evidence is available demonstrating a lower efficiency (the ratio between reaction time and accuracy) in the HLJ task (44).These findings align with the earlier studies, suggesting that reduced efficiency may stem from lower accuracy (41,46), although similar reaction time is maintained (41) between patients with CP and TD controls. Implicit MI ability for whole-body recognition appears to be similar between cases and controls, supported by moderate evidence (44). 
Different interpretations and hypotheses may arise from the results obtained for the implicit MI ability for hand and whole-body recognition tasks.First, somatosensory body representations may be more impaired in children and adolescents with CP than TD subjects (48).Therefore, a more focused hand recognition task could point out detectable differences between the two groups, while a whole-body approach may not do so, suggesting a bodyspecific representation difference.Additionally, the lack of experience when employing upper limbs may limit the representation of these body regions, constraining the ability of recognition.In fact, sensorimotor cortex overactivations have been identified in children with CP compared to TD subjects when performing bimanual tasks (49), suggesting an association between the constraints during a bimanual task and the sensorimotor activity in order to perform the task. The HLJ task has proven to be effective in assessing implicit MI, particularly in the context of brain damage.However, recent research indicates that the mental rotation involved in tasks like HLJ might not sufficiently conclude MI capacity in CP patients, which necessitate more explicit and targeted approaches (50, 51). Neural insights and self-reported MI measures As indicated by recent research, CP not only leads to deficits in movement execution but also to causes difficulties in motor planning (9) and MI.MI involves the internal simulation of a movement without physical execution, activating sensorimotor circuits similar to those used during actual movements.Key areas such as the supplementary motor area, dorsal and ventral premotor cortices, and the inferior parietal lobule play a critical role in this process (52-54).Research suggests MI may be a valuable tool for motor function recovery in children with CP, though the ability of these to implement MI strategies might be compromised (55). Understanding the impact of early brain damage and cerebral development on MI capacity is important beyond therapeutic implications.Studies suggest that damage to specific brain areas can disrupt children's ability to effectively perform MI, restricting both the planning and execution of movements (56, 57).Previous findings indicate that children with right-sided congenital CP exhibit greater difficulties in MI tasks compared to their counterparts with left-sided congenital CP (58).Furthermore, it has been observed that individuals with right-sided congenital CP demonstrate deficits in anticipatory movement planning (59).These two deficits should be taken into account when devising motor imagery-based interventions for children with congenital CP. To enhance our understanding of the capacity for MI, it is important to look into prior findings pertaining to cerebral behaviour inferred from neuroimaging studies for MI tasks.In this context, a previous study observed that patients with right-side early brain damage exhibited activation in the bilateral frontoparietal network, encompassing the majority of nodes associated with MI in healthy individuals.Conversely, patients with early left-side brain lesions demonstrated diminished cerebral activation during these tasks.Furthermore, there was only a minimal influence as regards the side of the imagined hand movement.This attenuated activation in patients with right UCP underscores the predominance of the left hemisphere in MI tasks (60). 
In contrast to these findings, another study analysed brain activations during explicit MI in children with UCP and TD subjects (42).The results demonstrated that some children with UCP retained brain activations in cortical and subcortical areas during kinaesthetic MI tasks similar to those observed in TD children.Notably, a comparable parieto-frontal activation was observed in the right contralesional hemisphere during the imagination of reaching and grasping actions with the non-preferred hand.Furthermore, a correlation was noted between MI scores and the inferior parietal lobule and the dorsal sector of the premotor cortex were activated in both UCP and TD children, suggesting role for parietal activation in the online control of action execution.The study also highlighted the involvement of subcortical regions such as the putamen and cerebellum in explicit MI for complex grasping actions, indicating the engagement of a crucial cortico-basal-thalamic-cortical circuit in motor planning and learning.Interestingly, some children with UCP exhibited increased activations not only in the contralesional hemisphere but also in the ipsilesional one, particularly in those with primarily subcortical damage (42).The specificity of the employed MI modality, which involves imagining the action from a first-person perspective, might account for the discrepancy of these results with previous findings (42).These findings provide a neural foundation for integrating MI tasks into rehabilitation strategies for patients with UCP. Adopting explicit MI strategies, such as mental chronometry, could provide a more accurate assessment of MI capacity and reveal specific deficiencies related to brain damage.Moreover, understanding how the activation of brain regions during MI tasks correlates with motor performance can offer valuable insights for designing targeted therapeutic interventions.As our understanding of the interaction between MI, cerebral development, and CP is deepened, improved rehabilitation strategies can be devised to optimize recovery and functionality in children affected by CP.The assessment of MI capacities in this child population could enhance our approach to prescribing MI-based therapies.For instance, gaining a comprehensive understanding of their capacity to generate MI and vividness across the three distinct MI modalities (kinaesthetic, visual internal, and visual external), or across MI perspectives (external or internal with no modality specified), would enable us to recommend the most effective approach.For example, prescribing MI through the modality or perspective in which the most patients with CP demonstrate greater capacities would be prioritized.Furthermore, comprehending their perceived difficulty and MI-execution synchrony across tasks of varying complexity could help us establish a progression order based on their performance.This involves starting with tasks perceived as less difficult, and where their MI-execution synchrony is closer, and gradually progressing to more complex tasks with a wider disparity between MI and its overt execution. Limitations All the studies included in this review had limitations with statistical procedures, mainly with sample size calculations, as no sample size calculations were performed (41)(42)(43)(44)(45)(46)61) or was not described in depth (29).This limitation poses difficulties for hypothesis testing and detecting the real amount of magnitude of difference between groups becomes difficult. 
Additionally, the studies that analysed the ability to generate MI (29) and the vividness of MI (44) have methodological limitations due to the tools employed, MIQ-C an VMIQ-2, as they have only been validated in healthy children (62) and young athletes (24,25), respectively.No validity processes have been conducted for these tools in children and adolescents with CP.Therefore, the certainty of the results may cannot be guaranteed.The findings of this systematic review highlight the need for further research on exploring the MI as a therapeutic tool for children with CP. Therefore, the limitations mentioned above pose challenges for establishing a clear direction of the effect with regard to explicit MI and MI-execution synchrony domains.Nevertheless, a consistent direction could be ascertained for implicit MI abilities, especially hand and whole-body recognition, because of the availability of moderate evidence.Patients with CP present a lower capacity for hand recognition in the HLJ task, evident from a lower accuracy rate, but they maintain a similar reaction time as TD subjects.This leads to a reduced efficiency in hand recognition.This constraint could be specific for certain impaired body regions, such as the arms, as whole-body recognition features did not differ between the two groups. Conclusion Current research on MI abilities in children and adolescents with CP is scarce.Evidence is available for explicit MI domains like the capacity to generate MI, vividness, and mental chronometry, as well as MI-execution synchrony domains, and implicit MI domains such as hand and whole-body recognition.Notably, studies have significant limitations in sample size calculations, impacting the certainty of their results.However, a clear conclusion could be derived from implicit MI results, with moderate evidence suggesting that patients with CP present a reduced ability in hand recognition (HLJ tasks), but a similar capacity for whole-body recognition compared to TD controls.Previous research has questioned the validity of HLJ tasks in evaluating implicit MI.This has led to the proposal of alternative approaches like explicit MI or MI-execution synchrony tasks for assessing MI ability.The present review observed severe limitations for stating a clear direction with regard to explicit MI and MI-Execution synchrony domains.First, the absence of validated tools for assessing the capacity to generate MI and vividness (explicit MI domains) restricts the scope of their findings.Second, mental chronometry (explicit MI domain), performance overestimation, and MI-execution chronometry distribution (MI-execution synchrony domains) offer only limited evidence, posing difficulties in establishing a clear direction for their results.Future research should include improved research methodologies, including proper sample size calculations, and employ validated and reliable measurement procedures.A better understanding concerning MI abilities in patients with CP would lead to the development of tailored MI therapeutic interventions for them based on their strengths and the challenges they encounter. This axis represented the outcome measures and measurement tools employed in the reviewed studies.Studies were positioned along this axis based on the outcome measures they assessed.Figuresize:The size of each bubble corresponded to the number of children and adolescents with CP analyzed in each study. FIGURE 1 Flow FIGURE 1Flow chart of selection process according to PRISMA. 
Table 1: Characteristics of included studies.
Table 2: Quality assessment of cross-sectional studies with the Newcastle-Ottawa Scale.
v3-fos-license
2020-10-12T13:43:02.707Z
2020-10-12T00:00:00.000
222279974
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10546-020-00570-5.pdf", "pdf_hash": "f7019067c013f905df92296170393339ddcce190", "pdf_src": "Springer", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:298", "s2fieldsofstudy": [ "Environmental Science", "Physics" ], "sha1": "f7019067c013f905df92296170393339ddcce190", "year": 2020 }
pes2o/s2orc
Large-Eddy Simulations of Stratified Atmospheric Boundary Layers: Comparison of Different Subgrid Models The development and assessment of subgrid-scale (SGS) models for large-eddy simulations of the atmospheric boundary layer is an active research area. In this study, we compare the performance of the classical Smagorinsky model, the Lagrangian-averaged scale-dependent (LASD) model, and the anisotropic minimum dissipation (AMD) model. The LASD model has been widely used in the literature for 15 years, while the AMD model was recently developed. Both the AMD and the LASD models allow three-dimensional variation of SGS coefficients and are therefore suitable to model heterogeneous flows over complex terrain or around a wind farm. We perform a one-to-one comparison of these SGS models for neutral, stable, and unstable atmospheric boundary layers. We find that the LASD and the AMD models capture the logarithmic velocity profile and the turbulence energy spectra better than the Smagorinsky model. In stable and unstable boundary-layer simulations, the AMD and LASD model results agree equally well with results from a high-resolution reference simulation. The performance analysis of the models reveals that the computational overhead of the AMD model and the LASD model compared to the Smagorinsky model is approximately 10% and 30% respectively. The LASD model has a higher computational and memory overhead because of the global filtering operations and Lagrangian tracking procedure, which can result in bottlenecks when the model is used in extensive simulations. These bottlenecks are absent in the AMD model, which makes it an attractive SGS model for large-scale simulations of turbulent boundary layers. Introduction or phenomenological arguments (Verstappen 2011). The dynamically significant part of the motion is confined to large eddies by damping the velocity gradient with an eddy viscosity. The eddy viscosity is calculated such that the energy transferred from the large eddies to the SGS is dissipated at a rate that ensures that the production of SGS eddies by the non-linear terms in the Navier-Stokes equations becomes dynamically irrelevant. To understand the performance of the different SGS models it is necessary to test them under different conditions. Therefore, we compare the performance of the standard Smagorinsky model (Smagorinsky 1967), the LASD model (Bou-Zeid et al. 2005;Porté-Agel 2006, 2008), and the AMD model (Abkar and Moin 2017) for different atmospheric conditions. We analyze the first-and second-order turbulence statistics and the surface similarity for a neutral, stable, and unstable ABL. This provides more insight into the performance of two distinct classes of scale-dependent models (i.e., the LASD and the AMD model) for different atmospheric conditions. The primary consideration in evaluating the performance of a SGS model is how accurately the model can capture the relevant flow physics. However, practical considerations can also play a role in the selection of an appropriate SGS model. The Smagorinsky model is by far the easiest to implement, but the limited accuracy of the Smagorinsky model is a significant drawback (Meneveau and Katz 2000). Scale-dependent models can capture the flow physics more accurately than the Smagorinsky model. While the LASD model (Bou-Zeid et al. 2005) has been used widely in the literature (Calaf et al. 2010;Wu and Porté-Agel 2011;Zhang et al. 
2019; Gadde and Stevens 2019), the AMD model has only been developed relatively recently (Rozema et al. 2015;Abkar et al. 2016;Abkar and Moin 2017). While the LASD model has been shown to provide accurate predictions (Stevens et al. 2014), it has some practical drawbacks. It is challenging to implement, due to the required filtering operations and Lagrangian averaging procedure that are employed. Due to the additional filtering operations, the LASD model generates a computational overhead, and the numerical implementation of the Lagrangian averaging involves numerous interpolation operations, which requires MPI communication between multiple processors. Besides, the LASD model has an additional memory overhead due to the requirement to store the time histories of different quantities. These are all essential considerations for simulations performed on modern supercomputers. The AMD model, on the other hand, has low computational complexity and is straightforward to implement. Therefore, it is particularly interesting to see how the AMD model performs compared to the LASD model to assess whether it is a good alternative for the LASD model when considering large-scale simulations of ABL. Large-Eddy Simulations In LES, turbulent motions larger than the grid scale are resolved, and the SGS motions are parametrized. In a thermally stratified ABL, the Boussinesq approximation to model buoyancy leads to the following governing equations where the tilde represents spatial filtering, represents planar averaging, u i and θ are the filtered velocity and potential temperature, respectively, p is the kinematic pressure, g is the acceleration due to gravity, β = 1/θ 0 is the buoyancy parameter with respect to the reference potential temperature θ 0 , δ ij is the Kronecker delta, f is the Coriolis parameter, G j = (U g , V g ) is the geostrophic wind velocity, and ε ijk is the alternating unit tensor; τ ij = u i u j − u i u j is the traceless part of the SGS stress tensor and q j = u j θ − u j θ is the SGS heat flux vector. Wall-resolved LES are limited to moderate Reynolds numbers due to the very high computational cost (Piomelli 2008). Consequently, simulations of high Reynolds number ABL flows rely heavily on wall and SGS modelling. As it is impossible to resolve all the flow scales, an accurate representation of the SGS properties is crucial in these simulations (Sagaut 2006). It is common practice in LES to parametrize SGS stresses and fluxes using an eddy viscosity and an eddy diffusivity. Thus, the traceless part of the SGS stress and heat flux are modelled as where S ij = 1 2 ∂ j u i + ∂ i u j represents the filtered strain rate tensor, ν T is the SGS eddy viscosity, and Pr sgs is the SGS Prandtl number. Equations 4 and 5 are generally known as the Smagorinsky model (1963). In any SGS model, ν T and Pr sgs are not known a priori. They are modelled by the mixing length approximation, which includes the strain rates calculated using the grid scale velocities, where the eddy viscosity is modelled as ν T = (C s,Δ Δ) 2 | S| with the Smagorinsky coefficient C s,Δ at the grid scale Δ, and |S| = 2 S ij S ij is the strain-rate magnitude. The eddy diffusivity is modelled as ν T /Pr sgs = (D s,Δ Δ) 2 | S|, where D s,Δ is the Smagorinsky coefficient for the SGS heat flux. We emphasise that in the LASD and the AMD model, both C s,Δ and D s,Δ are dynamically calculated. 
However, in the Smagorinsky model C s,Δ and Pr sgs are chosen constants, and it is worth mentioning that the results obtained using the Smagorinsky model are sensitive to the choice of these constants (Shi et al. 2018). Smagorinsky Model For LES of ABL, the Smagorinsky coefficient C s,Δ is determined using empirical formulations, field observations, and turbulence theory. Assuming the existence of in the turbulence spectrum inertial range spectrum, Lilly (1967) evaluated the Smagorinsky constant to be around 0.17 for homogeneous isotropic turbulence. To further account for the inhomogeneity of the flow, Moin and Kim (1982) used an ad hoc wall damping function in simulations of channel flows. This wall damping function was further modified by Mason and Thomson (1992) using phenomenological arguments to account for the scale-dependence of the SGS coefficients as 1 where κ = 0.4 is the von Kármán constant, C s0,Δ is the mixing length away from the surface, n = 2 is the damping exponent, z is the distance from the surface, and z 0 is the roughness length. In addition to C s,Δ , the value of the SGS Prandtl number Pr sgs has to be specified when thermal stratification is included. Several stability corrections have been proposed to account for the effect of thermal stability. The value of Pr sgs ranges from 0.44 for free convection, to 0.7 for neutral conditions, up to 1.0 for the critical Richardson number (Mason and Brown 1999). In our simulations we use C s0,Δ = 0.17 and Pr sgs = 0.5 when using the Smagorinsky model. The values were chosen by trial and error such that the results closely match the results of the dynamic models. We note here that Porté-Agel et al. (2000) used C s0,Δ = 0.17 in the simulation of similar pressure-driven neutral ABLs. In addition, the wall damping function proposed by Mason and Thomson (1992) is applied, with a damping exponent n = 2. Lagrangian-Averaged Scale-Dependent Model A significant drawback of the Smagorinsky model is that the model coefficients have to be specified a priori. Besides, the use of an ad hoc wall damping function requires tuning of the constants on a case-by-case basis. Dynamic models overcome this limitation by computing the model coefficients based on the local flow properties (Germano et al. 1991). In a dynamic model, the model coefficients are calculated by relating stresses at two different scales by using the Germano identity. The filtering at two different filter sizes is known as test filtering. The stresses at these two different scales are equated by using the Smagorinsky approximation. The error due to the Smagorinsky approximation is then minimized by averaging over a plane (Porté-Agel et al. 2000), by dynamic localization (Ghosal et al. 1995), or averaging over fluid path lines (Meneveau et al. 1996). Inherent to the derivation of these models is the assumption of scale invariance. However, this assumption is inappropriate when the flow is anisotropic. In the LASD model, to break the scale invariance, a second test filter is used, and the process of error minimization is carried out over fluid path lines (Bou-Zeid et al. 2005). A similar process is employed for the calculations of the SGS heat flux. We refer to Bou-Zeid et al. (2005) and Porté-Agel (2006, 2008) for a detailed derivation of the LASD model for neutral and thermally stratified conditions, respectively. 
If two test filters of size 2Δ and 4Δ are used to relate stresses at two different scales, the scale-dependence parameters for the stresses γ and the heat flux γ θ are given by where C 2 s,2Δ and C 2 s,4Δ are the calculated SGS coefficients for the filter sizes 2Δ and 4Δ, respectively. Assuming that γ and γ θ are scale-invariant over the test-filter scale, i.e., γ = C 2 s,4Δ /C 2 s,2Δ = C 2 s,2Δ /C 2 s,Δ and γ θ = D 2 s,4Δ /D 2 s,2Δ = D 2 s,2Δ /D 2 s,Δ , results in the model coefficients at grid scale Δ Technically, γ and γ θ can vary between 0 and ∞. However, when γ approaches zero the C 2 s,Δ values become vary large, which causes numerical instabilities. Following Bou-Zeid et al. (2005) and Stoll and Porté-Agel (2008) we clip the γ and γ θ values to 0.125 to ensure numerical stability. This procedure does not impact the final statistics. It is worth mentioning here that the only tuning parameter used in this model is the Lagrangian averaging time scale, for which different choices are available (Meneveau et al. 1996) Anisotropic Minimum Dissipation Model In a minimum dissipation model, the main requirement is that the energy of the sub-filter scales in a filter box Ω b does not increase. The upper bound for this energy is obtained from the Poincaré inequality, which is given by where C i is the modified Poincaré constant that controls the energy in the filter box. We refer to Abkar and Moin (2017) for a detailed description of the model. In the model, the eddy viscosity and eddy diffusivity are given by and respectively, where∂ i = √ C i δ i ∂ i (for i = 1, 2, 3) is the scaled gradient operator. The model constants are obtained based on the argument that in a filter box the energy of the SGS eddies does not increase with time. Essentially, in the filter box, the minimum dissipation required to balance the production of scales smaller than the grid scale is used to calculate the SGS coefficients (Verstappen 2011;Rozema et al. 2015;Abkar et al. 2016). The value of the modified Poincaré constant depends on the used discretization method. It has been shown that C i = 1/12 gives good results when a spectral method is used (Rozema et al. 2015;Abkar and Moin 2017). Rozema et al. (2015) find that for decaying turbulence simulations C i = 0.3 provides good results, when a second-order central finite difference method is used. For a fourth-order method C i = 0.212 works well. Abkar and Moin (2017) found that C i = 1/3 works well for LES of thermally stratified boundary layers. We note here that Abkar and Moin (2017) used a code similar to ours, i.e., pseudo-spectral in the horizontal direction and second-order central difference in the vertical direction. Following them, we use C i = 1/12 along the horizontal direction and C i = 1/3 in the vertical direction throughout this study. Numerical Method We use a pseudo-spectral method and periodic boundary conditions in the horizontal directions and a second-order central difference scheme in the vertical direction. Time integration is performed using a second-order accurate Adams-Bashforth scheme. The aliasing errors resulting from the non-linear terms are prevented by using the 3/2 anti-aliasing rule (Canuto et al. 1988). Viscous terms are neglected as we consider very high-Reynolds-number flows. This method is based on work by Albertson and Parlange (1999). 
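The eddy-viscosity and eddy-diffusivity expressions referred to above did not survive extraction. As a reference point, the sketch below implements the base AMD eddy viscosity published by Rozema et al. (2015) and Abkar et al. (2016), nu_e = max(0, -(d^_k u_i)(d^_k u_j) S_ij) / ((d^_l u_m)(d^_l u_m)), with the scaled gradient d^_i = sqrt(C_i) delta_i d_i. The stratified formulation of Abkar and Moin (2017) adds a buoyancy contribution to the numerator that is omitted here, and plain central differences stand in for the pseudo-spectral horizontal derivatives of the actual code, so this is an illustration rather than a reproduction of the solver.

```python
import numpy as np

def amd_eddy_viscosity(u, v, w, dx, dy, dz, C=(1/12, 1/12, 1/3)):
    """Base anisotropic minimum dissipation (AMD) eddy viscosity on a uniform grid.

    u, v, w: 3-D velocity arrays with (x, y, z) ordering; dx, dy, dz: grid spacings;
    C: modified Poincare constants per direction. Gradients use second-order central
    differences (an assumption; the reference code is pseudo-spectral horizontally).
    """
    vel = (u, v, w)
    spacing = (dx, dy, dz)
    # Scaled gradients: g[k, i] = sqrt(C_k) * delta_k * d(u_i)/dx_k
    g = np.empty((3, 3) + u.shape)
    for k in range(3):
        for i in range(3):
            g[k, i] = np.sqrt(C[k]) * spacing[k] * np.gradient(vel[i], spacing[k], axis=k)
    # Resolved strain rate S_ij = 0.5 * (d_j u_i + d_i u_j), with unscaled gradients
    S = np.empty((3, 3) + u.shape)
    for i in range(3):
        for j in range(3):
            S[i, j] = 0.5 * (np.gradient(vel[i], spacing[j], axis=j)
                             + np.gradient(vel[j], spacing[i], axis=i))
    numerator = -np.einsum('ki...,kj...,ij...->...', g, g, S)
    denominator = np.einsum('km...,km...->...', g, g) + 1e-20   # avoid division by zero
    return np.maximum(numerator, 0.0) / denominator

# Synthetic example: a logarithmic shear profile with small random perturbations.
nx = ny = nz = 16
dx = dy = 2 * np.pi / nx
dz = 1.0 / nz
z = (np.arange(nz) + 0.5) * dz
rng = np.random.default_rng(1)
u = np.log(z / 1e-4)[None, None, :] / 0.4 + 0.1 * rng.standard_normal((nx, ny, nz))
v = 0.1 * rng.standard_normal((nx, ny, nz))
w = 0.1 * rng.standard_normal((nx, ny, nz))
nu_t = amd_eddy_viscosity(u, v, w, dx, dy, dz)
print(nu_t.shape, float(nu_t.mean()))
```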
The computational domain is uniformly discretized with n x , n y , and n z points, with grid sizes of Δ x = L x /n x , Δ y = L y /n y , and Δ z = L z /n z in the streamwise, spanwise, and vertical directions, respectively, where L x , L y and L z are the dimensions of the computational domain in the streamwise, spanwise, and wall-normal directions. The computational planes are staggered in the vertical direction, with the first vertical velocity plane at the ground. The first grid point for the streamwise and spanwise velocities and the potential temperature is located at Δ z /2 above the ground. Free-slip boundary conditions with zero vertical velocity are used at the top boundary. The instantaneous shear stress and buoyancy flux at the surface, which form the lower boundary condition, are modelled with Monin-Obukhov similarity theory (Moeng 1984) using the resolved velocities and temperature at the first grid point; here τ xz|w and τ yz|w denote the instantaneous surface shear-stress components and q * the surface buoyancy flux. The friction velocity is represented by u * , and z 0 is the roughness length for momentum. The filtered velocities at the first grid level in the streamwise and spanwise directions are represented by ũ and ṽ, respectively, and α = tan −1 (ṽ/ũ). The vertical grid size is denoted by Δz, θ s is the potential temperature at the surface, and z os is the thermal surface roughness length. In classical works, the thermal surface roughness length is set to z os = z o /10 (Brutsaert 1982). However, to facilitate easier comparison, we follow the reference cases (Beare et al. 2006), which use z os = z o , in the present study. For the convective boundary layer we follow Brutsaert (1982) for the stability corrections ψ M and ψ H , with ζ = (1 − 16z/L) 1/4 and L = −(u * 3 θ 0 )/(κ g q * ) the Obukhov length. For the stable boundary layer we use the stability correction suggested by Beare et al. (2006). In addition to the surface stresses, the vertical gradients of the velocity at z 1 = Δz/2 are required for the calculation of the SGS stress; they are given by the corresponding similarity relations for ∂ũ/∂z and ∂ṽ/∂z. It is worth mentioning here that the surface similarity relations are defined for the mean stresses and fluxes. However, Moeng (1984) used this mean relation to calculate the 'instantaneous' stresses, which is now an established practice in the literature, although this procedure also contributes to the logarithmic-layer mismatch (Brasseur and Wei 2010; Yang et al. 2017). To reduce the effect, Albertson (1996) proposed calculating the mean gradients with the similarity theory and the fluctuations with finite differences. This technique has been used in our code. Furthermore, for the neutral boundary-layer cases, the correction proposed by Porté-Agel et al. (2000) is used to further reduce the effect of the log-layer mismatch. To simplify the notation, the tilde representing the spatial filtering of the LES quantities is omitted hereafter. Results and Discussion Three canonical boundary layers with neutral, stable, and unstable temperature stratification are studied here. First, a mean pressure-driven neutral boundary layer is used to assess the performance of the different models in truly neutral conditions. Second, we consider the Global Energy and Water cycle Experiment (GEWEX) Atmospheric Boundary Layer Study (GABLS-1), which is a moderately stably stratified boundary layer (Beare et al. 2006). Finally, an unstable convective boundary layer with a moderate capping inversion is considered. 
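Before turning to the individual cases, a brief sketch of how the Monin-Obukhov surface boundary condition described in the numerical-method section can be evaluated from the resolved velocity at the first grid level. The Paulson-type unstable correction and the neutral placeholder used for stable conditions below are assumptions for illustration only; the stable-case correction of Beare et al. (2006) used in the paper is not reproduced here.

```python
import numpy as np

def friction_velocity(u1, v1, z1, z0, L, kappa=0.4):
    """Friction velocity from the resolved horizontal wind at the first grid level,
    u* = kappa * V / (ln(z1/z0) - psi_M(z1/L)). psi_M below uses the classical
    Paulson-type unstable form (an assumption for illustration)."""
    V = np.hypot(u1, v1)
    if L < 0:                                     # unstable
        zeta = (1.0 - 16.0 * z1 / L) ** 0.25
        psi_m = (2.0 * np.log((1 + zeta) / 2) + np.log((1 + zeta ** 2) / 2)
                 - 2.0 * np.arctan(zeta) + np.pi / 2)
    else:                                         # neutral placeholder (stable form omitted)
        psi_m = 0.0
    return kappa * V / (np.log(z1 / z0) - psi_m)

ustar = friction_velocity(u1=6.0, v1=1.0, z1=0.5 * 6.944, z0=0.1, L=-100.0)
# partition the surface stress along the local wind direction, tau_xz|w = -u*^2 cos(alpha)
tau_xz_wall = -ustar ** 2 * 6.0 / np.hypot(6.0, 1.0)
print(ustar, tau_xz_wall)
```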
Neutral Boundary Layer We performed simulations of a neutral ABL over a rough homogeneous surface using the Smagorinsky, LASD, and AMD models. The Coriolis forces are neglected for this case, and the boundary layer is driven by an imposed pressure gradient 1/ρ(∇ p) = −u 2 * /H , where H is the domain height. The domain length L and width W are both set to 2π H . The domain is discretized with a grid of spacing Δ x = Δ y = 2πΔ z , where Δ x , Δ y , and Δ z represent the streamwise, spanwise, and vertical grid spacing, respectively. The computational domain is 1000 m in height, and the resulting grid spacings are Δ x = Δ y = 43.630 m and Δ z = 6.944 m. The roughness length used to model the surface stresses is set to z o /H = 10 −4 . The simulations are run until the flow has reached a statistically stationary state. The set-up considered here is the same as in Bou-Zeid et al. (2005). The planar-averaged streamwise velocity component obtained from the simulations using the different SGS models is presented in Fig. 1a. This velocity profile is expected to follow the logarithmic law u(z) = (u * /κ) ln(z/z o ) in the surface layer, i.e., up to z/H ≈ 0.1-0.2. The figure shows that the streamwise velocity profiles obtained from the AMD and the LASD models agree excellently with the logarithmic law in the surface layer. However, in agreement with previous studies (Porté-Agel et al. 2000; Bou-Zeid et al. 2005; Yang et al. 2017), the velocity profile obtained from the simulation with the Smagorinsky model shows a mismatch with the logarithmic profile. The resolved and modelled SGS stresses obtained from the simulations with the different SGS models are presented in Fig. 1b. The figure shows that the ratio of the resolved to modelled stresses increases with the distance from the surface. For the AMD and the Smagorinsky models, the resolved stresses increase smoothly with increasing height (Abkar et al. 2016). However, in agreement with Bou-Zeid et al. (2005), we find that the transition between the resolved and modelled stresses is very sharp in the LASD model. In all cases, the sum of the resolved and the modelled stresses follows the expected linear stress profile, which occurs at a steady state in the absence of Coriolis forces. The streamwise wavenumber spectra of the streamwise velocity are presented in Fig. 2. The spectrum is defined such that ∫ 0 ∞ E 11 (κ 1 ) dκ 1 = u ′ u ′ /2, where E 11 (κ 1 ) represents the spectral energy associated with the streamwise wavenumber κ 1 and u ′ represents the streamwise velocity fluctuations. In the inertial subrange (for κ 1 z > 1, where κ 1 is the streamwise wavenumber and z is the distance from the surface) the turbulence is unaffected by the flow configuration, dissipation, or viscosity. The flow in the inertial subrange is nearly isotropic and the spectrum generally follows the Kolmogorov −5/3 scaling (Monin and Yaglom 1971). Figure 2 shows that close to the surface the spectra obtained using the Smagorinsky model decay faster than κ −5/3 , while the LASD and AMD models accurately capture the Kolmogorov scaling. This indicates that the Smagorinsky model is too dissipative close to the surface. In the production range (κ 1 z < 1) the turbulence is affected by the flow configuration (Perry et al. 1986; Bou-Zeid et al. 2005). For a neutral ABL the spectrum in the production range is expected to follow a κ −1 scaling (Bou-Zeid et al. 2005). Figure 2 shows that the LASD and AMD models capture the κ −1 scaling in the production range better than the Smagorinsky model. 
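A minimal sketch of how a streamwise spectrum such as E 11 (κ 1 ) can be estimated from one horizontal plane of LES data; the discrete normalisation and the absence of windowing or detrending are simplifying assumptions and do not represent the authors' exact post-processing.

```python
import numpy as np

def streamwise_spectrum(u_plane, Lx):
    """One-sided streamwise spectrum E11(k1) of streamwise velocity fluctuations on a
    horizontal plane, normalised (approximately) so that sum(E11[1:]) * dk1 ~ <u'u'>/2,
    a discrete analogue of the integral definition quoted in the text."""
    nx, ny = u_plane.shape
    up = u_plane - u_plane.mean(axis=0, keepdims=True)   # fluctuations along x
    uhat = np.fft.rfft(up, axis=0) / nx                  # normalised DFT coefficients
    E = np.mean(np.abs(uhat) ** 2, axis=1)               # spanwise average of |u_hat|^2
    dk1 = 2.0 * np.pi / Lx
    return dk1 * np.arange(E.size), E / dk1

# quick check of the normalisation on synthetic (white-noise) data
u = np.random.randn(256, 256)
k1, E11 = streamwise_spectrum(u, Lx=2.0 * np.pi * 1000.0)
print(E11[1:].sum() * (k1[1] - k1[0]))                   # ~ <u'u'>/2 ~ 0.5
print(0.5 * np.var(u - u.mean(axis=0), axis=0).mean())
```

On real LES planes the resulting E 11 can then be compensated or plotted against κ 1 z to check the −5/3 and −1 regimes discussed above.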
That the LASD and AMD models predict the spectra more accurately than the Smagorinsky model indicates that these models have better dissipation characteristics, which allows the flow physics to be captured more accurately. A detailed comparison between the AMD and LASD models reveals that the LASD model captures the expected κ −5/3 and κ −1 laws in the inertial and production ranges slightly better than the AMD model. Stably Stratified Boundary Layer In this section, we study the GABLS-1 inversion-capped boundary layer with a constant cooling rate at the surface. The potential temperature is initialized with the two-layer temperature profile given by Beare et al. (2006). The initial velocity is set to the geostrophic wind speed of 8 m s −1 everywhere except at the surface. Turbulence is triggered by adding random perturbations. A random noise term of magnitude 3% of the geostrophic wind speed is added to the velocities below 50 m, and for the temperature a noise term with an amplitude of 0.1 K is added. The reference temperature θ 0 is set to 263.5 K. The Coriolis parameter is f = 1.39 × 10 −4 s −1 , which corresponds to a latitude of 73° N, and the surface cooling rate is set to 0.25 K h −1 . The simulations are performed in a computational domain of 400 m × 400 m × 400 m, which is discretized on an isotropic grid with a spacing of 2.08 m. Gravity waves are damped out by a Rayleigh damping layer with a strength of 0.0016 s −1 in the top 100 m of the computational domain (Klemp and Lilly 1978). The simulations were run for 9 h to ensure that quasi-equilibrium is reached. The statistics are gathered over the final hour. This is approximately equal to 400 large-eddy turnover times T = z i /w * , where the velocity scale is w * = (gq * z i /θ 0 ) 1/3 and z i is the boundary-layer height. As theoretical results and experimental data are very limited, we also compare our results against the high-resolution results from Sullivan et al. (2016) and Beare et al. (2006). Even though these high-resolution simulations provide a useful reference, it is worth noting that their results still depend on the surface and SGS modelling. In Table 1 we compare various integral boundary-layer properties obtained from our simulations with these high-resolution simulation results (the columns of Table 1 give, from left to right, the case name, the isotropic grid resolution Δ, the friction velocity u * , the boundary-layer height z i , the surface heat flux q * , and the momentum flux τ). We calculated the boundary-layer height z i by determining the height where the mean stress falls below 5% of its surface value (Beare et al. 2006). We note that Sullivan et al. (2016) used a different method to determine the boundary-layer height. Therefore, to avoid confusion, any comparison to the boundary-layer height from their study is left out. Beare et al. (2006) report that the time-averaged buoyancy flux gθ 0 −1 w θ ranges from −3.5 × 10 −3 to −5.5 × 10 −3 m 2 s −3 , which agrees well with the value of −3.8 × 10 −3 m 2 s −3 that we find in our simulations with the LASD and AMD models. In addition, Beare et al. (2006) report that the mean momentum flux (u w 2 + v w 2 ) 1/2 ranges from 0.06 to 0.08 m 2 s −2 , which corresponds to a friction velocity of 0.24-0.28 m s −1 . The mean momentum flux in our simulations varies between 0.064 and 0.069 m 2 s −2 , which corresponds to a friction velocity range of 0.252-0.265 m s −1 . Hence, these values lie well within the range reported in the LES intercomparison study by Beare et al. (2006). 
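The 5% stress criterion used above to diagnose z i can be sketched as follows; the synthetic stress profile and the helper name are for illustration only.

```python
import numpy as np

def bl_height_from_stress(z, tau_mag, threshold=0.05):
    """Boundary-layer height diagnosed as the height where the mean total stress
    magnitude first drops below `threshold` times its surface value (the 5%
    criterion of Beare et al. 2006 used in the text). z and tau_mag are 1-D arrays
    ordered from the surface upward."""
    tau_s = tau_mag[0]
    below = np.where(tau_mag < threshold * tau_s)[0]
    if below.size == 0:
        return z[-1]
    i = below[0]
    if i == 0:
        return z[0]
    # linear interpolation between the last point above and the first point below threshold
    z_lo, z_hi = z[i - 1], z[i]
    t_lo, t_hi = tau_mag[i - 1], tau_mag[i]
    return z_lo + (threshold * tau_s - t_lo) * (z_hi - z_lo) / (t_hi - t_lo)

# synthetic stress profile mimicking a stable boundary layer of roughly 180 m depth
z = np.linspace(1.04, 400.0, 192)
tau = 0.066 * np.maximum(1.0 - z / 180.0, 0.0) ** 1.5
print(bl_height_from_stress(z, tau))   # ~156 m for this synthetic profile
```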
A comparison of our simulation results with the high-resolution data presented by Beare et al. (2006) and Sullivan et al. (2016) shows that the AMD and LASD models provide more accurate predictions for the friction velocity, mean momentum fluxes, boundary-layer height, and surface heat flux than the Smagorinsky model. The vertical profiles of the streamwise and spanwise velocity are presented in Fig. 3a and reveal the pronounced super-geostrophic jet that is characteristic of the GABLS-1 case (Beare et al. 2006). The figure shows that the LASD and the AMD model results agree better with the high-resolution results of Sullivan et al. (2016) than the Smagorinsky model results. Figure 3b shows that the LASD and the AMD model results for the vertical temperature profile are closer to the high-resolution results by Sullivan et al. (2016) than the corresponding Smagorinsky model results. The planar-averaged vertical momentum and heat fluxes are presented in Fig. 3c, d. In agreement with the integral properties presented in Table 1, we find that the results obtained using the AMD and LASD models agree excellently. In contrast, the Smagorinsky model captures the momentum and buoyancy fluxes less accurately, due to which the velocity and temperature profiles are not accurately captured either. In Fig. 4a we compare the mean momentum flux obtained from the simulations with the theoretical model proposed by Nieuwstadt (1984). This model states that the normalized vertical momentum flux profile is given by τ/τ 0 = (1 − z/h) 3/2 , where τ is the magnitude of the local momentum flux (obtained from τ xz and τ yz ), τ 0 its surface value, and h the boundary-layer height. Nieuwstadt (1984) defined the boundary-layer height as the height where the turbulence is nearly zero. Therefore, only for this plot, the boundary-layer height is defined as the height where the turbulence is 1% of the surface values. Figure 4a shows that the mean momentum flux profiles obtained using all three SGS models agree well with the theoretical prediction. Figure 4b shows that the horizontal velocity variance u ′2 + v ′2 , computed as the sum of the resolved variance and the SGS normal stresses τ xx and τ yy , obtained using the three SGS models is similar. We note that the kinetic energy obtained using the LASD model shows a sharp peak at the first grid point above the surface. We believe this peak is related to the sharp transition between the resolved and modelled stresses in the LASD model (see Fig. 1b). To assess the effectiveness of the different models in capturing the surface-layer similarity profiles, we plot the locally scaled momentum diffusivity φ K M and the locally scaled heat diffusivity φ K H as functions of z/Λ, where Λ = −τ 3/2 /(κ g θ 0 −1 w θ ) is the local Obukhov length (Fig. 4c, d). Results from two of the models included in Beare et al. (2006), i.e., IMUK (University of Hannover) and NCAR (National Center for Atmospheric Research), are also shown in the figures to provide a better perspective. The crosses in Fig. 4c, d represent the mean values, and the shaded areas show the standard deviation of the stable-boundary-layer observations by Nieuwstadt (1984). According to the local scaling hypothesis of Nieuwstadt (1984), the quantities φ K M and φ K H can be expressed as a function of z/Λ. We find that φ K M and φ K H reach a nearly constant value for large z/Λ, which is known as the z-less stratification regime. Beare et al. (2006) report that the GABLS-1 boundary layer falls within the range of values (shaded region in Fig. 4c, d) found by Nieuwstadt (1984). Our results are consistent with the findings of Beare et al. (2006). 
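For completeness, a sketch of the local Obukhov length underlying the z/Λ scaling of Fig. 4c, d; the θ 0 factor follows the form of the buoyancy flux quoted earlier in the text, and the example flux values are only representative numbers, not simulation output.

```python
import numpy as np

def local_obukhov_length(tau, wtheta, theta0=263.5, kappa=0.4, g=9.81):
    """Local Obukhov length, Lambda = -tau^{3/2} / (kappa * (g/theta0) * <w'theta'>),
    following Nieuwstadt (1984); theta0 = 263.5 K is the GABLS-1 reference temperature."""
    return -tau ** 1.5 / (kappa * (g / theta0) * wtheta)

tau = 0.066        # m^2 s^-2, local momentum-flux magnitude (order of the simulated values)
wtheta = -1.0e-2   # K m s^-1, assumed (downward) local heat flux for illustration
print(local_obukhov_length(tau, wtheta))   # ~114 m for these example values
```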
The overlap of the results with the shaded region shows that the results fall within the limits of the observations at high z/Λ, i.e. in the z-less stratification limit. Our results are similar to the IMUK and NCAR results reported in the LES intercomparison of Beare et al. (2006). Overall, the results show that the LASD and the AMD models have similar performance, while the Smagorinsky model results are significantly different. Unstably Stratified Boundary Layer Following Moeng and Sullivan (1994), we performed simulations of an inversion-capped convective boundary layer in a computational domain of 5 km × 5 km × 2 km on a 480 3 grid. The boundary layer is driven by a constant geostrophic wind speed of 10 m s −1 and the Coriolis parameter is f = 10 −4 s −1 . The surface roughness lengths for momentum and heat are set to 0.16 m. The surface is heated with a constant surface buoyancy flux of q * = 0.24 K m s −1 . The reference potential temperature is set to θ ref = 301.78 K. The initial velocities are set to the geostrophic wind speed, with randomly seeded uniform perturbations in the region 0 < z ≤ 937 m to spin up turbulence. The potential temperature is initialized with a three-layered structure. The simulation reaches a quasi-stationary state in 10 large-eddy turnover times T = z i /w * , where the convective velocity scale is w * = (gq * z i /θ ref ) 1/3 and the boundary-layer height z i is defined as the height at which the buoyancy flux is minimum. The presented statistics are obtained from the time interval of 13T to 18T. Table 2 gives a summary of the simulation results, which are in good agreement with the results reported by Moeng and Sullivan (1994) and Abkar and Moin (2017); the columns of Table 2 give, from left to right, the case name, the grid spacing in the streamwise, spanwise, and vertical directions (Δ x × Δ y × Δ z ), the friction velocity u * , the Obukhov length L, and the boundary-layer height z i . As these studies only provide results obtained on coarser grids, we also performed a high-resolution reference simulation on a 960 3 grid (instead of a 480 3 grid) using the LASD model. It is worth noting that the integral boundary-layer properties obtained by this high-resolution simulation can still depend on the surface and SGS modelling. Nevertheless, it provides a useful reference to judge the performance of the different SGS models. The results in Table 2 show that the AMD model predicts a lower friction velocity, surface Obukhov length, and boundary-layer height than the LASD and the Smagorinsky models. Furthermore, the results obtained with the different SGS models agree reasonably well with our high-resolution results and the studies of Moeng and Sullivan (1994) and Abkar and Moin (2017). The results show that all the models predict values within an acceptable range and only minor variation is visible in the values of the different quantities. Overall, all models perform well in predicting the friction velocity and boundary-layer height. Figure 5a shows the variation of the planar-averaged horizontal wind speed u mag = (u 2 + v 2 ) 1/2 normalized by the geostrophic wind velocity. In agreement with the lower friction velocity, the AMD model predicts a higher velocity in the boundary layer than the LASD or Smagorinsky model. Figure 5b shows that the variation of the potential temperature θ/θ 0 with height predicted using the LASD and AMD models agrees excellently. 
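Returning briefly to the convective scaling introduced at the start of this subsection, the diagnostics z i , w * , and T can be computed as in the following sketch; the synthetic heat-flux profile is illustrative only.

```python
import numpy as np

def convective_scales(z, wtheta, q_s, theta_ref=301.78, g=9.81):
    """Boundary-layer height z_i (height of the buoyancy-flux minimum, as in the text),
    convective velocity scale w* = (g q* z_i / theta_ref)^(1/3), and the large-eddy
    turnover time T = z_i / w*."""
    zi = z[np.argmin(wtheta)]
    wstar = (g * q_s * zi / theta_ref) ** (1.0 / 3.0)
    return zi, wstar, zi / wstar

# synthetic CBL-like heat-flux profile with an entrainment minimum near 1 km
z = np.linspace(2.0, 2000.0, 480)
q_s = 0.24                                                        # surface buoyancy flux from the text
wtheta = np.where(z < 1000.0,
                  q_s * (1.0 - 1.2 * z / 1000.0),                 # linear decrease to -0.2 q_s at 1 km
                  -0.2 * q_s * np.exp(-(z - 1000.0) / 100.0))     # relaxation back to zero above
zi, wstar, T = convective_scales(z, wtheta, q_s)
print(zi, wstar, T)    # ~1000 m, ~2 m/s, ~500 s for this synthetic profile
```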
Due to the intense turbulent mixing, the velocity and temperature are almost constant in the mixed layer (0.1 < z/z i < 0.9), which is a characteristic feature of convective boundary layers. Overall, the AMD model is as effective as the LASD model in predicting the velocity and temperature profiles. Figure 5c compares the vertical profiles of the horizontally averaged vertical heat flux w θ . We observe that the heat flux decreases linearly over the boundary layer and reaches a minimum at the inversion height. The depth of the entrainment zone is defined as the region where w θ is negative. The Smagorinsky and the LASD model results show a wider entrainment region than the AMD results. This means that there is more turbulent mixing at the inversion height when the Smagorinsky and LASD models are used. In an unstable boundary layer the profile of the temperature fluctuations is expected to show a sharp maximum at the inversion height, where the entrainment flux becomes negative (Lenschow et al. 1980). Figure 5d compares the normalized temperature variance θ 2 /θ 2 * , where θ * = q * /w * , as a function of height obtained with the different models against the Air Mass Transformation Experiment (AMTEX) observational data by Lenschow et al. (1980) and the high-resolution reference simulation. The profiles show a sharp peak in the temperature variance at the inversion height. The origin of this peak is described by Sullivan et al. (1998), and a sharper peak corresponds to a smaller vertical extent of the entrainment zone. The AMD and LASD model results agree excellently and show a sharper peak than the Smagorinsky model results. The figure shows that the temperature variance obtained from the high-resolution reference simulation has an even sharper peak. This means that, for a given grid resolution, the Smagorinsky model strongly underestimates the temperature variance at the inversion layer when compared to the results of the LASD and AMD models. Therefore, we conclude that the LASD and the AMD models provide better predictions than the Smagorinsky model. The vertical profiles of the horizontal velocity variance obtained using the three SGS models are compared in Fig. 6a; in Fig. 6 the model results are again compared to the AMTEX observational data by Lenschow et al. (1980) and the high-resolution reference simulation. Although the results are nearly the same near the surface, there is a significant difference around the capping inversion z/z i ≈ 1. This difference is a consequence of the different depth of the entrainment zone, which we discussed above. Furthermore, the AMD model agrees better with the high-resolution reference data than the LASD and the Smagorinsky model results. Figure 6b shows that the profiles of the vertical velocity variance obtained using the LASD and AMD models also agree better with the high-resolution reference data than the corresponding Smagorinsky model results. To assess the effectiveness of the models in capturing the surface-layer similarity profiles, we plot the non-dimensional shear φ M and the non-dimensional temperature gradient φ H , where θ * = q * /u * is the surface-layer temperature scale. The vertical profiles of φ M and φ H are compared with the empirical formulations proposed by Brutsaert (1982) in Fig. 6c, d. 
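The explicit expressions for these two diagnostics did not survive extraction; assuming the standard surface-layer definitions, they can be evaluated from planar-averaged profiles as sketched below, before comparing with the empirical forms quoted next.

```python
import numpy as np

def phi_m_phi_h(z, dUdz, dTdz, u_star, q_s, kappa=0.4):
    """Non-dimensional shear and temperature gradient,
    phi_M = (kappa z / u*) dU/dz  and  phi_H = (kappa z / theta*) dtheta/dz,
    with theta* = q*/u* as in the text. These standard definitions are assumed here,
    since the paper's explicit expressions were lost in extraction."""
    theta_star = q_s / u_star
    return kappa * z * dUdz / u_star, kappa * z * dTdz / theta_star

# sanity check: a neutral log-law shear dU/dz = u*/(kappa z) gives phi_M = 1
z = np.array([5.0, 10.0, 20.0])
u_star = 0.45
print(phi_m_phi_h(z, u_star / (0.4 * z), np.zeros_like(z), u_star, q_s=0.24)[0])  # [1. 1. 1.]
```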
The empirical similarity profiles, which are applicable close to the surface (−z/L < 1.5) are φ M = (1 − 16ζ ) −1/4 and φ H = (1 − 16ζ ) −1/2 . Figure 6c shows that both the LASD and the AMD model results are relatively similar when compared to the Smagorinksy model results. The deviation in the results from the empirical formulation is expected. This is similar to the deviation of the φ M values from the theoretical value of 1 in the case of the neutral boundary layer, observed in Fig. 1d. Furthermore, we also observe the log-layer mismatch in the plots of non-dimensional shear. Figure 6d shows that the results from the different models agree equally well with the similarity profile. This is consistent with the observation of a similar temperature profile for all models (see Fig. 5b). For both quantities, φ M and φ H , we observe minor differences between the results obtained with the LASD and the AMD models. Discussion and Conclusions In summary, we compared the performance of the Smagorinsky, the AMD (Rozema et al. 2015;Abkar et al. 2016;Abkar and Moin 2017), and the LASD models (Bou-Zeid et al. 2005;Porté-Agel 2006, 2008) for neutral, stable, and unstable conditions. For neutral conditions, we find that the LASD and the AMD models capture the logarithmic velocity profile and the streamwise wavenumber spectra of the streamwise velocity more accurately than the Smagorinsky model. For the stably stratified GABLS-1 boundary layer, we compared the results obtained using the different models with the higher resolution results of Sullivan et al. (2016) and Beare et al. (2006). A comparison with these high-resolution results reveals that, on a relatively coarse grid, the LASD and the AMD models provide better predictions than the Smagorinsky model. Also for the unstably stratified boundary layer we find that the AMD and the LASD model results obtained on a relatively coarse grid agree better with the high-resolution reference than the corresponding Smagorinsky model results. Furthermore, for turbulence characteristics such as the horizontal and vertical velocity variances, temperature variances, non-dimensional shear, and temperature gradients, the results obtained with the LASD and the AMD models are nearly the same, while the Smagorinsky model results are significantly different. From all the above, we conclude that, with a given grid resolution, the LASD and the AMD models capture the flow physics better than the Smagorinsky model. While the LASD model (Bou-Zeid et al. 2005) has been successfully used for about 15 years (Stoll and Porté-Agel 2006;Calaf et al. 2010;Wu and Porté-Agel 2011;Stevens et al. 2014; Gadde and Stevens 2019) the AMD model was only developed recently (Rozema et al. 2015;Abkar et al. 2016;Abkar and Moin 2017). Here we show, using a one-to-one comparison, that the LASD and AMD models provide similar results for neutral, stable, and unstable test cases. Both the AMD and LASD models provide three-dimensional variation of the SGS coefficients, which is essential when considering heterogeneous flows, such as flows over complex terrains or in extended wind farms. An inspection of the streamwise wavenumber spectra of the streamwise velocity suggests that the LASD model predicts the velocity spectra in the inertial and production ranges slightly better than the AMD model. Our results suggest that the AMD model is nearly as good as the LASD model in simulations of horizontally homogeneous atmospheric boundary layers. 
In simulations of neutral boundary layers, the AMD model provides similar dissipation characteristics as the LASD model, which is established by a similar turbulence spectrum. The performance analysis of different sub-grid models on three grid sizes 240 3 , 480 3 , and 960 3 reveals that the computational overhead of the AMD model compared to the Smagorinsky model is 11.3% (240 3 ), 9.8% (480 3 ), 11.3% (960 3 ), while the corresponding numbers for the LASD model are 29.5% (240 3 ), 33.8% (480 3 ), 34.5% (960 3 ), respectively. The numbers show that the computational overhead of the LASD model is higher than the AMD model and increases slowly with grid size due to increased MPI communication related to the interpolations in the Lagrangian model calculations. Furthermore, we emphasise that the Lagrangian averaging in the LASD model requires the storage of time histories of various terms in the model, which requires at least 8 additional three-dimensional arrays and numerous two-dimensional temporary arrays. As indicated in the Introduction, we emphasize that the AMD model has several practical advantages and is more straightforward to implement than the LASD model. The reason is that the AMD model does not require filter operations or Lagrangian tracking of fluid parcels, which are required in the LASD model. The computational and memory overheads are essential considerations for simulations performed on modern supercomputers. Therefore, we conclude that the AMD model is an attractive alternative to the LASD model when considering large-scale LES of turbulent boundary layers. We have shown here that the results obtained with the AMD model are almost as good as the LASD model results. However, AMD model results depend on the modified Poincaré constant, which requires tuning in complex flow scenarios such as flow over cubes (urban boundary layer) or wind farms, while the LASD model is tuning free. In the future, it would be beneficial to study the performance of these models in the aforementioned complex flow scenarios.
3D cellular reconstruction of cortical glia and parenchymal morphometric analysis from Serial Block-Face Electron Microscopy of juvenile rat With the rapid evolution in the automation of serial electron microscopy in life sciences, the acquisition of terabyte-sized datasets is becoming increasingly common. High resolution serial block-face imaging (SBEM) of biological tissues offers the opportunity to segment and reconstruct nanoscale structures to reveal spatial features previously inaccessible with simple, single section, two-dimensional images, with a particular focus on glial cells, whose reconstruction efforts in literature are still limited, compared to neurons. Here, we imaged a 750000 cubic micron volume of the somatosensory cortex from a juvenile P14 rat, with 20 nm accuracy. We recognized a total of 186 cells using their nuclei, and classified them as neuronal or glial based on features of the soma and the processes. We reconstructed for the first time 4 almost complete astrocytes and neurons, 4 complete microglia and 4 complete pericytes, including their intracellular mitochondria, 186 nuclei and 213 myelinated axons. We then performed quantitative analysis on the three-dimensional models. Out of the data that we generated, we observed that neurons have larger nuclei, which correlated with their lesser density, and that astrocytes and pericytes have a higher surface to volume ratio, compared to other cell types. All reconstructed morphologies represent an important resource for computational neuroscientists, as morphological quantitative information can be inferred, to tune simulations that take into account the spatial compartmentalization of the different cell types. Introduction Automated serial-section electron microscopy (3DEM), image stack processing, segmentation and 3D reconstructions are techniques that improved substantially in the last ten years, mostly driven by connectomics; however, its ambitious aim to image an entire human brain using electron microscopy to reconstruct every single synaptic connection is still for a distant future, although, very recently, an important milestone has been reached by imaging the entire brain of the drosophila . In contrast to connectomics that requires dense reconstructions to infer total connectivity of brain networks in mammals, several recent studies have taken advantage of similar imaging and reconstruction pipelines in order to study the ultrastructure of individual cells, a task achievable through the use of sparse reconstructions and focusing the analysis on neurons (Della Santina et al., 2016;Graydon et al., 2018;Kasthuri et al., 2015;Morgan et al., 2016). A recent report investigated the different patterns of myelination in neocortical pyramidal neurons of mice from different cortical layers (Tomassy et al., 2014). Interestingly, the myelination profiles of individual reconstructed axons correlated with the size of the soma of the corresponding neuron, showing how different features might be extracted and quantified from 3D models. The resolution of electron microscopy, together with the ease of large imaged volumes, also helped to investigate the intracellular organization of neuronal organelles at the whole cell level, in contrast to earlier works, limited by conventional serial section TEM (ssTEM), therefore focusing on the ultrastructure of individual structures, such as dendritic spines. 
For instance, (Bourne and Harris, 2011) highlighted the location of polyribosomes in enlarged, potentiated spines on 3D reconstructions. In the pioneering work from Licthmann's lab (Kasthuri et al., 2015) examples of reconstructed mitochondria, as well as T synaptic vesicles from a reconstructed volume of 1500 μm 3 are shown; and in a report from De Camilli's lab, the authors explored the intimate relationship between mitochondria and endoplasmatic reticulum . A recent work from the team of Graham Knott have produced an extensive collection of data from densely reconstructed blocks from layer I somatosensory cortex (Calì et al., 2018), including quantifications of volumes and surface areas of mitochondria. Although 3DEM cannot be considered as a high-throughput technique, it can be used to address morphological modifications following pathological impairments. A very recent report (Abdollahzadeh et al., 2019) describes how white matter axons and their ultrastructure including individual mitochondria, is impaired in rats following lesions; another work has used the same technique to analyze the ultrastructure of Corpora amylacea in human samples from aged or PD patients (Navarro et al., 2018). 3DEM pipeline is also a powerful tool to analyze non-brain samples; for instance, whole cultured cells and their intracellular ER and mitochondria were reconstructed to analyze their dynamics during cell division in a very elegant report (Puhka et al., 2012). Another work has addressed an in-depth 3D analysis of the African sleeping sickness parasite Trypanosoma brucei and its detailed intracellular organelles distribution. The vast majority of 3D neuroanatomical studies based on 3DEM are focusing on neurons, with only a few examples of reconstructions and analysis of astrocytes or other non-neuronal cell types, as well as their intracellular content (Calì, 2017;Calì et al., 2016;Mohammed et al., 2017). In one report from the Ellismann group (Hama et al., 2004), entire astrocytes were rendered in 3D for the first time, and interesting quantifications such as surface area to volume ratio, or total perimeter were produced, even though images were not generated from serial sections, but from electron tomography. A later report focusing on the perivascular apposition of astrocytes, pericytes and endothelium (Mathiisen et al., 2010) showed nice 3D reconstructions of mitochondrial arrangements within astrocytes, generated from serial section electron micrographs. Here, we combine the reconstruction of entire cells together with their intracellular content by imaging a volume of brain parenchyma from layer VI somatosensory cortex of a P14 rat, using a serial blockface scanning electron microscope (SBEM) equipped with a 3View module (Coggan et al., 2018). We identified several cell types within the imaged volume: neurons, astrocytes, microglia, pericytes, endothelial cells, and a few non-identifiable cells, most likely oligodendrocytes or precursors. We reconstructed four astrocytes, microglia, pericytes and four neurons. Although the imaged block was still too small to host the complete morphology of astrocytes and neurons, the reconstructed volume still contained all the proximal processes of these cells, showing for the first time in three-dimension the complexity of the astrocytic arborization. 
Also, we reconstructed the three-dimensional ultrastructure of microglial cells and pericytes in unprecedented detail, showed the location of their mitochondria, and provided empirical rules on how to distinguish each cell type on a single section micrograph. Material and methods 2.1. Sample preparation 2.1.1. Rat fixation Brain sections were a gift from Paola Bezzi (University of Lausanne, Switzerland). P14 rats were euthanized according to the Swiss Laws for the protection of animals. Animals were hosted in the animal facility of the Department of Fundamental Neuroscience (DNF) of the University of Lausanne, and did not show any sign of distress. 14 days old rat was deeply anesthetized with an i.p. injection of 150 mg/kg sodium-pentobarbital and killed by transcardially perfused fixative (2% PFA, EMS, 2.5% GA, EMS in PB 0.1 M 200 ml). 2 h after perfusion, brain was removed, and 100 μm coronal slices were cut using a Leica VT1000 vibratome. Sections were then collected and in PB 0.1 M and stored until embedding. We then selected sections including somatosensory cortex to proceed for staining and embedding in durcupan. Staining for 3View This protocol (Deerinck et al., 2010) was designed to enhance signal for backscatter electron imaging of resin-embedded mammalian tissue at low accelerating voltages (1-3 keV). The contrast of membranes is emphasized by using heavy metals incubation steps, as well as en bloc lead aspartate staining. 100 μm thick sections were washed into cacodylate buffer (0.1 M, pH7.4) prior postfixation for 1 h in ice-cold reduced osmium (1.5%KFeCN, 2% osmium tetroxide in 0.1 M cacodylate buffer). Sections were then rinsed in ddH2O and placed in a freshly prepared, filtered thicarbohydrazide solution (1% TCH, Ted Pella in ddH2O) at room temperature. After 20 min, sections were washed in ddH2O at room temperature and placed in a 2% osmium tetroxide solution in ddH2O for 30 min. Following this second exposure to osmium, tissues were rinsed again and then placed in 1% uranyl acetate (aqueous) and left overnight at 4°. The next day, en bloc Walton's lead aspartate staining was performed: Sections were rinsed with ddH2O at rising temperature (up to 60°), then immersed into warm (60°) lead aspartate solution (10 ml H2O 0.04 g aspartic acid, 0.066 g lead nitrate. pH at 5.5 at 60°) for 20 min, inside an oven to assure that the temperature is kept constant during the incubation. Then sections were washed again in ddH2O at room temperature, prior embedding in durcupan resin. Embedding in durcupan After heavy membrane staining, tissues were embedded in Durcupan resin (Sigma-Aldrich; components A and B, 33.3 g; components D and E, 1 g). Sections were dehydrated in aqueous solutions containing increasing concentration of ethanol (50%, 70%, 96%, 100%), prior to placing tissue into a 50% durcupan -ethanol mix. The mix is then replaced gently with increasing concentration of durcupan, until reaching pure resin. Sections were left so overnight, then embedded in a thin layer of fresh resin in an aluminum weigh boat and place is a 60°oven for about 24 h. 3View imaging The following procedure is used to mount specimens to minimize specimen charging. Region of interests corresponding to somatosensory cortex were dissected under a stereoscopic microscope using a razor blade, then mounted on aluminum specimen pins (Gatan, Pleasanton, CA) using cyanoacrylate glue. 
The blocks are faced and precision-trimmed with a glass knife to a square of approximately 0.5-1.0 mm side length, so that tissue is exposed on all four sides. Silver paint (Ted Pella) is used to electrically ground the exposed edges of the tissue block to the aluminum pin, taking care not to get the paint on the block face or on edges of embedded tissue that will ultimately be sectioned. The entire surface of the specimen is then sputter coated with a thin layer of gold/palladium (25 mA, 3 min). After the block is surfaced with the 3View ultramicrotome (Gatan, Pleasanton, CA) to remove the top layer of gold/palladium, the tissue can be imaged using BSE mode. The stack used for reconstruction consisted of 1513 images, with sections cut at 50 nm thickness and a side length of 101 μm. Micrographs were acquired using a FEI Quanta 200 SEM. Each image has 4096 × 4096 pixels, with a pixel size of 20 nm. The acquisition parameters of the microscope were adjusted as the best tradeoff between imaging quality and ultramicrotomy: high-vacuum mode, voltage 3 kV, current 75 pA, pixel time 7.5 ps, spot size 3, magnification 1500. 3D reconstruction, proofreading and rendering Serial micrographs were first registered using the MultiStackReg plugin, freely available in Fiji (Thevenaz et al., 1998). Neurons and glial cells were then segmented using a hybrid pipeline involving TrakEM2 for manual segmentation and proofreading, and a version of iLastik that uses TrakEM2 masks as seeds for semi-automated segmentation. Mitochondria, blood vessels and cell nuclei were automatically segmented. For blood vessels and nuclei, we used iLastik on a downsampled version of the stack. The models extracted from these two software packages were then imported and rendered using Blender. Images were processed using Adobe Photoshop to adjust brightness and contrast, and to add pseudo-colors to highlight structures of interest. Analysis All analyses were performed using GraphPad Prism 6. All data are expressed as mean ± standard error of the mean (SE), and error bars represent SE. Quantifications on volume data (volume measurements, skeletons) or surface meshes (surface area and length measurements) were performed using Avizo or custom Python tools for Blender based on NeuroMorph (Jorstad et al., 2015), respectively. Sphericity of the nuclei was calculated as Ψ = π 1/3 (6V n ) 2/3 /SA n , where V n and SA n are the volume and surface area of the nucleus, respectively. Maximum and minimum axes of the nuclei were obtained by fitting an oriented bounding box around each of the nuclei, using a custom script. Quantifications of the mitochondrial distribution and qualitative visual assessments using Virtual Reality (VR) were based on the GLAM algorithm (Agus et al., 2018a; Calì et al., 2017). Specifically, we modelled mitochondria as emitting surfaces, and we evaluated energy absorption on the surrounding processes according to a vertex-based radiance transfer scheme. For each vertex x p of the 3D surface reconstruction of a cellular process, the absorption value is computed by considering the K nearest-neighbour (KNN) mitochondrion vertices x i m . In this way, we were able to map the localization of mitochondria on the processes according to the halo projected on the membrane, in the form of an absorption map. We then evaluated the highest absorption peaks and marked them as spheres with varying radii. 
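Two of the simpler morphometric measures defined above, sphericity and the bounding-box axes, can be sketched as follows; the PCA-aligned box is an assumption, since the exact oriented-bounding-box fit used in the custom script is not specified.

```python
import numpy as np

def sphericity(volume, surface_area):
    """Sphericity = pi^(1/3) * (6V)^(2/3) / SA; equals 1 for a perfect sphere."""
    return np.pi ** (1.0 / 3.0) * (6.0 * volume) ** (2.0 / 3.0) / surface_area

def bounding_box_axes(vertices):
    """Maximum/minimum axes of an oriented bounding box, approximated here by a
    PCA-aligned box around the mesh vertices."""
    centred = vertices - vertices.mean(axis=0)
    _, _, axes = np.linalg.svd(centred, full_matrices=False)   # rows = principal directions
    extents = np.ptp(centred @ axes.T, axis=0)                 # side lengths along those directions
    return extents.max(), extents.min()

# sanity check: a sphere of radius 5 um has sphericity 1
r = 5.0
print(sphericity(4.0 / 3.0 * np.pi * r ** 3, 4.0 * np.pi * r ** 2))   # 1.0
pts = np.random.randn(2000, 3) * np.array([4.0, 2.0, 1.0])            # elongated point cloud
print(bounding_box_axes(pts))
```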
To evaluate their length and radii, mitochondria were fed into an automated custom-made pipeline that skeletonized the morphologies from the meshes and calculated the length and radii of individual tubules. Reagents All chemicals used were from Sigma-Aldrich, unless otherwise specified. Limitations Electron microscopy is still the only method available to resolve details at nanometer resolution. It is therefore the technique of choice in neuroscience to study synapses and intracellular organelles (Knott and Genoud, 2013). Nevertheless, the treatments used to render a tissue sample suitable for EM imaging are known to create severe cellular and extracellular artifacts. Extracellular space which is estimated around 20% of the in vivo brain volume, drops to 1-2% when imaged under EM (Nicholson and Hrabětová, 2017)most likely due to swelling of astrocytic processes during perfusion. It has been shown that the use of high-pressure freezing, a technique using vitrification of the water present in the sample to fix the tissue, followed by freeze-substitution protocol better preserves the extracellular space. Under these conditions astrocytic processes appear qualitatively different from the ones in chemically fixed samples (Korogod et al., 2015), suggesting that the latter preparation might not be suited for an in-depth analysis of the ultrastructure of these cells. While high-pressure freezing might seem ideal for ultrastructural studies, it cannot be used reliably to preserve large samples that are hundred microns thick, leaving no other choice to date but to use chemically fixed tissue. Moreover, since high-pressure freezing has not yet been applied systematically to various brain structures, it is not clear if the ground-truth representing the real cellular ultrastructure can be determined with this technique. Results From an imaged total volume of 750,000 cubic microns ( Fig. 1A; 100 × 100 × 75 μm), we segmented and reconstructed in 3D a total of 16 cells, including their cell bodies, out of 186 in total (Fig. 1B), two blood vessels and 213 myelinated axons ( Fig. 1C and D; Supplementary Video 1). Cell classification The total number of cells present in the volume was calculated by reconstructing all the nuclei present in the stack. The total cell density was therefore evaluated as 220,000 cells per cubic millimeter. A total number of 186 cells were present in the cube. The type of cell was then classified by visual assessment by looking at specific morphological traits (Table 1; Fig. 2). We classified five major types of cells (Supplementary Table 1): 124 neurons, 22 astrocytes, 17 microglia, 11 pericytes, 6 endothelial cells, and 6 of an unidentified, unknown cell type, most likely oligodendrocytes, oligodendrocytes precursors (OPC), or NG2 cells (Butt et al., 2002;Chelini et al., 2018). Nuclei can be also automatically classified by approximating their shape with appropriate fitting functions, such as hyperquadrics or spherical harmonics, and by analyzing the parameters of these functions (Agus et al., 2018c(Agus et al., , 2019b. Neurons were identified by the presence of a dendritic arborization developing from the soma (the proximal dendrites), and most importantly by the presence of spines and synaptic contacts ( Fig. 2A). Their nuclei are almost spherical, and on average largest among the studied cell types (Fig. 3C, D). 
Interestingly, all neuronal nuclei in this sample show a dark artefact typical of electron accumulation in portions of the sample where there is a low density of material, and which are therefore poorly conductive, similar to the lumen of the blood vessel (Supplementary Fig. 1). Astrocytes, unlike neurons, do not show specialized domains along their processes that would allow recognizing them readily. Their cytosol is generally clearer than that of neurons or other types of glia (microglia), but the largest processes rising directly from the soma are typically packed with intracellular organelles such as ER and mitochondria. The absence of synaptic contacts, the high branching order resulting from many small processes intermingled within the neuropil, and the presence of at least one perivascular endfoot are some distinctive traits of astroglial cells. Their nuclei have an irregular shape and are smaller than in neurons (Table 2). Microglial cells were identified by the presence of branches that appeared regular in size and direction, without the extensive branching along their path seen in astrocytes. Also, 35% of the total microglial cells in the imaged volume have a cytosol darker than in neurons and astrocytes, a condition seen before (Bisht et al., 2016). The size of their nuclei is smaller than that of neurons and astrocytes, and their shape is irregular and highly diverse among individual microglial cells, depending on the shape of their soma, which might need to flatten in order to move within the neuropil. Vasculature was easy to identify as a hollow round pipe crossing the image stack. There are two cell types wrapped tightly around the vessel: the endothelial cells and the pericytes. Endothelial cells are very easy to identify, as they are directly facing the lumen of the blood vessel and are therefore in direct contact with the blood flow. Their nuclei are also flattened, but have a regular, round shape when viewed along the x-axis, giving them the appearance of a "potato chip" in 3D, confirmed by a small sphericity index (0.67; Fig. 3C). Endothelial cells have a dark cytosol, and their cross-section is flat and thin, roughly ranging from 300 nm (in portions without organelles) up to 4 μm (at the thicker portion of the nucleus). The average size of their nuclei was the smallest among all those analyzed, together with the smallest sphericity index (0.47; Fig. 3C). Pericytes sit on top of endothelial cells on one side, and interface with the astrocytic endfoot on the other side. They have a very tight, dark, electron-dense membrane interfacing with the endothelium, and also have a very dark cytosol, similar to endothelial cells. Lastly, so-called "unknown cells" (4 in total) have a dark cytosol and their soma is filled with smooth ER. Their morphology is reminiscent of oligodendrocytes or their precursors (OPCs), which, given the developmental stage of the brain sample (P14), are more likely. (Partial caption of Fig. 2: microglia (right panels) can be recognized by a slightly darker cytosol and the presence of thin, straight processes with a fairly regular diameter; (C) oligodendrocytes or OPCs (dark green) have a darker cytosol, but are considerably bigger than microglia, and their soma is very rich in ER; (D), (E) pericytes (red) face the brain parenchyma from their convex side and the vasculature endothelium (yellow) from their concave side. Scale bars: A, B, C, main panels, 10 μm; insets, 2 μm; D, E, main panels, 5 μm; insets, 1 μm.)
Surface area to volume ratio Both the surface area and the volume of each reconstructed morphology (Fig. 4A) were calculated; nevertheless, because the total volume sampled was not sufficient to contain the complete structure of the identified neurons and astrocytes, comparing absolute numbers was not feasible. We therefore decided to evaluate surface-area-to-volume ratios, which can be considered as a measure of how much the morphology of a cell is adapted to interact with its environment (Supplementary Table 4). Neurons and microglia differ only slightly (neurons, 1.63 ± 0.05; microglia, 2.8 ± 0.3, N = 4, p < 0.05), whereas astrocytes and pericytes have higher ratios and are not significantly different from each other (astrocytes, 4.39 ± 0.3; pericytes, 4.26 ± 0.09, N = 4) (Fig. 4B). Interestingly, linear regressions of the surface area versus volume graphs (Fig. 4C) show that in this respect astrocytes and pericytes are very similar to each other (they fit on the same slope, R 2 = 0.97), and neurons and microglia match each other (R 2 = 0.94). Astrocytes Astrocyte data were subjected to a number of qualitative assessments, and features of interest were quantified. First, by applying the same thinning algorithm to the 3D volume data of the segmented astrocytes we could extract a GFAP-like morphology that allowed visual identification of their soma and main processes (Fig. 5B). Each of the four astrocytes has 4 primary processes branching out from the soma. By analyzing the skeletons, we found that these astrocytes have 1853, 6001, 6240 and 1853 branches, respectively (Fig. 5C). The number of branches correlates with the size of the cell (see Fig. 4C). For each astrocyte, we calculated a maximum and minimum radius by fitting an ellipsoid encompassing the outer surface of the cell and centered on the nucleus of each astrocyte. In order to make a rough estimate of the percentage of the cell reconstructed, as all astrocytes have processes cut at the boundary of the image stack, we assumed that the cells are roughly symmetrical and calculated the percentage of the volume of the fitted ellipsoid lying within the bounding box. We found that astrocytes 1 and 2 are the least complete ones (52% and 66% complete, respectively), followed by astrocytes 3 and 4, both complete at 78%. Of the four reconstructed astrocytes, three of them (1, 3 and 4) have two perivascular processes (example from astrocyte 1 in Fig. 6A); the remaining cell (number 2) is lying on a bifurcation, hence its one perivascular process has a larger surface area than in the other three astrocytes (Fig. 6B, empty square). The surface area of the process facing the lumen of the blood vessel, which in all seven cases appeared as a thin, flat, curved surface, was quantified using the NeuroMorph family of Blender add-ons, and its ratio to the volume of the process (Fig. 6C; Supplementary Table 5) was between 1 and 2.5. Indeed, a linear regression of the volume-surface area representation of the data (Fig. 6B, red dotted line) had a coefficient close to 1 (0.97). 
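The 'percentage complete' estimate used above for the astrocytes can be sketched with a simple Monte-Carlo evaluation of the fitted ellipsoid against the image bounding box; the axis-aligned ellipsoid and the example centre/radii are assumptions for illustration, not values from the paper.

```python
import numpy as np

def ellipsoid_fraction_in_box(centre, radii, box_min, box_max, n=200_000, seed=0):
    """Monte-Carlo estimate of the fraction of an (axis-aligned) ellipsoid's volume
    that lies inside the image bounding box -- a proxy for the 'percentage complete'
    estimate used for the astrocytes."""
    rng = np.random.default_rng(seed)
    # sample uniformly inside the unit ball, then stretch to the ellipsoid
    pts = rng.normal(size=(n, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    pts *= rng.random((n, 1)) ** (1.0 / 3.0)
    pts = centre + pts * radii
    inside = np.all((pts >= box_min) & (pts <= box_max), axis=1)
    return inside.mean()

# hypothetical astrocyte with its soma near one face of the 100 x 100 x 75 um stack
frac = ellipsoid_fraction_in_box(centre=np.array([30.0, 50.0, 10.0]),
                                 radii=np.array([35.0, 35.0, 30.0]),
                                 box_min=np.zeros(3),
                                 box_max=np.array([100.0, 100.0, 75.0]))
print(f"{100 * frac:.1f}% of the fitted ellipsoid lies inside the stack")
```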
Neurons Four neurons whose nuclei were complete were randomly selected out of the 124 present in the stack, segmented, and their morphologies reconstructed in 3D (Fig. 7A). They had between 4 and 7 proximal dendrites, with neuron number 2 not showing any axon, possibly because the process is cut at the proximal portion of the neuron. Although neuron number 1 had the smallest number of spines within the imaged volume (262), it had the highest total dendritic length (816.1 μm) and the longest dendrite (106.7 μm) compared to the other neurons. Consequently, its spine linear density (Fig. 7B) was significantly different compared to the other 3 neurons (#1, 0.35 ± 0.033 spines/μm; #2, 0.71 ± 0.06 spines/μm, p < 0.001; #3, 0.62 ± 0.06 spines/μm, p < 0.01; #4, 0.54 ± 0.05 spines/μm, p < 0.05). A similar analysis was performed on the surface density of spines, by calculating the volume of individual dendrites, and the results revealed the same trend (Supplementary Table 6). Linear regression of the dendritic volumes versus the number of spines indeed confirms that neuron 1 differs from the other ones. We evaluated the minimum and maximum radius by centering a sphere in the soma of each neuron and growing it until it reached the shortest and longest process of each cell, respectively. In all cases, the maximum radius was close to the longest dendrite of each neuron. Pericytes and vasculature Four pericytes out of 11 were reconstructed in 3D (Fig. 8). The reconstructed ones (Fig. 8B) have a thin cytosol, and most of their surface is either facing the endothelium surrounding the lumen of the blood vessel, or the neuropil on the opposite side. Since pericytes do not have a clear symmetry around their nuclei, we calculated their maximum and minimum axes from their bounding box. Pericytes 2 and 3 are both lying on a bifurcation, and are also the ones with the highest occupancy, with maximum axes of 42.1 and 45.7 μm, respectively. Two vasculature segments were found in the sample, showing a black artefact in the middle of their lumen due to the imaging technique, which induces electron accumulation within empty resin. The largest segment is 103 μm long (segment 1) and crosses the entire image stack along the z-axis. Another segment (number 2) is 49.45 μm long, from which another bifurcation departs (segment 3), at the top of which astrocyte 2 is lying (Fig. 5A). By observing the location of the nuclei, it was possible to estimate the density of pericytes along the length of the blood vessel (Fig. 8C). We found a positive correlation between the density and the diameter of the vessel. Microglia Four microglial cells out of 17 were reconstructed in 3D (Fig. 9). The morphology of the four cells varies strongly, although numbers 2 and 3 have a similar number of total branches (54 and 49, respectively) and their total volumes are quite similar (545 μm 3 and 550 μm 3 , respectively). The number of branches has a non-linear correlation with the volume of the cells (Fig. 9B), but does not appear to be correlated with their surface area (Fig. 9C; Supplementary Table 7). To evaluate the occupancy of each cell, we evaluated the minimum and maximum radius by placing a sphere in their soma and growing it until it reached their shortest and longest process, respectively. We found that microglia 3 and 4 have the shortest minimum (7.7 μm and 7.8 μm, respectively) and maximum (29.7 μm and 35.4 μm, respectively) radii. Mitochondria All the mitochondria from the sample were segmented together, using an automated classifier that was trained on a manually segmented portion of the stack, and then various qualitative and quantitative analyses were performed based on cell and cell-type distribution (Fig. 10; Supplementary Tables 8-10). 
The percentage of mitochondria per cell type, calculated as the ratio between the volume of mitochondria and the volume of each cell (Fig. 10E), was not significantly different among the cell types, although the quantification revealed a similar trend between pericytes and astrocytes, and between neurons and microglia (pericytes, 9.6 ± 1.57%; astrocytes, 8 ± 1.1%; neurons, 7.3 ± 0.4%; microglia, 6.4 ± 0.4%; N = 4 per cell group). From a qualitative point of view, the spatial distribution of mitochondria in neurons and microglia (Fig. 10B and D, respectively) was similar, as it follows the skeleton of the cells from the nucleus to their distal processes. Within pericytes, mitochondria were evenly distributed within the cytosol (Fig. 10B), whereas in astrocytes (Fig. 10A) they followed the lines drawn by the major processes, and were particularly dense in the proximity of the soma and in the perivascular endfoot. Using an automated skeletonizer, we extracted quantitative information about the length and radius of mitochondria per cell type (Supplementary Tables 9 and 10), within the soma and the processes (Fig. 10F and G, respectively). In the soma, astrocytes had the longest average mitochondrial length (2.23 ± 0.11 μm, N = 203), followed by pericytes (2.13 ± 0.15 μm, N = 125) and neurons (1.95 ± 0.03 μm, N = 1036), while microglia had the shortest average (1.71 ± 0.11 μm, N = 93). In the processes, neurons had the longest average mitochondrial length (2.86 ± 0.08 μm, N = 1014), followed by astrocytes (2.61 ± 0.05 μm, N = 1209) and microglia (2.23 ± 0.10 μm, N = 258), while pericytes had the shortest mitochondria (1.9 ± 0.06 μm, N = 317). Finally, in order to easily infer the intracellular distribution of the mitochondria and to overcome the visual occlusion caused by the cellular complexity, we adopted a strategy similar to the GLAM glycogen mapping (Agus et al., 2018b; Calì et al., 2017). We considered mitochondria as light sources, mapped their localization on the processes based on the halo projected on the membrane, and reported the highest absorption peaks as spheres (Fig. 11A and B). We then proceeded to count the number of peaks (Fig. 11B) in different cell compartments, per cell type (Fig. 11C). Virtual Reality (VR) was used to better understand the data (Supplementary Video 3). We found that in the vast majority of cases high-density peaks were located along cellular processes rather than in the somatic part of the cell. As pericyte morphology is best described in relation to the lumen of the blood vessel (inside) or the parenchyma (outside), these terms were used to describe the location of the peaks. 
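A sketch of the GLAM-style absorption mapping and peak marking described above; the inverse-square kernel and the greedy peak selection are assumptions chosen for illustration (the exact formulation of Agus et al. 2018a may differ), and SciPy is assumed to be available for the nearest-neighbour search.

```python
import numpy as np
from scipy.spatial import cKDTree

def mitochondrial_absorption(process_verts, mito_verts, k=16, eps=1e-3):
    """Per-vertex 'absorption' of light emitted by mitochondria, in the spirit of the
    GLAM mapping: for each process vertex, sum an inverse-square falloff over the K
    nearest mitochondrion vertices (kernel assumed for illustration)."""
    tree = cKDTree(mito_verts)
    dist, _ = tree.query(process_verts, k=k)
    return np.sum(1.0 / (dist ** 2 + eps), axis=1)

def absorption_peaks(process_verts, absorption, n_peaks=5, min_sep=2.0):
    """Greedy selection of the highest-absorption vertices, keeping peaks at least
    `min_sep` micrometres apart (a simple stand-in for the peak marking)."""
    order = np.argsort(absorption)[::-1]
    peaks = []
    for i in order:
        if all(np.linalg.norm(process_verts[i] - process_verts[j]) >= min_sep for j in peaks):
            peaks.append(i)
        if len(peaks) == n_peaks:
            break
    return np.array(peaks)

# toy data: a 'process' sampled along a line, mitochondria clustered near one end
proc = np.column_stack([np.linspace(0, 50, 500), np.zeros(500), np.zeros(500)])
mito = np.random.default_rng(1).normal(loc=[40.0, 0.5, 0.0], scale=1.0, size=(300, 3))
a = mitochondrial_absorption(proc, mito)
print(proc[absorption_peaks(proc, a)][:, 0])   # peak x-positions cluster near x = 40
```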
The total length of the reconstructed fibers was 11,295 μm, with an average length per bundle of 97.63, 10.47, 75.51 and 44.89 μm, respectively, all with a standard error between 1 and 3% (Supplementary Fig. 2). All the fibers were crossing the stack at approximately 45° relative to the z direction, towards the proximal dendrite of neuron 1. The location and orientation of the sample, together with the number of axons, prompt us to assume that they originate from the corpus callosum (Supplementary Video 2).

Discussion

The aim of this work was to provide for the first time a detailed description of the three-dimensional morphology of glial cells, with a particular emphasis on astrocytes, a dataset that is still largely lacking in the literature with a few exceptions (Calì, 2017), and to adapt the tools originally developed for neuronal image segmentation, reconstruction and analysis to glial cells.

Cell recognition

The first challenge was to recognize the different categories of cells using EM. Ultrastructural analyses at the EM level are usually carried out in the brain neuropil, and analyses on larger fields of view are limited to neurons. Although there is a general consensus regarding the recognition of neurites and postsynaptic densities (PSDs), as well as their type (Shepherd and Harris, 1998; Stuart et al., 2016), it is rare to find similar definitions for other brain cell types, such as glial cells and the endothelium of the blood vessel (Table 1). This is despite the fact that endothelial structures are very simple to identify, as they are facing the blood vessel lumen (Fig. 2D, E). For glial cells, there is much less consensus, probably due to limited literature. Comments on astrocyte structure in EM tend to be general and often misleading. Commonly, astrocytes are referred to as empty structures with a relatively clear cytosol (Witcher et al., 2010, 2007); this misconception is probably due to the fact that investigations of astrocytic ultrastructure in the last twenty years have been mostly focused on gliotransmission, that is, on the ultrastructural features in astrocytes that could subserve the release of signaling molecules (Calì, 2017). These studies have looked at perivascular, lamelliform processes on separate 2D images, which might appear clear and empty, depending on the section shown. In reality, larger processes are full of organelles, such as ER and mitochondria, which we observed in close proximity to the soma (Fig. 2A). Furthermore, perivascular processes (Mathiisen et al., 2010; Fig. 6) appear more complex than expected, showing local synthesis machinery such as the Golgi apparatus (Boulay et al., 2017), as well as glycogen granules (Agus et al., 2019a; Calì et al., 2016).

Microglia are very mobile cells that patrol their neighborhood in search of inflammation (Nimmerjahn et al., 2005), and have recently been shown to be responsible for synapse pruning (Thion and Garel, 2018). Microglia squeeze through other cells in the brain parenchyma. Their morphology is adapted to that, and these cells are easy to identify by the small size of their soma (Fig. 2B) and the convoluted shape of their nucleus (Savage et al., 2018). Interestingly, 6 out of the 17 microglia identified in the stack had a darker cytosol than other cell types. This phenotype has been reported previously (Bisht et al., 2016; Savage et al., 2018) and associated with a possible activated state of the cell.
First reports about dark cells date back to Peters (Murakami et al., 1997); this observed phenotype was not limited to microglia, but extended to so-called "dark neurons", as well as other cell types, similar to another category of cells that we found and referred to as "unknown". Compared to the dark microglia observed, these cells have an even darker cytosol, and are probably oligodendrocytes or OPCs (oligodendrocyte precursor cells).

Comparison of cell numbers between glial cells (astrocytes, microglia, pericytes and unknown, together) and neurons is of interest (Supplementary Table 1). Although it was not our aim to make a stereological count, as the volume is probably too limited, the estimation based on the nuclei present in the sample results in 30% glia, the remaining 70% being neurons. Contrary to a popular misconception that astrocytes are the most abundant cell type in the CNS, the total number of glia does not surpass the number of neuronal cells. Estimations using the isotropic fractionator, a technique evaluating cell numbers based on nuclear staining, show that the number of neurons is higher than that of glial cells in rodents and primates, and is the same in humans (Herculano-Houzel, 2014; von Bartheld et al., 2016). Interestingly, this ratio depends on the brain region: for instance, in the cortex, the glia-to-neuron ratio indicates more glia than neurons, which is the opposite of what we found. Two possible explanations have been presented. First, the density of neurons is heterogeneous across cortical layers. This is supported by data from layer I, which has no (or very few) neurons, which means that by averaging numbers over the entire cerebral cortex, the ratio could be unbalanced by the larger amount of glia in upper layers. Layers V and VI are known to have higher neuronal density (Santuy et al., 2018). Second, cellular densities must change across developmental stages, and the present dataset is from a rapidly changing P14 brain.

Fig. 6. Analysis of astrocytic perivascular arrangement. (A) Rendering of one of the reconstructed astrocytes with a detail from its two perivascular processes (red). (B) Quantification of the perivascular surface area (shown in A in red as an example). (C) Ratio between surface area of the contact and the volume of the perivascular process.

Size of nuclei

All mammalian cells of one individual store the same genetic material in their nuclei. For rats, the Rat Genome Sequencing Project Consortium estimated the size of the rat genome to be 2.75 Gb, between human (2.9 Gb) and mouse (2.6 Gb) (Rat Genome Sequencing Project Consortium, 2004), which, considering a weight of 660 g/mol per base pair, corresponds roughly to 7 pg of nuclear DNA within each normal diploid cell, although a recent report shows that human cell types might go well beyond that expected value (Gillooly et al., 2015). From a morphological point of view, we noticed differences in the appearance of the nuclei, both in 2D and in 3D, that we quantified by calculating the sphericity, an index of how close the shape of a particle is to a sphere (Supplementary Table 2). Nuclei from neurons, astrocytes and unknown cells had a sphericity index closer to 1 (Fig. 3D), while pericyte and endothelial cell nuclei had the lowest, confirming the qualitative visual assessment of their shape. This observation matched the ratio between the maximum and minimum axes (Fig. 3C; Supplementary Table 3), which was closer to one for the former group and around 3 for the latter.
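For reference, a sphericity index of this kind can be computed directly from the measured nuclear volume and surface area. The sketch below assumes the standard Wadell definition (the text does not specify which formulation was used) and uses placeholder values rather than the paper's measurements.

```python
import numpy as np

def sphericity(volume, surface_area):
    """Wadell sphericity: equals 1.0 for a perfect sphere, smaller for flatter shapes."""
    return (np.pi ** (1.0 / 3.0)) * (6.0 * volume) ** (2.0 / 3.0) / surface_area

# Placeholder nuclear volume (um^3) and surface area (um^2) for two contrasting shapes.
print(sphericity(volume=900.0, surface_area=480.0))   # close to 1: roughly spherical
print(sphericity(volume=95.0, surface_area=160.0))    # clearly below 1: flattened
```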
Microglial cells, which are highly mobile and need to adapt their shape in order to migrate from site to site of the brain parenchyma, were somewhere in between these two groups, showing some variability. Interestingly, we found that by measuring the volume of the nuclei of the different cell types (Supplementary Table 2), there was a substantial variability (Fig. 3E), with differences up to 10-fold when comparing the smallest nuclei (pericytes and endothelium, less than 100 cubic micrometers) with the larger ones (neurons, more than 900 cubic micrometers). If they all contain the same amount of genetic material, the difference must lie in the folding state of the chromatin, and possibly in the expression of histone deacetylases and other epigenetic effectors, which are known to be affected during aging. Their impairment has been shown to favor phenomena like LTP and to improve long-term memory and cognition (Cao et al., 2018; Maze et al., 2013; Penney and Tsai, 2014; Walsh et al., 2015; Shu et al., 2018) by producing a more fluid nucleus that facilitates the access of transcription factors to nuclear information, and therefore protein turnover (Maze et al., 2013). As further proof of this point, an imaging artefact was useful to confirm that the neuronal nucleus is less "dense". As we imaged using back-scattered electrons from the block surface of the tissue, in order to obtain images similar to the more familiar TEM ones, non-reflective, less conductive surfaces absorb electrons and therefore appear black. Notably, this occurs in the lumen of blood vessels, where there are no proteins and only resin is present; therefore no combination of osmium, lead and uranium can generate an image, because there is nothing to bind to. Interestingly, all the nuclei from neurons had this black artefact at their core (Supplementary Fig. 1), whereas none from other cells, including the larger nuclei of astrocytes and microglia that matched in size the smallest nuclei of neurons, showed the same hallmark. This feature is peculiar to our sample preparation and our imaging conditions, and more recent SBEM setups have introduced systems to substantially reduce the charging using focal injection of nitrogen (Deerinck et al., 2018).

We used this dataset to train shape analysis AI algorithms to infer cell type solely from the size and morphology of the nuclei (Agus et al., 2019b, 2018c). By comparing different methods of representation of the nuclei (hyperquadrics versus spherical harmonics), we could train a support vector machine classifier (SVMC) to distinguish between cell types, and found that neurons can be distinguished with almost 100% accuracy, as their parameters cluster well together when plotted after dimensionality reduction, compared to other cell types (Agus et al., 2019b). The SVMC can also classify astrocytes relatively well, although in some cases they can be mistaken for neurons; in contrast, other cell types have too high a variability, and automatic classifiers fail to distinguish them. By acquiring more datasets to enhance the accuracy of training, the SVMC might become a powerful tool to pre-assess the identity of a cell by its nucleus, which is easy to identify and segment.
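A minimal sketch of such a nucleus-based classifier is shown below, assuming that each nucleus has already been reduced to a fixed-length vector of shape descriptors (for example, low-order spherical-harmonic coefficients together with volume and sphericity). The features and labels here are random placeholders; the actual pipeline in the cited work differs in its representation and training data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder shape descriptors (one row per nucleus) and cell-type labels:
# 0 = neuron, 1 = astrocyte, 2 = microglia, 3 = pericyte.
X = rng.normal(size=(120, 10))
y = rng.integers(0, 4, size=120)

# Standardize the descriptors, fit an RBF support vector classifier, and
# estimate accuracy with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```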
Individual morphologies

3D models of neurons are increasingly common, but very little effort has been made to segment the morphologies of other brain cells from EM, although recently developed algorithms, such as flood-filling networks (FFNs; Januszewski et al., 2018), are not specific to one cell type and are generally used for dense segmentation, suggesting that they could be applied to glial cells (Harris et al., 2015). One possible reason is that neurons have a clear morphological compartmentalization (axons, dendrites, spines, boutons, PSDs), and the changes in these compartments can be directly correlated to functional changes; for instance, dendritic spines can become smaller or bigger, and new ones can form based on synaptic inputs. Such compartmentalization is less known for glial cells. In the absence of morphologically identifiable functional domains, the few reconstructions of glial morphology present in the literature have been produced to elucidate processes of neuronal physiology, rather than for studying astrocytic function per se.

In this study, we aimed to reconstruct detailed individual morphologies of the cells included in an imaged cortical volume. We first compared the surface area to volume ratio (SVR) of different reconstructed cells (neurons, astrocytes, microglia and pericytes) as measured from their three-dimensional morphology (Fig. 4B). In contrast to absolute volume or surface area values, which are not very informative (Fig. 4C), the SVR is an indication of the fractality of the cell, and indicates how much a cell sacrifices its intracellular volume to "dedicate" surface to interacting with its neighbors (Supplementary Table 4). It is not surprising, then, that small pericytes have high SVR values (around 4), similar to astrocytes, while neurons and microglia (Fig. 4B) have lower SVRs of 1.6 and 2.8, respectively. Pericytes are in general flat cells with a fairly homogeneous morphology that are like sheets wrapped around the blood vessels (Fig. 8B) and interface with the astrocytic vascular endfeet (Fig. 6). Astrocytes have a highly variable morphology, and although their average SVR is similar to that of pericytes, the SVR of morphologically distinct astrocytic domains varies widely. We measured the contact surface area to volume ratio of astrocytic perivascular processes (Fig. 6) to be close to 1, while the smallest processes in the neuropil have been reported to have values up to 14 (Hama et al., 2004). Astrocytes clearly have a much higher surface area compared to neurons and therefore have a much larger membrane interface with the parenchyma around them. While all reconstructed astrocytes have a very simple morphological backbone with four primary processes (Fig. 5B), similar to neurons (Fig. 7), their full morphology (Fig. 5A) can include up to thousands of branches. Interestingly, astrocytes 2 and 3 have three times more processes than astrocytes 1 and 4 (Fig. 5). Their proximity to each other makes it unlikely that such a difference arises from neuronal network diversity, although we cannot rule this out. A more probable reason for these differences is that at this stage of development (P14, an age crucial for synaptogenesis), some astrocytes are not yet fully developed. This difference is also supported by astrocytes 2 and 3 having a higher absolute volume and surface area (Fig. 4C). In contrast, the minimum and maximum width of the cells seem unrelated to their branching.
The high branching order of an astrocyte is due to the large number of lamelliform processes stemming from its primary processes. In neurons, dendrites generate hundreds of spines (Fig. 7), similar to the hundreds or thousands of astrocytic lamelliform processes stemming from the main processes. Dendritic spines are input stations for dendrites and have a simpler morphology than astrocytic processes, because the surface area dedicated to the interaction with the volume in their proximity is mostly polarized towards the PSD. But even when we compare the branching order of a neuron, in terms of number of spines, to astrocytes, there is a difference of a factor of 10 (hundreds for neurons versus up to thousands for astrocytes; Figs. 5 and 7). This suggests that the higher SVR in astrocytes is an indication of their capacity to interact with multiple structures at once. The only known specialized astrocytic structure, the perivascular process (Fig. 6), indeed has a small SVR in spite of its relatively large interactive surface.

Mitochondria

We identified and reconstructed individual mitochondria within each reconstructed cell (Fig. 10). It is a matter of general knowledge that mitochondria provide cells with energy by transforming metabolites into ATP via the Krebs cycle and oxidative phosphorylation. However, under anoxia, cells can generate smaller amounts of ATP through glycolysis and convert the resulting pyruvate to lactate instead. This is what happens to muscle cells during intense activity, when the oxygen supply is not adequate. Also, tumor cells mainly rely on glycolysis even when oxygen is available, using a mechanism called aerobic glycolysis, or the Warburg effect (Magistretti, 2014). In the brain, neurons die shortly after oxygen deprivation, whereas astrocytes are able to survive solely on glycolysis, even for many years (Supplie et al., 2017). Not surprisingly, astrocytes are the site of aerobic glycolysis, very similarly to cancer cells (Magistretti and Allaman, 2018). The lactate that they produce through this pathway supports neurons in both their energetic and signaling functions, as detailed in the astrocyte-neuron lactate shuttle (ANLS) model (Magistretti and Allaman, 2015, 2018).

The shape of mitochondria has recently been associated with cognitive impairments. During aging, presynaptic mitochondria acquire a "donut" shape that can be rescued with estrogen and has been associated with decreased learning abilities (Hara et al., 2016). Dendritic mitochondria have been shown to acquire an elongated shape, which seems to indicate impaired fission, as seen in a genetic Alzheimer's disease model (Zhang et al., 2016). Therefore, a simple assessment in terms of global cell occupancy and/or shape might indicate differences in metabolic function between different cell types, although the present study is concerned with cells in physiological conditions. We quantified the percentage of mitochondrial occupancy against total cell volume, but could not observe any significant difference between cell types (Fig. 10E), although there seems to be a trend, similar to the surface area to volume ratio (SVR) measurements, where astrocytes are more similar to pericytes, and neurons to microglia. The distribution of mitochondria within individual neurons and microglia (examples in Fig. 10B-C) shows that, as a rule of thumb, they concentrate in the main processes.
Pericytes (Fig. 10C), which are flat cells lacking any visually distinct subdomain, show no evident bias in mitochondrial distribution. Interestingly, in astrocytes (Fig. 10A), mitochondria follow the main processes in a "GFAP-like" fashion, while a few scattered mitochondria are present within smaller processes with a width of at least 200 nm, roughly the size of a single tubule. This is in contrast with recent literature showing a widespread activity of mitochondria across most of the cell (Agarwal et al., 2017; Bindocci et al., 2017). This difference could be due to the early stage of development (P14), where the energy generation of mitochondria is still directed to internal rather than external use. At this stage, P14 neuronal networks are still forming and stabilizing; therefore the absence of mitochondria in astrocytic lamellar processes might correlate with a more prominent glycolysis within the portion of neuropil controlled by individual astrocytes. This would necessarily result in an enhanced astrocytic L-lactate production that would be locally shuttled to neuronal domains to sustain the structural maturation of neuronal processes, which is energetically expensive. Moreover, since astrocytic lamelliform processes have a cross section of tens of nanometers, where mitochondria cannot fit, ATP synthesis there should necessarily rely on glycolysis, also taking into account that lamelliform processes are rich in glycogen (Calì et al., 2016).

The size of individual mitochondrial tubules might give an indication of their activity. The dynamics of fission and fusion are important indicators of healthy mitochondria (Archer, 2013; Chen et al., 2005; Suen et al., 2008), and impaired fission might be related to pathological conditions such as Alzheimer's disease (Zhang et al., 2016). The average length of the mitochondria, for all cells and cell types, was greater in the processes than in the soma, possibly indicating that the metabolic needs of the cells are highly compartmentalized. A higher average length in astrocytes than in neurons might suggest a higher glycolytic activity in glial cells compared to neurons (Supplie et al., 2017), although the measure is reversed in the processes, where neurons show a longer average mitochondrial length. This could be due to very long tubules present in the long proximal dendrites of neurons, in particular neuron 2, which has one 35 μm long mitochondrion. This measurement differs from a recent report in which observations on individual mitochondria reconstructed in neurons imaged using expansion microscopy (Gao et al., 2019) showed a distribution of lengths ranging from 0.2 to 8 μm, which is similar to the range we found between the 2nd and 98th percentiles (0.2-10 μm), and to the range we observe in the soma (0.1-9.8 μm). The reasons for these differing results could equally well stem from different species or developmental stages of the samples, or from some unknown physiological factor.

To further characterize the possible interaction of mitochondria with the environment within different cells, we performed a semiquantitative analysis based on the GLAM paradigm (Agus et al., 2018a; Calì et al., 2017), in which we considered mitochondria as a source of light and counted the absorbing peaks on the membrane of each cell, all within a Virtual Reality (VR) head-mounted display environment. Peaks were then plotted on the surfaces as spheres (Fig. 10F, bottom panels) and we counted manually how many were distributed in the different compartments.
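Such a per-compartment tally is straightforward to automate once every detected peak has been labeled with its host cell and compartment. The sketch below is a hypothetical illustration with made-up labels; in the study, the counting was done manually in the VR environment.

```python
import pandas as pd

# Hypothetical list of GLAM-style absorption peaks: one row per peak, labeled
# with the cell it belongs to and the compartment it falls in.
peaks = pd.DataFrame({
    "cell":        ["astrocyte 2", "astrocyte 2", "astrocyte 2",
                    "neuron 1", "neuron 1", "pericyte 3"],
    "compartment": ["perivascular", "process", "soma",
                    "dendrite", "soma", "parenchymal"],
})

# Count peaks per cell and compartment, mirroring the manual tally described above.
counts = peaks.groupby(["cell", "compartment"]).size().unstack(fill_value=0)
print(counts)
```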
For neurons and microglia (grey and yellow bars, respectively), the highest concentrations of mitochondria were present on individual processes, with neurons showing the vast majority of them in dendrites. Astrocytes (green bars) showed a variable distribution, with numbers 1 and 4 having peaks less polarized towards the perivascular processes, compared to numbers 2 and 3, which showed the opposite trend. Interestingly, these latter two cells are more branched (Fig. 5). Keeping the rapidly developing stage of the sample in mind, the correlation between the polarization of the perivascular distribution of mitochondria and the branching level might suggest a difference in the maturity of the cells; in the more branched cells, the perivascular processes are also larger in terms of surface area and volume, indicating higher maturity (Fig. 6). Finally, pericytes did not show any particular preference between the luminal and parenchymal sides, with two of the studied cells (1 and 2) showing the opposite preference compared to the other two (3 and 4). Taken individually, pericytes 1 and 2 sit on a vascular bifurcation (Fig. 8A and B), compared to numbers 3 and 4, which surround a straight segment; we could speculate that their energetic reserves need to be spent differently based on their location, but this needs more observations to be relied on.

Conclusions

In this work, we were able for the first time to describe detailed 3D morphologies of neurons and glial cells using 3DEM. Technical advances in reconstructing complex astrocytic structures are finally opening them up to precise morphological analysis. In addition, we were able to map complete intracellular mitochondrial distributions onto that morphology, which allowed us to speculate on the functional relevance of this distribution. Within this framework, which is common to connectomics, one can envision a widespread study of all other brain cell types, whose morphological features are still poorly understood but need to be investigated in order to make sense of the many physiological roles supposedly mediated by cells like astrocytes, which are still not completely understood.
Influence of Environment and Lifestyle on Incidence and Progress of Amyotrophic Lateral Sclerosis in A German ALS Population

Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease mainly affecting upper and lower motor neurons in the brain and spinal cord. The pathogenesis of ALS is still unclear, and a multifactorial etiology is presumed. The remarkable clinical heterogeneity between different phenotypes of ALS patients suggests that environmental and lifestyle factors could play a role in the onset and progression of ALS. We analyzed a cohort of 117 ALS patients and 93 controls. ALS patients and controls were compared regarding physical activity, dietary habits, smoking, residential environment, potentially toxic environmental factors and profession before symptom onset and throughout the disease course. Data were collected by a personal interview. For statistical analysis, descriptive statistics, statistical tests and analysis of variance were used. ALS patients and controls did not differ regarding smoking, diet and the extent of physical training. No higher frequency of toxic influences could be detected in the ALS group. ALS patients lived in a rural environment considerably more often than the control persons, but this was not associated with a higher percentage of occupation in agriculture. There was also a higher percentage of university graduates in the ALS group. Patients with bulbar onset were considerably more often born in an urban environment as compared to spinal onset. Apart from education and environment, ALS phenotypes did not differ in any investigated environmental or lifestyle factor. The rate of disease progression was not influenced by any of the investigated environmental and lifestyle factors. The present study could not identify any dietary habit, smoking, physical activity, occupational factor or toxic influence as a risk factor or protective factor for the onset or progression of ALS. Living in a rural environment and higher education might be associated with a higher incidence of ALS.

ALS phenotypes differ, for example, in the region of onset (bulbar, cervical, lumbar), the involvement of upper and lower motor neurons, the progression rate, or the variety and severity of additional symptoms such as dementia (5,6). Given these remarkable clinical heterogeneities between ALS patients, the question arises whether certain environmental and lifestyle factors could have an impact on the disease phenotype or whether there are specific risk factors for subgroups of patients. For example, several studies showed that specific comorbidities occur in ALS patients at different frequencies compared to the normal population and can possibly also influence the disease course (7-9). According to the current literature, cardiovascular risk factors may have a protective impact, whereas there is an ongoing discussion whether high levels of physical activity could be a negative factor (1, 10-17). However, a definite environmental risk factor has not been identified so far. In the present study, we aimed to investigate whether distinct environmental factors (e.g., birth and living in rural or urban areas, or toxic influences) and lifestyle factors (e.g., smoking, diet, level of education, physical activity during leisure or at work) of ALS patients can influence disease onset and progression, and whether specific risk factors for the development of distinct phenotypes can be identified.

MATERIALS AND METHODS

We analyzed a cohort of 117 ALS patients who have been treated as either inpatients or outpatients in our university hospital.
All patients with ALS treated at Hannover Medical School in the period between January 2016 and February 2017 were asked to participate and were included after written informed consent. In addition, 93 controls were included in the study (patients' spouses, hospital staff, other acquaintances). Data were collected by a personal interview developed by the ONWeBDUALS consortium, a group of European ALS researchers, based on input from an international survey among ALS experts worldwide (18). The ALS patient group was characterized regarding gender, onset, age, disease duration and disease severity (Table 1). We compared ALS patients and controls regarding physical activity, dietary habits, smoking, residential environment, potentially toxic environmental factors and profession before and throughout the course of the disease.

Physical activity was graded as "intense physical exercise > 1 year", within this category either as "150 min/week moderate aerobic activity" or "75 min/week vigorous aerobic activity", versus "mild physical exercise". These gradations should allow us to investigate the possible impact of physical activity on the onset or progression of the disease more precisely, e.g. to identify distinct sports or exercise habits as risk or protective factors. Occupational physical activity was classified as low (mostly sedentary), medium and high. Special dietary habits such as vegetarian, vegan, gluten-free and high-protein diets were recorded. Smoking habits were classified as current smokers, non-smokers and ex-smokers. Regarding the residential environment, the birth place and urban/rural living during the last five years before disease onset and more than five years before disease onset were assessed. A distinction was made between villages (<1000 inhabitants), country towns (1000-5000 inhabitants), small towns (5000-20,000 inhabitants), middle towns (20,000-100,000 inhabitants) and large towns (>100,000 inhabitants). Living near possibly toxic influences (sewage plants, landfills, waste incinerators, sources of EM fields such as high-voltage transmission lines or radar) was recorded. Regarding the level of education, data of ALS patients were also compared with data from the German population obtained via the federal statistical office (19). In addition, sub-groups of distinct phenotypes of ALS patients (region of onset, extent of UMN/LMN involvement) were analyzed for differences concerning the named parameters. We also studied the influence of these factors on the progression rate of the disease. The progression rate was defined as the decrease in ALSFRS-R per month. Data for progression rate estimation were available for 58 of the 117 included ALS patients.

The assumption of a normal distribution was checked by boxplot graphics. Normally distributed variables were described using mean and standard deviation and compared using the two-sided t-test for independent samples (two groups) or a univariate analysis of variance (ANOVA, more than two groups). Non-normally distributed metric variables were described using median and quartiles (25% and 75%) and compared using the Mann-Whitney U test (two groups) or the Kruskal-Wallis test (more than two groups). Categorical variables were described using absolute and relative frequencies and compared using the chi-squared test or Fisher's exact test (for cell frequencies < 5). To identify the influence of several factors on the progression rate, we used ANOVA. First, all factors were analyzed separately by simple ANOVA.
Parameters with presumably relevant influence (p < 0.2) were then analyzed together by multiple ANOVA. Using backward selection (exclusion limit: p = 0.05), relevant factors were finally identified. The percentage of university-graduated persons in the ALS group was compared to the national average in Germany using the exact binomial test for one proportion. Because of the inflation of the alpha error due to multiple testing in the same sample, the collected p-values were assessed descriptively. The term "significant" was avoided.

Comparison of ALS patients and controls

Disease-specific variables are described in Table 1, and all results of the comparison of ALS patients and controls are shown in Table 2. There was no considerable difference between ALS patients and controls regarding age (p = 0.128), but there was a substantially higher proportion of men in the ALS patient group than in the control group (p = 0.001). ALS patients and controls did not differ regarding smoking and diet in our cohort. Special dietary habits (vegetarian, vegan, etc.) and distinct toxic influences (living near sewage plants, waste incinerators, etc.) were only sporadically reported in both groups, so that statistically relevant differences could not be identified. Also, the extent of physical training and occupational physical activity was not different between ALS patients and controls. Regarding physical activity, there was a tendency towards an increased extent of moderate training in ALS patients, while controls tended to more frequently undergo training with higher intensity (p = 0.081). Patients with regular physical exercise and/or more intense occupational physical activity did not show earlier disease onset.

ALS patients lived in smaller towns and rural environments considerably more often than the control persons (p = 0.021 and p = 0.013) (Fig. 1A and 1B, Table 2). The place of living was recorded for the last five years before disease onset and also for the time period longer than five years before disease onset. There were no relevant differences between these two time periods (not shown). Due to the inclusion of personal acquaintances in the control group, the percentage of university-graduated persons among the controls might be erroneously high. For this parameter we therefore compared the ALS group with age-matched information from the federal statistical office (19). This analysis showed that there was a higher percentage of university graduates in the ALS group than in the general population for the age groups 30-39, >65 and, by tendency, 40-49 years (Table 3). Interestingly, university-graduated patients had a tendency towards an earlier onset of disease (54.9 years (sd 13.32) vs. 59.4 years (sd 10.31), p = 0.069). Despite the potentially biased composition of our control group regarding occupation, our data indicate that ALS patients did not more frequently have occupations with moderate or higher physical activity, nor was there an accumulation of special occupations in the ALS group, e.g. agriculture, contact with animals or electricity (Fig. 2, Table 4).

Patients with bulbar vs. spinal onset did not differ regarding smoking habits, physical activity in leisure time or occupation, education or living environment (Table 4). However, ALS patients with bulbar onset were considerably more often born in an urban environment compared to spinal onset patients (p = 0.017) (Fig. 3, Table 4). This result was independent of age, which did not differ relevantly between urban- and rural-born patients.
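Before turning to the remaining phenotype comparisons, the factor-screening strategy described in the statistical methods (simple ANOVA screening at p < 0.2, followed by multiple ANOVA with backward elimination at p = 0.05), which was applied to the progression rate, can be illustrated with the following minimal Python sketch. The patient table is randomly generated and the factor names are examples only; it is not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)

# Placeholder patient table: progression rate (ALSFRS-R points lost per month)
# plus a few hypothetical candidate factors.
n = 58
df = pd.DataFrame({
    "progression": rng.normal(0.8, 0.4, n),
    "onset":       rng.choice(["bulbar", "spinal"], n),
    "smoking":     rng.choice(["never", "ex", "current"], n),
    "education":   rng.choice(["university", "other"], n),
})

factors = ["onset", "smoking", "education"]

# Step 1: screen each factor separately with a simple ANOVA; keep p < 0.2.
kept = []
for f in factors:
    p = anova_lm(smf.ols(f"progression ~ C({f})", df).fit())["PR(>F)"].iloc[0]
    if p < 0.2:
        kept.append(f)

# Step 2: multiple ANOVA with backward elimination -- drop the weakest factor
# until every remaining factor has p < 0.05.
while kept:
    formula = "progression ~ " + " + ".join(f"C({f})" for f in kept)
    pvals = anova_lm(smf.ols(formula, df).fit())["PR(>F)"].drop("Residual")
    worst_term = pvals.idxmax()          # e.g. "C(smoking)"
    if pvals[worst_term] < 0.05:
        break
    kept.remove(worst_term[2:-1])        # strip the "C(...)" wrapper

print("retained factors:", kept)
```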
Besides the more frequent spinal onset in LMN-dominant patients, patients with LMN- and UMN-dominant phenotypes did not differ in any investigated environmental factor (Table 5). The ALS patients of our group had been tested for mutations in the SOD1 and C9orf72 genes. Two patients carried SOD1 mutations and seven patients C9orf72 mutations. The comparison of the seven C9orf72 patients with the other ALS patients showed a relevant result regarding place of birth, as six of the C9orf72 patients were born in a rural area (p = 0.047).

Influence on disease progression

Neither physical activity at leisure nor occupational physical activity had an impact on the progression rate of the disease. Neither did smoking, diet, living and birth environment, level of education or the presence of SOD1/C9orf72 mutations influence the progression rate. Only bulbar onset (Fig. 4) and right-handedness were associated with faster disease progression (p = 0.001 and p = 0.027). However, as we could only include three left-handed patients, the latter result was not considered any further.

DISCUSSION

To investigate possible risk or protective factors in lifestyle and environment, we analyzed a cohort of 117 ALS patients and 93 controls recruited at Hannover Medical School for the ONWeBDUALS register (18). Physical activity has been intensively discussed as a potential risk factor for developing ALS (1,13). An association between ALS and professional soccer or American football playing has been described (11,20,21). However, patients of our group did not report increased physical activity at leisure or occupation compared with control persons. These results are in keeping with recent and rather consistent (level A) evidence from the literature, which concludes that physical activity is rather not a risk factor for ALS (10,14,15,22,23). The relations between professional athleticism and ALS could also be due to other unknown environmental or lifestyle factors like frequent traumas or drug intake (10,16). Recently it was even suggested that physical activity may possibly be a protective factor (12). Within our ALS patient group, the extent of reported physical activity had no influence on the disease progression. We could therefore not support previous findings that physical exercise might have a neuroprotective effect (22).

Our data showed no influence of smoking on the incidence of ALS, also not in bulbar cases as previously described (24-27). Neither did smoking accelerate disease progression in our study population, as suggested by another previous study (28). Our results are in line with several other studies showing no relation between smoking and ALS (29,30). We further could not find an increased exposure to toxic influences in the ALS population. ALS patients lived considerably more often in rural environments than controls. Living in a rural environment has previously been described as a risk factor for ALS in several studies (31-33), but the literature is controversial (34-36). A potential reason might be increased exposure to agricultural chemicals (e.g. herbicides, insecticides, fungicides). Some studies found an increased number of ALS patients with an occupation in agriculture and therefore concluded that not rural residence itself but agricultural activities could influence the risk of ALS (29, 37-39). However, this finding is also controversial (35, 40-42), and we could not find a cluster of patients with agricultural occupations in our ALS group, not even in the bulbar subgroup.
Some studies identified an association of other occupations with ALS, e.g. in the building and construction sector, veterinarians, hairdressers, electricians, those exposed to magnetic fields and some others (24, 42-45), but reliable proof that would confirm a possible connection between any occupation and ALS is still lacking, and there were no clusters of specific occupations in our cohort. In this context it is interesting that six of seven C9orf72 patients were born in a rural area. One can speculate whether this mutation is more frequent in rural areas in Germany.

There were more university graduates in the ALS group than one might expect compared to the general German population. Moreover, patients with a university degree interestingly tended to develop ALS at a younger age. Together this could hint at a higher risk of ALS for people with higher education. Previous studies have described associations between both high (31) and low (29,32) levels of education and ALS; others did not find any association between education and ALS (46). The fact that individuals with ALS who did not seek medical advice or who died before diagnosis cannot be included in any study needs to be addressed in this context. Patients with higher education are more likely to notice a symptom earlier and to visit a tertiary ALS center. In Parkinson's disease, door-to-door studies showed that almost 40% of patients remain undiagnosed (38,47). Therefore, larger and thoroughly conducted studies are necessary before drawing any conclusions.

Disease progression was not influenced by diet, smoking, physical activity at leisure or work, environment of birth and living place, toxic influences or educational level. Regarding the different phenotypes of ALS, bulbar onset was considerably more frequent in patients born in urban than in rural environments. This has not been described before, and the reason remains unclear for now.

The present study has several limitations. With its retrospective, interview-based design there might be a potential recall bias, even though patients were carefully instructed to focus on the presymptomatic stage when answering the questions regarding occupation, place of living and physical activity. The recruitment of an appropriate control group for epidemiological questions presents some difficulties. As ALS is more frequent in men, including only the spouses of patients leads to a considerably higher proportion of women in the control group compared with the ALS patient group, which was the case in our study population. Including hospital staff and personal acquaintances can also skew the control group towards a greater proportion of more highly educated people, as might have happened in our control group. This bias can also influence other parameters besides education level, such as living place or habits of physical activity and diet. On the other hand, if one intends to investigate epidemiological questions like education level or living environment, it is not possible to match the ALS and control groups for these parameters, as one cannot analyze their frequencies afterwards. Therefore, it is hardly possible to imitate the random compilation of an ALS patient group, if it is random at all. Perhaps the recruitment of multiple groups of controls could address this problem in future case-control studies (47).
Another limitation is that the number of participants in the present study is only moderate and did not allow for investigation of all known phenotypes, as the subgroups would have been too small for statistical analysis. Data for the progression rate were only available for 50% of the ALS group. It has to be noted that the results should be interpreted with caution and need to be verified independently. Despite these limitations, our present investigation is one of the few studies that addressed numerous environmental and lifestyle factors in relation to different phenotypes of ALS, as previously suggested (48), and also analyzed a possible impact on the disease course. Therefore, it provides valuable information and should trigger similar but larger longitudinal multicenter studies.

Conclusion

The variability in phenotypes and progression rate strongly suggests an impact of occupational, environmental or lifestyle hazards in ALS. The present study could not identify any risk or protective role for dietary habits, smoking or physical activity. Living in rural environments might be associated with a higher incidence of ALS. Patients with higher education levels appear to have a higher risk of developing ALS, or at least were more frequently diagnosed with ALS and included in studies. Regarding different phenotypes, bulbar onset was considerably more frequent in patients born in urban than in rural environments. Disease progression was not influenced by any of the investigated environmental and lifestyle factors. Spinal onset patients had a slower progression. Larger multicenter studies are necessary to reassess the results of our study.
Global Tourism Measurements and Response to Covid-19 Crisis

Pandemics represent a great challenge for tourism development and, as shown by the crisis caused by Covid-19, their effects are instant and almost totally destructive. This paper provides a literature review on the influence of medical crises on tourism, followed by an overview of the economic downturn in tourism in Q1 due to Covid-19. The last section of the paper provides an overview of the structure, form and concrete recommendations defined by leading global tourism policy makers, as well as scenarios for tourism re-start.

INTRODUCTION: INFLUENCE OF MEDICAL CRISIS ON TOURISM

Various studies have shown that pandemics represent a challenge not just for society, but especially for tourism (Hall, 2020; Gössling, 2002), and this was, unfortunately, proven by research on the negative economic impacts (Fan et al., 2018) caused by SARS (Siu and Wong, 2004; McKercher and Chon, 2004), swine flu (Russy and Smith, 2013) and Spanish flu (McKibbin and Sidorenko, 2006). The world is now experiencing a temporary de-globalization, due to heavy travel restrictions and the suspension of international travel that made geographical barriers among places re-emerge (Niewiadomski, 2020). Covid-19 heavily changed all individuals and industries, but the crisis demonstrated that the tourism, hospitality and travel industries are less of a necessity and hence are highly sensitive to significant shocks (Chang et al., 2020). Although in recent years the global tourism sector demonstrated great performance in terms of the number of tourists, overnights and income (UNWTO, 2020c), the effects of the coronavirus devastated this sector. In a short period of time, the global market faced the shift from overtourism (Séraphine et al., 2018; Séraphine et al., 2019; Jover and Diaz-Parra, 2020) to non-tourism, a negative impact down the tourism and hospitality value chain (Gössling et al., 2020, pp. 2) and the collapse of the entire sector (Higgins-Desbiolles, 2020, pp. 1).

The Covid-19 crisis challenged the neoliberal approach, best driven through reducing the government's role and regulations and seen through the ideas and attitudes that globalization is an unstoppable force, while consumerism is the key to expressing our identity. It influenced new forms of government intervention and the redevelopment of social caring and networks as a tool used to defend against the crisis (Higgins-Desbiolles, 2020). National institutions and authorities have proven much more effective in implementing necessary measures in life and the economy (Niewiadomski, 2020). However, the question asked is whether the global and national authorities recognized this shocking effect on tourism, and whether the defined measures would support the survival of the tourism and hospitality sector. In this section the paper provides an overview of selected global tourism authorities and institutions that were active during the global pandemic, in the form of providing guidelines, tourism policy measures and tactical support for the recovery of tourism. However, before the overview is discussed, we need to note that the general background of the paper is that these institutions, as well as national governments, national tourism boards, global hotel chains and aviation, were not ready for a crisis of this size. In addition, no crisis management was properly put in place.
Those are the reasons why some of the measures are focused on creating crisis management plans, incentivization and recovery packages that could have been defined beforehand within tourism strategic development documents and initiatives. However, everyone needs to keep in mind that "the measure put in place today will shape the tourism of tomorrow" (OECD, 2020a, pp. 2).

International tourism falldown due to Covid-19

According to UNWTO (2020a), international tourism in March 2020 showed a strong decrease, with Asia and the Pacific reaching up to -35% when compared to March 2019, and Europe reaching up to -19% in the same period, with a proven loss of 67 million international arrivals and $80 billion in lost exports. However, in April the situation worsened to a greater extent, especially in Europe and North America, with international traveling being almost entirely stopped and reaching up to -95% in comparison to the same time last year. According to STR (2020), hotels in China showed a slight recovery at the end of March and in April, with occupancy coming back to +15% in comparison to January and February, when it was almost 0%. According to UNWTO (2020a), as of April 20th, 100% of all worldwide destinations had introduced travel restrictions in response to the pandemic, in the form of the following: totally or partially closing the borders, totally or partially suspending international flights, and banning entry for passengers from specific countries of origin. In addition, based on the research, 86 countries introduced 360 various restrictive policies and initiatives impacting the leisure and business travel market (WTTC, 2020b). Revised studies by the OECD (2020a) estimated that the total impact will be at the level of a 60% decline in international tourism in 2020, with a potential 80% decline if recovery is delayed.

International tourism sector recovery recommendations

In order to support the process of recovery of global tourism, recommendations were introduced in the following areas (UNWTO, 2020b): 1) managing the crisis and mitigating the impact, 2) providing stimulus and accelerating recovery, and 3) preparing for the future. Besides various soft development initiatives such as promotion (WTTC, 2020d), most of the measures in the field of managing the crisis and providing stimulus for recovery and future development can be further classified into the following (WTTC, 2020c): a) fiscal support and tax incentives/reduction/removal, b) protecting human capital and the livelihood of workers, c) injecting liquidity and cash, and d) various forms of recovery funds and investments. In addition to lifting travel restrictions, both governments and the tourism industry should also focus on liquidity support and the preparation of recovery plans, on restoring traveler confidence and on stimulating demand with new, safe and clear labels for the sector (OECD, 2020a). The first and foremost step is introducing tools and actions that will help manage the crisis and further mitigate the negative impacts. Having in mind that tourism is a major job creator, with 330 million jobs generated around the world, which is 10% of global employment (WTTC, 2019a), most recommendations put employers and employees in the first place in order to save and recover tourism and hospitality companies.

Tourism re-start scenarios

Mass tourism will most probably remain a desired target for most destinations, but it needs to be carefully planned (Nepal, 2020).
The Covid-19 crisis should be used for re-thinking tourism development. Strong tourism growth is a measure reached through a rise in the number of tourists, which on the other hand raises the question of destination carrying capacities and the willingness to fully understand the needs and wants of those tourists. In addition, destinations need to invest resources in educating tourists about the new experience approach and the sustainability of destination products. Destinations need to learn from their past mistakes of prioritizing quantity over quality and destruction over sensible development, and therefore keep in mind that future travelers will have a high awareness of environmental health and wellbeing (Nepal, 2020).

In order for tourism to re-start, two components should be taken care of (IATA, 2020, pp. 1): 1) governments must be prepared to allow passengers to travel among countries and regions, and 2) passengers must have sufficient confidence that they can travel safely and achieve what they wish to do during their journey. Nevertheless, prior to lifting travel restrictions, health criteria need to be fulfilled (European Council, 2020): epidemiological criteria, sufficient health system capacity and appropriate monitoring capacity. In addition to general tourism measures, each service provider along the value chain needs to follow certain global and national standards. In that sense, the hotel industry needs to implement various measures apart from the obvious health protection measures. It is advised that hotels should develop a management plan that includes an action plan tailored to the situation and implemented in accordance with the national authorities, and a mobilization of resources that will allow seamless implementation of the action plan, proper supervision, a logbook of actions, clear communication among employees and specific training (WHO, 2020).

According to the implemented measures, which have been based on local development levels and the characteristics of tourism, and excluding potential additional Covid-19 impacts and waves, the following scenarios have been identified (OECD, 2020a, pp. 3):
◆ Scenario 1: International tourist arrivals start to recover in July and strengthen progressively in the second half of the year, but at a slower rate than previously foreseen (-60%).
◆ Scenario 2: International tourist arrivals start to recover in September and then strengthen progressively in the final quarter of the year, but at a slower rate than previously foreseen (-75%).
◆ Scenario 3: International tourist arrivals start to recover in December, based on limited recovery in international tourism before the end of the year (-80%).

In response to the Covid-19 travel restrictions and lockdown, tourism businesses (such as hotels and restaurants) are facing a new reality in which their standard business model is put to the test. In "traditional" mass tourism destinations, tourism and hospitality service providers were transforming destinations into standardized and homogenous tourist spaces that serve tourists and visitors, and at the same time differentiating them from the local socio-cultural and economic environment (Saarinen, 2017, pp. 429). The new reality will strongly challenge them, due to the tendency of making entertainment for non-locals an autonomous urban function (Judd, 2009).
This places the destination in a position to choose between locals and tourists, this time making locals potential users of their services (Diaz-Soria, 2017). During the Covid-19 crisis, hotels looked for alternative ways to reach the desired cash flow and to maintain service, mostly through food-delivery services, putting them in a competitive set with classical and fast-food restaurants, which have already been operating in that environment. DMC agencies that were focused on corporate events and incentives for foreign clients are in a "stand-by" position, since most of the events have been cancelled or postponed until next year. This challenges their business survival and level of operations. Professional congress organizers will need to focus on domestic events, since international traveling will surely not be in place until September 2020.

CONCLUSIONS

Domestic travelers and domestic tourism movements are already taking a leading position in the recovery of global tourism, due to them having a market share of 75% on average in the total tourism economy (OECD, 2020b). However, destinations need to consider to what extent domestic travelers, through either direct or indirect economic effects, can replace international travelers and the international tourism expenditure that brings added value to the economy. Global tourism policy institutions introduced a number of recommendations and measures that tackle the crisis. By analyzing them, it can be concluded that most of the measures represent the apparent platforms and paradigms of the contemporary functioning and development of tourism. We also need to understand that, even though one would expect countries and destinations to be using them already, the majority "has forgotten" about the basic principles of tourism and increasing competitiveness. In addition, during the previous strong period of growth, when tourism reached record-breaking numbers, international and national tourism policy makers and influencers did not use the chance to position tourism and hospitality as key economic and social drivers of countries, which can be concluded from the lack of understanding of the needed measures in many countries. Tourism in the post-Covid-19 period will be challenging, since destinations will need to find new product portfolios that put safety and environmental consciousness in the first place for tourists, while service providers in the value chain will need to implement new formats of business and operations. Global tourism bodies and institutions made a great effort in producing various recommendations that will help and assist the re-starting of tourism.
Disintegration in the Age of COVID-19: Biological Contamination, Social Danger, and the Search for Solidarity

Like any disaster, COVID-19 laid waste to infrastructure and the ability for a community to do community. But, unlike a tornado or nuclear meltdown, COVID-19 laid waste to social infrastructure in unique ways that only a disease can do. On the one hand, a pandemic brings biological dangers that, in turn, make all individuals—loved ones, too—into potential threats of biological contamination. On the other hand, the efforts to contain diseases present social dangers, as isolation and distancing threaten mundane and spectacular ritualized encounters and mask-wearing heightens our awareness of the biological risk. By exploring the link between disasters and disease, this paper leverages the contamination process, beginning first with the barriers it presents to making and remaking the self in everyday life. Constraints on ritualized encounters, both in terms of delimiting face-to-face interaction and in determining that some spaces have contaminative risks, reduce collective life to imagined communities or shift it to digitally mediated spaces. The former intensifies the sense of anomie people feel as their social world appears as though it were disintegrating, while the latter presents severe neurobiological challenges to reproducing what face-to-face interaction habitually generates. Finally, these micro/meso-level processes are contextualized by considering how institutions, particularly the polity but also science, manage collective risk and how their efficacy may either contribute to the erosion of solidarity or provide a sense of support in the face of anomic terror. Using the US to illustrate these processes, we are able to show how an inefficacious State response weakens the already tenuous connective tissue that holds a diffuse and diverse population together, while also exposing and intensifying existing political, economic, and cultural fissures, thereby further eroding existing solidarity and the capacity to rebuild post-pandemic cohesion.

Mechanisms of solidarity act as protective forces against acute blows to the collective, whether endogenous or exogenous. In the 19th century, acute blows emerged out of economic and political revolutions that radically reconfigured the structure of society, raising questions about how large, diverse, and geographically dispersed societies and their smaller communities could sustain a sense of "we-ness," if at all. But, these same disruptions have been with human societies forever in the form of disasters of human and natural "design." The current pandemic offers an opportunity, then, to explore how disease as a disaster threatens these mechanisms of solidarity, thereby contributing to both the disaster scholarship and the sociology of disease. To do so, I begin by thinking more generally about the sociology of disease as a class of disaster. Like a tornado or other natural disasters (Wallace, 2003), plagues are rapid and destructive, and yet they are different in many ways: invisible in their path and in their infection. Similarly, pandemics share qualities with human-made disasters, like nuclear meltdowns, as plagues are very often the indirect result of human activities like irrigation (McNeill, 1976).
Plagues remain more primitive, recurring across time and space; more totalistic having direct and immediate effects on the biological, psychological, and sociological levels; and, the danger of disease is not posed by the environment like climate change, but rather people, both intimate and impersonal others. For these reasons, and others, there is temptation to immediately reach for the consequences for solidarity at the macro-level like Durkheim did in, say, Suicide. However, there is far more variation in time and space at the macro-level, which serves as a backdrop against the more ubiquitous micro/meso-level experience of anomie in the face of threats to or disintegration of the connective tissue that usually goes unnoticed. Here, in the realm of everyday life, the phenomenological and structural horrors unleashed by the contaminative potential of disease make all types of interactions risky and, in turn, undermine the foundation of meso-level solidarity. Once these generic consequences of disease diffuse across a population, the macrolevel space becomes important. The state, as we all saw, must protect its citizens, but no state in history can do or does this equitably and, worse, efficaciously or competently. Moreover, the state, like the micro-/meso-levels of solidarity, is shaped by the historical context in which the pandemic forms. In the case of the US, which is the focus of this paper, it was hit the hardest and earliest by the pandemic because of weaknesses that were exacerbated and, in some cases, torn asunder as the three layers were bombarded. Each layer reveals independent dynamics, but as the US case demonstrates, the state's ability to shore up extant fissures or exacerbate and create new ones has important effects on the experience of solidarity in everyday life. Disasters and Disintegration Following a disaster, members of devastated communities tend to experience, individually or collectively, what is called disaster syndrome (Wallace 2003, pp. 149-163), in which survivors experience confusion, apathy, and passivity built on the Perception that not only the person himself, his relatives, and his immediate property have been threatened or injured, but that practically the entire community is in ruins. The sight of a ruined community [in which the physical environment is] wrecked, is apparently often consciously or unconsciously interpreted as a destruction of the whole world (Wallace, 2003, p. 153, emphasis added). Physical infrastructure, then, is social in both its external representation of the thing its members call community and in the facilitation of the everyday ritualized encounters and group life we come to take for granted as we make community. Disasters shred the moral anchors to which our personal identities are made and re-made and present severe barriers, in many cases, to finding old or new anchors. Without anchorage, we are unable to routinely and spectacularly plunge ourselves, to borrow a Durkheimian (1912 [1995]) phrase, in group life and reconstitute the very essence of society in ritualized encounters (Collins, 2004). For instance, in Erikson's (1976) study of a West Virginian hollow whose five tightknit coal mining communities were destroyed by a massive flood, the hope of reconstituting what once was, was replaced by a pervasive collective trauma (Erikson, 1994). 
The cost of rebuilding was expensive and the damage extensive, but the flood also accelerated an already extant reality: the decline of intergenerational reproduction, which is a necessary ingredient for meso/micro solidarity. The initial confusion and apathy gave way to anomie and, instead of rebuilding their world, this anomic trauma became embedded in the community's collective identity and many of the individual's personal identities; a process or project not uncommon in most cases of social trauma (Alexander, 2004, p. 1). What we learn from these studies of disaster and trauma is that, one, disasters work on all levels and are very often local in their most enduring and direct effects. Additionally, the central problem of disasters and ensuing collective trauma is rooted in the very core of solidarity: how can the connective tissue tying the individual to their environment be rebuilt amidst an acute trauma that becomes a chronic, continuous problem? Reflecting on myriad local communities devastated by disasters, Erikson (1994, p. 242) concluded that our local milieu consists of "layers of trust" that surround humans, radiating out in concentric circles like ripples in a pond. The experience of trauma, at its worst, can mean not only loss of confidence in the self but the loss of confidence in the scaffolding of family and community, in the structures of human government, in the logics by which humankind lives, and in the ways of nature itself (emphasis mine). To put this in more general and contemporary sociological terms, these layers of trust are built up from repeated exchanges and interactions that generate affectual attachments to people, groups, and even abstract systems (Lawler et al., 2009) such that these social objects become an "extension of [our] own personality, an extension of [own] own flesh" (Erikson, 1976, p. 191). Though this is not a controversial metaphor, maintaining that this connective tissue is real, as Erikson (inspired by Durkheim) did, is controversial: in much the same fashion as the cells of a body, [members] are dependent upon one another for definition, they do not have any real function or identity apart from the contribution they make to the whole organization and they suffer a form of death when separate from the larger tissue. . .It is the community that cushions pain, the community that provides a context for intimacy, the community, that represents morality and serves as the repository for old traditions (Erikson, 1978 p. 194, emphasis mine). Tempting as it may be to write off the notion of a superorganism, the biological metaphor is not really as metaphoric as sociologists presume (Strassmann & Queller, 2010), especially on the local level where direct, reciprocal, recurring interactions take place (Fine, 2012). Humans are motivated to find and feel belonging, in part because of our evolved neuroarchitecture and heightened capacities for things like role-taking (Tomasello, 2020). Disasters, then, test the strength and durability of this connective tissue, but diseases are an especially problematic test. For one thing, at least in the Western world, disease and plague are relatively rare, making people less prepared for the devastation. The notion of disease invites reflection on something old, primitive, Biblical, existential; meanings far removed from nuclear meltdowns. Diseases, unlike other natural disasters, are also unrelenting, at least until they run their course or meet resistance from human practice and invention. 
Complicating matters is the fact that they are invisible, biotic, organic threats, whose destructive path is not through razing physical infrastructure, but through using said infrastructure to create disease chains; both through mundane (grocery shopping), leisurely (tourism), and economic activities that constitute everyday life (McNeill, 1976). Finally, as disease courses through the social body, it infects the body politic, exposing weaknesses, exacerbating divisions, and testing the polity as the last protection against societal collapse. Thus, COVID-19 presents an opportunity to think about how disease, as a special case of disaster, threatens mechanisms of solidarity on every level of social reality. It is tempting to begin at the macro-level, as broad brush strokes are so easy to discern. However, I prefer to start at the micro, as most of the anomic pain has been experienced in countless ways through the sudden loss of a wide array of moral anchors. The barriers to ritualized encounters due to disease, both unique and shared with other disasters, have shaken the foundations of the meso-level communal and associational groups that are the waters in which we routinely and spectacularly plunge and remake our self. Therefore, beginning with the everyday experience of disease makes more sense; however, it makes sense for a second reason: the effects of disease works its way "up" and not "down." That is, threats to these rituals also threaten the always tenuous, but no less real, anchorage to the abstract, generalized, distant body politic. Diseases present a particular challenge, as they neither strike the infrastructure nor the polity, at least not immediately. Like a zombie movie, they strike the very grounds of social and moral order: through ritualized interactions that simultaneously make the individual extraordinarily aware of the group's external palpable, corporeal nature through the externalization and collectivization of emotion and attention and reinforce the social and moral order they have internalized through spectacular and mundane encounters in the past. Presumably, even highly generalized and abstracted rituals, like the celebration of an Independence or Memorial Day, serve to make the largest unit of political organization, the Nation-State, real and renew the attachment of the person's identity to the increasingly distant collective identity. To carry the zombie analogy further, humans are the hosts of the threat, but unlike zombies, they are not often observably sick; especially asymptomatic carriers. All people are a potential risk. Many remedies for stopping a pandemic also naturally delimit interaction, as social distancing and mask-wearing serve as artificial barriers. The latter, in particular, runs counter to our evolved ape capacities, which direct us to pay especially close attention to face to determine threat, belongingness, identity, and so forth. In a sense, then, the polity and not the disease is responsible for the (metaphoric) destruction of physical infrastructure. Like a zombie movie, the most harrowing scenes are seeing recognizable landmarks where human activity should be abuzz, being empty and desolate, and the state's efficacy in holding a diverse, geographically far-flung society together is challenged. 
In short, as disease strikes at the heart of social solidarity, severing our ability to anchor ourselves to social objects from which we derive comfort, reward, and emotional/social/mental resources, the meso-level subsequently falls in disrepair as its lifeblood -recurring daily interactions -are suspended. Consequently, pressure on the idea of what a society is, becomes ever-more tenuous as the very things that ward off anomie disappear (Abrutyn, 2019;Abrutyn & Mueller, 2016). These problems are exacerbated if society is already polarized and in conflict, no matter how dormant, along several different lines of stratification. Indeed, these lines have always become salient and legible amidst plagues (McNeill, 1976). Thus, while solidarity may be produced most directly and powerfully in local rituals that recreate the corporate units that externally embody their meaning and present tangible social and moral moorings to tether ourselves, it is large-scale institutions that stabilize and pattern these rituals. When these are threatened, people who are not in direct contact may experience similar feelings of anomie, putting real pressure on the polity to shore up the proverbial dam. Diseases are, unfortunately, hard to predict, as is the speed of their contagion, magnitude of morbidity, and duration of intensivity. States may react draconically; they may mismanage the crisis, thereby exacerbating a disaster; they may inequitably distribute resources in fighting the disease; and, they may pretend as though it did not exist, emulating Nero and his famous fiddling. Just as the barriers to ritualized encounters put pressure on the macro-level, the successes and, particularly, the failures of the polity, economy, science, and law return the favor, raising the stakes ever so high on the micro-and meso mechanisms of solidarity. For most readers, time and space were and perhaps still are objectively distorted as school, work, and home life are cast into chaos. As we slowly dig out of a disease-driven disaster, how we patch up, repair, and improve the invisible connective tissue, the webbing of group-affiliation that gives meaning and purpose to life presents its own challenges. It is these most tangible concerns, then, that give disease its greatest anti-social force; and, it is these concerns that I examine first in greater detail and specificity. The Phenomenology of Contamination To make sense of a pandemic, I leverage a core conceptual tool that operates at the micro and meso levels: contamination. At the phenomenological level, plagues contaminate collective and individual actors-or, at the very least, threaten to pollute biologically and sociologically. This imagery derives in part from Durkheim's (1995, p. 328ff) concern that the sacred must be protected from the profane, lest the former be polluted or contaminated; a point expanded in important ways by Goffman (1961, p. 23ff.;also 1963, 1967) and his imagery surrounding the self and how it can be contaminated or spoiled. Essentially that which is owned, revealed, and enacted in collective life-including the self-must be set aside from that which is individual and driven by biological or utilitarian motivations. The underlying problem a pandemic poses, then, is two-fold. A disease obviously threatens biological infection, making every individual a potential threat of contamination. Social danger lurks everywhere. 
In addition, the efforts to mitigate spread, like social distancing, not only heighten awareness of the threat others pose, but also weaken or disintegrate social ties indefinitely. In Durkheimian terms, the threat of contamination produces anomic pain brought on by the sudden disruption in social life and in the sudden restriction of social reality. Consequently, collective actors struggle to remain viable, tangible things, as constituents choose to avoid or are mandated to avoid encounters; and when forced to enter into encounters, Goffman's (1963) "rules" of stigma come into play, creating interpersonal problems for solidarity. At the meso-level, like a zombie movie, the physical and collective landscape appears to instantly unravel, leaving the usual environments for encounters and micro-level processes in the lurch, blocked, or totally disintegrated. To fully appreciate the dangers and horrors of the pandemic, let's look at the most micro of micro-level processes: the sacralization of self. The Self as Sacred The construction of a self and its location within a meaningful, purposeful, and supportive space begins in micro-processes related to interactions, exchange, and communication. Current neuroscience has identified the affective systems through which we are driven to seek out relationships that provide desired resources, the anger and fear we feel when those are threatened, and the panic and grief experienced when lost (Abrutyn & Lizardo, 2020). The point being that social relationships are built up from affect, which, in turn, acts as the underlying cohesive mechanism attaching individuals to each other and groups (Collins, 2004; Durkheim, 1912 [1995]), as well as to abstract systems and the imagined or generalized others associated with these systems (Lawler et al., 2009). Our self depends on these social or moral "anchors" for it to experience, again, the sense of belonging or connectedness. Hence, threats to or losses of those moral anchors generate panic, grief, as well as efforts to repair deteriorating relationships (Goffman, 1967), seek out new ones, or, potentially, adopt self-destructive or other-harm behavior. Our self, then, is not "ours," it is forged and only meaningful within the context of the collective, and "on loan to [us] from society" (Goffman, 1967, p. 10). And, like any externalized, embodied representation of collective life, it is imbued with sacredness as collective effervescence spreads from ritual and saturates the self in "group-ness." Efforts to protect the self, the other, and the underlying social bond demand vigilance, not merely in the practical sense of "caring" for a relationship, but also in a Durkheimian sense of purity and pollution (Douglas, 1966 [2002]). Diseases challenge the barriers we construct against contamination. The Spoiled Encounter What happens when the principal method for both making the self sacred and for immersing oneself in the very thing that verifies one's sacredness and generates transcendent meaning becomes threatening? What happens when daily life becomes marked by routine threats to pollute the biological and social self? Social life becomes difficult at best and, at worst, anomic. For Goffman (1961, pp. 23ff, 43-45), it was imperative and natural for humans to preserve and protect the self by cordoning off potentially polluting aspects of our environments. 
Ordinarily, humans use physical barriers-for example, relegating bodily functions to backstage, private space-in addition to discretion and tact when physical barriers are not available (Goffman, 1963). A pandemic constrains our active management of pollution, as the internal and external "territories of self [or] the boundary that the individual places between his being and the environment is invaded and the embodiments of self profaned" (Goffman 1961, p. 23). We are left with a difficult decision: self-isolate and risk being unable to enact self meaningfully or interact and risk contaminating the self and either becoming sick or, possibly, stigmatized for putting one's own interests above public health. 1 A final wrinkle: Like a good zombie movie, the source of contamination is not merely strangers or categories of "spoiled" actors (e.g., homeless), but rather our family and friends who are more likely to transmit the disease and, thereby, become both objects of concern and of potential pollution. Thus, it is not just extensive networks that crumble, but intensive ones, too, as the choice between potentially contaminating a loved one is thrown into sharp relief against being able to revivify that intimate, significant relationship. This last piece is important in understanding another source of social and biological pollution; a source closely tied to the macro-level backdrop (most pronounced in the United States). While the mask is a reminder of the dangers posed by encounters, to not wear a mask has become a meaningful symbol of expressing animus toward one of the two "tribes" of Americans (Schneider, 2020); it is another example of how larger struggles over status, power, and meaning spill into mundane public encounters. On the one hand, mask wearers are willing to forgo social health to prevent biological contamination (as both a virtue signal of their commitment to the collective good and their personal health). On the other hand, mask-resisters see the restrictions and the threat of biological contamination as one more effort by a distant set of political, economic, and cultural forces (Hochschild, 2016) to threaten their individual choice (in the case of anti-"vaxxers") and/or the social tissue of their communities (in the case of many conservative mask-resisters ). For the former, efforts to restrict biological contamination bring new dangers in routine encounters (like grocery shopping) designed to uphold the basic moral order, as the latter group has turned these spaces into performative sites of grievance, contest, and conflict. To be sure, these social dangers are compounded by the fact that those resisting Center for Disease Control (CDC) guidelines are also threatening to both catch and spread the disease to "essential workers," usually drawn from marginalized communities and who sometimes lack the basic choice to wear or not wear, be exposed or avoid exposure (Krishnan et al., 2020). For the mask-resisters, the mask becomes a verifiable symbol of those agents complicit in the disintegration of their communities and their status and privilege (objectively true or not). Thus, unlike most disasters where uncontrollable forces affect us, the most painful and hidden aspect of disease is that biological and social contamination comes from other people. In sensationalized cases, it is the anti-masker; but in many cases, it is the fellow Church goer, an asymptomatic student, or a mildly symptomatic worker unaware of being infected. 
A more subtle piece to this, one built on the generic processes of all diseases, but also on the social danger mask mandates have produced during COVID-19, is that contamination is not restricted to people, but also places. The disaster syndrome is not triggered by the loss of community as embodied in razed physical infrastructure, but in the phenomenological loss of space: the sudden fear of inaccessibility to places (and people) that are taken-for-granted extensions of our self, like the mailroom where we routinely check for correspondence, and the resulting deprivation of the sounds, sights, and smells intimately connected to the self. It is for these reasons, then, that some might extoll the virtues of digitally mediated interactions. The Digital Double Edge Undoubtedly, the grief of isolation was temporarily abated by the ubiquitous use of digital communications technology that promised to resolve, at least, the practical problems of small talk, learning, and doing business. And, while online platforms have provided connections that would be lacking otherwise, there are four key downsides that may, in fact, exacerbate the negatives of interaction during a pandemic. First, and most tangibly, the collapse of the clear compartmentalization of kinship, economy, and education generated intense burdens on families, especially working moms whose fragile gains made in the public sphere are suddenly under siege. We have all become accustomed to a world in which we do family at certain times and places as apart from economic roles and school roles. Fear of spaces and government regulations instantaneously collapsed these boundaries. A second problem stems from questions surrounding whether face-to-face encounters are superior to their digital cousins, an argument that rests on the suggestion that co-presence offers "ingredients" currently impossible to replicate online (Collins, 2004): the activation and heightening of our senses when palpably near other human bodies whose attention and affect are entrained and who bodies are synchronized. This assertion, admittedly, remains an open question. Strong evidence, though, points to severe limitations in digital interaction (Kalkhoff et al., 2020). Social interactions are both verbal and non-verbal, with both activating similar parts of the brain. Given that non-verbal is much older than verbal, it is at least equally important to conveying meaning about central aspects of a situation-for example, the actor's own emotional disposition (Freedberg & Gallese, 2007). And, conveying meaning is imperative for encounters to achieve practical and phenomenological goals (Tomasello, 2020). Not surprisingly, some research has highlighted how digitally mediated interactions constrain what is visually available. For instance, children demonstrate deficits in learning from screens (Krcmar, 2010), which is tied to differences in neural processing (for a review, see Dickerson et al., 2017). It is perhaps for these reasons, and of course, cultural ones as well, that for some people digitally mediated forms of interaction are felt as hollow and difficult to embrace (Tufekci & Brashears, 2014), which undoubtedly affects how committed an actor is to the success of a given encounter and restricts emotional entrainment. For youth, these first two problems are exacerbated by a third: their once stable (inperson) social worlds shift to a cocktail of online learning and social media connectivity. 
Gone are the rituals of high school that, for better or worse, characterize what youth and adults define as "normal" childhood. The efficacy of social media in providing the developmental and support needs of youth remains dubious (Holtzman et al., 2017), while social isolation during online or hybrid learning runs counter to what appears to make youth most healthy (Orben et al., 2020). The fourth problem comes from a surprising body of empirical research; one that points to the virtues of digitally mediated interaction. It has been shown, for instance, that these platforms can facilitate the emergence of digital affect cultures, or artificial environments that facilitate and heighten the flow of affect from user to user by generating emotional resonance and alignment, can build some form of community (Döveling et al., 2018). While this research does push back against the idea that online encounters are always unsatisfying, it does raise another issue: when digital mediation is successful in protecting against biological contamination while mitigating the structure of social health, does it prevent the spread of social contamination? It is clear that not all connectedness is a good thing (Abrutyn & Mueller, 2016), as encounters and ritualized interaction, especially with people we trust and care about, facilitates the diffusion of feelings, thoughts, and actions. Emotions are especially "contagious" or easily diffused through encounters as humans disclose their struggles, both explicitly and implicitly. In short, better connectedness amidst a pandemic means more efficacious exposure to others' suffering and experiences during the pandemic, which may contribute to the diffusion of a shared sense of anomie. Thus, even if digitally mediated encounters do provide us with ritual and assembly for plunging our self into, it also allows for rapid diffusion of a discourse of danger, risk, contamination, and real or false beliefs about the immediate or long-term future. From Micro to Macro Against this backdrop of a sacred self yearning to ward off creeping anomic pain as ritualized interactions, collective assemblies, and regions of everyday life become threats of contamination, the macro-world around us distributes choices, experiences, and contamination unevenly. Put differently, while the local world we inhabit is the true source of solidarity, these smaller units of life-encounters, groups, organizations-are nested within a large structure and culture that directly and indirectly permeates the construction of solidarity within them and between those units and others. In the United States, for instance, the horrors of the pandemic in the earliest months cannot be fully grasped without considering political polarization. For many "Blue State" denizens, the polity's (and, more specifically, then-president Trump's) inefficacious, incompetent response only served to destabilize social reality even more. Directly, by calling into question whether the state could protect its citizens, and indirectly as "Red State" individuals were both emboldened by mixed messages from the president, the messaging of the conservative media, and by their own fear of their shrinking social webs getting any smaller and, consequently, turned the public sphere into sites of direct conflict. 
Structurally, pre-existing cleavages (e.g., racial divides) were exploded by both the inequitable burden of risk spread across racial, ethnic, gendered, and class lines, extant inequities in health care access, the tendency to live in denser, multi-generational communities, as well as by the more general stress and anxiety folks felt in the face of biological and social contamination; these forces made civil society a tinder box. As with previous polities threatened by disease, its weaknesses were exposed alongside economic and civic weaknesses, which is why the US experienced the pandemic so much more intensely (at first?) than, say, Canada or Germany. Thus, it is imperative that we examine the macro-level dynamics of solidarity in light of those described above. Institutions and Social Solidarity Before examining the specific context of the US, however, it is worth taking a step back and thinking about what institutional spheres are and why polity matters so much to solidarity in the face of disaster and disease. Although institution is notably vague, I define institutions as the macro-level spheres or social orders that are the fundamental bedrock of all human societies. Universal spheres, such as kinship, polity, religion, law, and economy, organize beliefs and practices for significant portions of a given population, as do more recent spheres like science or medicine. 2 Viewed this way, institutions are the bedrock of any society's ability to integrate far-flung, heterogeneous populations. They do so, at the most basic level, by subjecting batches of people to generalized structural and cultural conditions: role/status relations like doctorpatient, mother-child, or teacher-student act as everyday vehicles for imposing generic rules and regulations (while allowing for flexibility, local variation, and personal idiosyncrasy) on any and all who inhabit those positions. Finally, by embedding encounters, groups, and organizations in a common cultural space, they integrate people through mundane and occasionally spectacular rituals designed to connect individuals with the distant institutional center. Rather than see institutions as reifications, then, I treat them as physically, temporally, socially, and symbolically real in the effects they have on our experience of daily life. Thus, as polity or science becomes increasingly autonomous (i.e., its physical, temporal, social, and symbolic spaces grow structurally/culturally discrete vis-à-vis kinship or religion), it acquires cognitive sovereignty over a set of processes, practices, and problems. We feel, see, touch institutions in both the arrangement of space in their domination and authority over certain problems, practices, beliefs, and so forth. By taking one of the generalized roles (e.g., patient, client, citizen), we take for granted that their counterparts will fill in our blind spots; they assume a host of risks for us, presumably, for our benefit. Central to my analysis is the polity, or the institutional sphere responsible for collective, binding decision-making, producing and distributing power, and mobilizing resources to realize whatever goals its elite set. Typically, polities are highly pragmatic institutional spheres, even when legitimated religiously, as they are the primary response to exigencies of all types. In centralizing risk management, polities deal with public works, defense, third-party adjudication, and so on (Johnson & Earle, 2000) . 
In the past, polities used whatever mechanisms were available, though without Germ Theory, their responses were limited. Hence, the importance of science in modern efforts to deal with disasters, especially disease (Beck, 2008). Consequently, micro-processes of solidarity must be understood as filtered through and reciprocally acting on the existing institutional complex or arrangement of a given society. Theoretically, a strong, positive response that appears to be protective will provide a barrier against the threats to our everyday experiences, whereas a weak or incompetent response will not only expose the lack of a safety net, thereby increasing the anomic response, but also will have real consequences for how risk is distributed. The State's inequitable response under ordinary conditions, particularly those historically burdened by the uneven distribution of resources, exacerbates these conditions and further widens already disintegrative fissures. In what follows, I examine the more general suppositions surrounding institutions, solidarity, and disease and then some key details that underscore why the US has been especially vulnerable to the pandemic, particularly leading up to the 2020 election. Institutional Risk To some extent, we live in what Beck (1992, 2008) termed a "risk society," or a world in which the identifying, processing, and triaging of-and, I might add inoculation against-risk has been increasingly centralized into the structure and culture of the polity and its reliance on science. Centralization, as it always has been, is both a source of risk-especially in highly bureaucratized states that often ruthlessly determine what a hazard is and how to calculate reasonable loss-and a source of solution given its monopoly over implementing policies meant to contain or reduce risk. Besides the shadow of political expediency that shapes risk management, the polity's principal weakness-again, exacerbated in modernity-is the fact that they are arenas in which status groups and class interests, usually filtered through party machinations, clash. Put differently, States are typically rational and thus operate on expedient decision-making (Scott, 1998), but this expediency is greatly shaped by political agendas of parties in control and those striving to take control. Because political parties vary in their composition, control over the state can exacerbate (or dampen) existing patterns of inequality that lead to the off-lining of risk to some disadvantaged regions, populations, and, in the world-system, nations. Typically speaking, polities rarely "win" against diseases, though they may outlive them. Sometimes this is because polities respond inefficaciously, but the principal reason polities struggle is that diseases are "irrational" and rooted in their contaminative capacity. For instance, the first States, built on alluvial flood plains, invited their own destruction as canals and irrigation also gave rise to a new barbarian and threat to "civilization": a water-borne disease, schistosomiasis. Polities continuously stumbled and fell whenever schistosomiasis spread through the farmlands, causing debilitating lethargy and sudden, insurmountable shortages in food (McNeill, 1976). Their stealthy illegibility makes diseases resistant to political solutions, in many cases, and, therefore, elusive to risk management. 
Not surprisingly, in ancient states, the invisibility, lack of control, and sudden devastation conditioned a discourse and set of techniques fashioned around the idea of danger. Dangers are built on invisible, blameless calamities and exigencies (Luhmann, 2005), and require an older, morally "thick" language that contrasts with the technocratic nature of risk (Douglas, 1990). Thus, while Beck argues that problems in postindustrial, postmodern societies are identified, processed, and triaged through risk discourse, COVID-19 has shattered the idea that modern humans are superior to their premodern ancestors. The forensics of disease are refracted through a discourse of danger, not risk; the latter proves too sterile, too antiseptic to manage the contaminative edge that is experienced in daily life. A side effect of the danger discourse is the moral nature of blame, a factor that adds to the threat of social contamination and the construction of diffuse beliefs about who is to blame as well as what classes of people pose the most danger. Thus, while the institutional apparatus appears to be failing, people turn to either the State or extra-State actors to erect new and strengthen old barriers between those deemed "normal" and those who are dangerous. To be sure, these inherent institutional weaknesses were exacerbated by, say, the US government's response. The political calculus of Trump made for a volatile mix with the conservative position in the culture war against any intrusion by the State in personal, local, or state affairs. Unfortunately, the polity's reliance on the scientific institution for risk management techniques further exposed the weaknesses of institutional solutions, as inherent weaknesses in science combined with the Trump administration's weak response to reduce confidence and trust in scientific techniques. Where polities deal in expediency, science deals in probabilities, knowledge accumulation, and the unattainable and intangible goal of truth. Its fruits are enjoyed in material pleasures such as air conditioning, air travel, and the internet. But, science has never been effective at dealing with the three ontological problems identified by Geertz (1966): uncertainty, suffering, and evil, because, in its ideal typical form, science is disinterested and lacks an affectual connection. It may explain why a person's heart gives out, but it offers no comfort or solace. Indeed, rather than answer ontological dilemmas, science has sometimes exacerbated them-for example, Nagasaki/ Hiroshima. However, the blending of science and polity in risk societies has had a decent track record, at least in terms of public health, since the 19 th century, which is why a pandemic is so interesting; and horrifying. Most notably, the State's inability to contain the disease, much like the ineffectual nature of States in zombie movies, and science's inability to immediately diagnose and treat the disease (as unfair an expectation as that may be), not only hastened biological contamination but also presented a new social danger. In part, science's "failure" was due to its own cognitive and emotional limitations built upon the blind spots inherent in any community of knowledge producers (Molteni, 2021). But, science was also unfairly put to the test in a 24-hour world driven by instantaneous demands for results. Either way, COVID-19 raised significant questions about how safe the State and science make us or can make us. 
The final piece that helps explain risk management and its inability to contain dangers stems from the economic system that undergirds all human activities, including politics and science, is capitalism. Western states rely on consumption, and consumption requires employment and a lack of interruption in economic activities. The US was stuck between its commitment to the collective and to sustaining the economic sphere, exposing further weaknesses in the institutional abilities to avoid danger and maintain social solidarity. In short, history repeats itself: an invisible "barbarian" has done what social movements of all sorts have failed to do: expose the tenuousness and exploitative nature of the institutional apparatus that presumably stabilizes and encourages solidarity. Apparatus that is further weakened by existing political, economic, and cultural fissures that dot any given nation's landscape. Risk Refracted Through Personal and Impersonal Societies Solidarity, then, is endangered by the polity's ability to keep the fissures from cracking, but a polarized polity (Beck 2008, p. !2), like the US has its hands tied behind its back as it navigates the containment of contamination. To oversimplify for the sake of argument, the US is largely divided into two tribes. Risk and danger are perceived differently by these two different parts of the US, in part because of the micro/meso mechanisms by which the self is sacralized. The conservative image of community envisions the moral center as palpable, tangible, and directly available. Here, actors commune directly with their moral center, together in collective assemblies composed of mundane interactions at the post office and spectacular Sunday rituals in a (mega) church (Wuthnow, 2018, pp. 30-32). Their direct, local experiences have become defined as American and thus threats to it, to the ability to do community, are threats to both the moral center and each member's personal identity. In a penetrating study of the federal government's role in protecting citizens against environmental dangers, these feelings were clearly expressed over and over again. "It was hard enough to trust people close at hand, and very hard to trust those far away; to locally rooted people, Washington D.C. felt very far away [resulting in a threat of] frightening loss. . .of their cultural home, their place in the world, and their honor" (Hochschild, 2016, p. 54). For members of small towns, the horrors of the plague are less rooted in the threats to the individual self, but rather in solidarity-producing moral anchor. This notion of risk is buttressed by the local assemblage of institutional space: science is replaced by a particularistic, local religio-kin type system (that includes members in the community as fictive kin through extensive moral ties) fused with community politics, which, in turn provide a protective barrier against the distant, a/immoral national polity and global economy . Consequently, the danger discourse shifts the blame to threats to local autonomy (the "nanny state"), liberal transgressions against Christian values, and modernity's threat to the traditional family. For the other "half" of America-if that is not too simplistic a label-the pandemic casts the cohesive nature of the US in doubt, too. In part, this is due to the specific time and place: the Trump administration was not viewed as legitimate by many and its eponymous leader was viewed as a real threat to what this segment deems American. 
Like Nixon and Watergate (Alexander, 1988), Trump is perceived as polluting the political center on a spiritual and practical level. The chaotic response-for example, throwing away the Obama pandemic "playbook" (Knight, 2020)-undermined the polity's capacity to practically and symbolically contain the contamination and reflected the belief that the state had become a patrimonial tool for the pleasure of the administration; a situation that was not simply offensive on a cognitive level, but a moral level for many in this US tribe. Like their small-town counterparts, these communities reached the same conclusions, but for different reasons: the federal government's claim to cognitive sovereignty was doing more harm than good. The two tribes saw the solutions as different, with the conservative response being more local autonomy . and their counterparts being in favor of a stronger, more competent State response (Haffajee & Mello, 2020). This divide easily mapped onto the everyday experience of danger that Blue State denizens faced or feared they would face, as dense urban spaces not only facilitated biological contamination but increased the opportunities to encounter social dangers posed by mask-resisters, including as well as anti-"vaxxers" who tend to disproportionately live in urban areas (Olive et al., 2018). In short, the weaknesses of the American institutional apparatus made salient the cleavages, inequities, and polarities that, in times of calm, are often washed over by economic life. Besides the ideological divide highlighted above, economic and racial inequalities have also predictably become pressure points that have both exposed the weaknesses of the polity and made it weaker at the same time. Though it would be naïve to minimize Black American's grievances or the working class's precarious position, it is plausible to wonder whether these divides would have bubbled to the top so ferociously had the polity handled the pandemic more effectively. The phenomenological terror of social danger, biological risk, and the inability to reproduce self and the social/moral order leaves folks feeling acute anomie. And, it is this macro-level layer of anomie combined with the social and biological threats tearing at the fabric of everyday life that has driven protests, civil unrest (among "both" Americas for different reasons), and, at best, made a hopeful future cloudy; and at worst, impossible. Final Thoughts In spite of his horror and pessimism, Durkheim's sociology expressed optimism in humanity. Disruptions, though potentially fatal for society, also brought opportunities to reshape how things are. After the Spanish Influenza killed 50 million worldwide (650,000 of which were Americans) and infected scores more, the world went back to "normal." The "roaring 20s" in the United States was filled with gatherings of many people; baseball stadiums were soon filled; and, the handshake remained to this day, the principal greeting in polite and depersonalized society. Perhaps, then, 2020 (and, probably 2021) are blips on an otherwise steady march toward greater empathy and equity? The argument against this optimism, however, points to the historical context of the Spanish Influenza: the 1920s followed on the heels of World War I, in which a significant number of young men were killed alongside the tendency for the flu to kill younger people (relative to COVID-19s mortality rates). 
Likewise, the US, today, is very different from the 1920s, when Jim Crow laws reduced the need for White communities to find ways to generate solidarity with communities of color, especially Black communities. Perhaps repairing the US and building solidarity seems more difficult today than in the early 20th century. Setting aside these longer-term questions, it is also reasonable to ask two related questions: how long will COVID-19s effects linger? The vaccines seem effective against many variants, but the resistance to vaccines throughout the West and the inequitable access to them throughout the world has complicated the slowdown of COVID-19. The discourse of danger also threatens to crystallize mask-wearing and social distancing as signs of belongingness to one political tribe vis-à-vis another. What might no longer be necessary for public health, may remain markers of moral distinction. The second question is premised on the assumption that this is not the last pandemic contemporary humans will experience. Do political and scientific institutions learn from their successes and, more importantly, failures, or are diseases so unpredictable that we are doomed to repeat our mistakes? So long as the US remains polarized, it is difficult to imagine the US adapting, though much of the rest of the world may be better prepared. As we move down to the meso and micro-levels of social reality, we see more immediate changes possible. The US in the 1920s was already accustomed to belonging to voluntary associations. Perhaps the pandemic has laid bare just how much we miss these types of organizational attachments. The reliance on digitally mediated social space continues to prove, neurobiologically and sociologically, unsatisfying for most (Kalkhoff et al., 2020;Tufekci & Brashears, 2014), and thus, a trend toward more "joining" may arise if and when a sense of safeness replaces that of danger. It is also possible that Americans are motivated to re-make the idea of joining, and hybridize modern life with traditional collective life. Youth have been leaving dense urban spaces for some time, and it may be the case that the suburbs are reimagined with multi-generational family life emerging in planned communities that thrive on distance employment. Perhaps more interesting is what the post-pandemic collective American identity will look like, especially among young Americans who have come of age post-9/11 and have lived through some highly traumatizing periods of time. Though they fought in no wars like their grandparents or great-grandparents, they have endured myriad economic downturns, highly visible displays of racialized violence, and an impending sense of hopelessness as climate change has, thus far, remained beyond their control. Collective and cultural trauma is all about identity (Alexander, 2004;Erikson, 1994): groups can incorporate the experience and interpretation of trauma into their own sense of self, elevating it to one of, if not the, defining attribute of who they are. The self can only sustain being defiled and contaminated for so long before defensive measures are put in place. What COVID-19 does to these cohorts, both directly in the experience of masks and social distancing, and indirectly in the form of economic vulnerability, racial injustice, and civil unrest remains to be seen. Tempting as it may be to end on a fatalistic note, it is imperative to contextualize the now and push back against the allure of "golden ageism." 
Human societies have faced severe pressures in the form of wars, diseases, natural disasters, and so forth several times over. The stakes are higher now only because the size and scale of failure are exponentially greater. But, telling the future is not social science's strength precisely because human ingenuity has been our greatest adaptive characteristic. Just as collapse is commonplace in history, so too are resilience, reconfiguration, and revival.
v3-fos-license
2023-02-05T16:04:24.044Z
2023-02-03T00:00:00.000
256581729
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.scielo.br/j/mr/a/yGQhNM3kcBxbpVLRJWCPDnL/?format=pdf&lang=en", "pdf_hash": "9769f7073f15dd4f2aa83293d6412ddf2d3c3d07", "pdf_src": "Dynamic", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:303", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "sha1": "36821f8d40fd4ef7f1226eef738665896d431c56", "year": 2023 }
pes2o/s2orc
Directional Solidification of Aluminum A360 under Moderate DC Magnetic Field and Electric Current Abstract Metal additive manufacturing is a rapidly developing technology, but its application on a wider scale is limited by several factors. One of these is expensive raw material, because the feedstock must meet certain physical requirements. The two most popular metal additive manufacturing methods are printing from powder and printing from wire. Wire is usually produced by drawing it from rod. Rod can be produced by directional solidification, which is a well-known method for studying how the microstructure forms depending on various parameters during solidification. In this study, directional solidification of the A360 aluminum alloy under electromagnetic interaction is investigated. The aluminum alloy is induction melted and then directionally solidified into a rod 12-20 mm in diameter. The aim of this work is to investigate the role of the interaction between an axial DC magnetic field and an electric current on the grain refinement and mechanical properties of the A360 aluminum alloy. It is found that electromagnetic interaction can be an approach to refine grains, regulate the growth of oriented columnar grains, and improve the mechanical properties of the material. Introduction Additive manufacturing of metals unlocks new possibilities to produce complicated, custom-shaped parts in a short time and with less material consumption. Nowadays the metal 3D printing market is a rapidly growing industry with a wide variety of applications 1 . With market growth, more resources are invested in the development of new technologies and materials for additive manufacturing. For metal additive manufacturing it is important that the raw material has certain properties and a fine-grained isotropic microstructure. Aluminum alloys are among the promising materials for additive manufacturing because of their wide range of applications and low melting temperature 2 . Aluminum is attractive in many engineering applications because it is a common material available in many different alloys with fine-tuned properties for specific applications. There are several additive manufacturing methods by which aluminum parts can be produced. The most common is additive manufacturing from powder 3 . Powder is sintered by laser or electric arc layer by layer. This process has several drawbacks: it is slow, produces a lot of waste, and the raw material is expensive. For additive manufacturing, aluminum powder should consist of spherical particles with a narrow size distribution and an isotropic microstructure. Such powder is usually produced by an atomization process 4 . An alternative method is additive manufacturing using wire as the starting material. Additive manufacturing from wire is cheaper and allows higher production speed, but accuracy is lower and the achievable shapes are more limited. This process is in fact a metal inert gas welding process, where aluminum wire is welded to the part under an inert gas stream 5 . Aluminum wire can be welded by electric arc or laser 6 . Wire for aluminum additive manufacturing is produced by drawing it from rod. Wire material for additive manufacturing needs to have a fine-grained isotropic structure, low porosity, and good homogeneity. These rods are produced by directional solidification to avoid shrinkage porosity and large oriented grain growth. Direct chill casting is one method for rod production, where the rod is pulled from the liquid melt and the solid part is chilled by liquid or gas jets 7 . 
Control of the solidification process of metallic alloys is important because the properties of the same alloy may vary widely depending on the conditions and processing during solidification. One approach is control of the melt flow during solidification, which greatly affects the local heat and mass transfer near the solidification interface. Contactless electromagnetic methods for improving the solidification process are one way to decrease grain size and improve the homogeneity of metal alloys 8 . Electromagnetic interaction during directional solidification of metal alloys and composites has been investigated in various contexts, showing that in many cases it is a means to achieve a different microstructure and improved mechanical properties 9 . There are numerous research works on various electromagnetic methods applied during directional solidification of metallic alloys. A static magnetic field is used to damp the liquid-phase convection. It has been shown that even a moderate magnetic field of 0.1 T is sufficient to significantly change the molten-phase flow of a Sn-Pb alloy, as demonstrated by Hachani et al. 10 . If the solidification velocity is low (up to 0.5 mm/s), directional solidification is significantly affected by a static magnetic field, where the main mechanism for several metals can be the thermoelectromagnetic effect at the solidification interface 11 . DC and pulsed electromagnetic interactions are known to have an effect on dendrite breaking and grain refining 12 . The combination of an alternating magnetic field and a DC magnetic field has been studied by several authors, showing the significance of these contactless methods for the microstructure and homogeneity of the directionally solidified material 13,14 . Experiment In this work we aim to experimentally study a simultaneously applied static magnetic field and DC electric current passed through the directionally solidified aluminum alloy rod. For these experiments we chose the commercial A360 alloy, which is a well-known die-casting material. This alloy has a dendritic solidification interface with a pronounced mushy zone between the solid and liquid phases 15 . Our experimental setup is designed to investigate electromagnetic interaction at the solidification interface during directional solidification of aluminum alloys using water-jet direct chill casting. The magnetic field at the solidification interface has only an axial component, but the field gradient is rather high, so the precise position of the solidification interface is important. The concept of the permanent magnet assembly is similar to that used in a liquid silicon processing experiment 16 . Electric current can be applied through the solidification interface. One electrode is connected to the bottom seed rod, while the top electrode is immersed in the upper crucible with molten aluminum, as shown in the experimental scheme in Figure 1a. Aluminum is melted and heated up to 700 °C by induction in the top crucible. The crucible is sealed and an argon pressure of 0.25 bar is applied to increase the metal flow and to overcome the surface tension in the case of a small metal depth. The bottom seed rod is lowered by a programmable stepper motor. Solidification takes place in the middle of a thermally resistant boron nitride tube. The thermal problem is solved numerically in an axisymmetric model using Comsol Multiphysics software. Results are shown in Figure 1b. 
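To give a feel for the kind of thermal calculation referred to above, the sketch below solves a much-simplified one-dimensional steady advection-conduction problem along the rod axis. It is not the axisymmetric Comsol model of the paper: the material properties, domain length, and boundary temperatures are assumed representative values, and latent heat release and radial heat extraction by the water jets are neglected. With these assumptions it returns an axial gradient of the same order as the 7 K/mm quoted later in the text, but it should be read only as an illustration of the setup of such a calculation.

```python
# Rough 1D steady advection-conduction estimate of the axial temperature
# profile in the pulled rod (illustrative only; all values are assumptions).
import numpy as np

k   = 120.0   # thermal conductivity, W/(m*K), assumed for an A360-type alloy
rho = 2600.0  # density, kg/m^3, assumed
cp  = 1000.0  # specific heat, J/(kg*K), assumed
v   = 2e-3    # pulling (solidification) velocity, m/s

L      = 0.060   # domain length, m (assumed)
T_hot  = 700.0   # deg C, melt side
T_cold = 100.0   # deg C, chilled seed side (assumed)
n      = 601
z  = np.linspace(0.0, L, n)
dz = z[1] - z[0]

# Steady balance rho*cp*v*dT/dz = k*d2T/dz2, central differences,
# assembled as a linear system with Dirichlet boundary conditions.
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = T_hot, T_cold
for i in range(1, n - 1):
    A[i, i - 1] = k / dz**2 + rho * cp * v / (2 * dz)
    A[i, i]     = -2 * k / dz**2
    A[i, i + 1] = k / dz**2 - rho * cp * v / (2 * dz)
T = np.linalg.solve(A, b)

# Gradient magnitude near an assumed solidification isotherm (~570 deg C)
i_front = np.argmin(np.abs(T - 570.0))
G = (T[i_front - 1] - T[i_front + 1]) / (2 * dz)
print(f"estimated axial gradient at the front: {G / 1000:.1f} K/mm")
```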
A static magnetic field of 0.45 T is provided by a permanent magnet assembly placed around the solidification zone. The magnet is assembled from 16 segment-shaped NdFeB N42 grade magnets and an outer iron yoke; in this way the magnetic flux density in the 80 mm diameter bore of the magnet is increased. The magnet system is shown in Figure 2A, and the numerical model of the magnetic flux density calculation is shown in Figure 2B. Calculated magnetic field values along the central axis and along the radial coordinate at the middle of the magnet system are shown in Figure 2C and Figure 2D. If an electric current is applied parallel to the magnetic field, a Lorentz force appears at the solidification interface. In such a configuration the Lorentz force drives small-scale melt rotation around each individual dendrite 17 . This microscale convection influences the heat and mass transfer. Materials and Methods The A360 aluminum alloy is used for the experiments as it is an alloy with well-known properties, which are summarized in Table 1. The primary dendrite size for this alloy solidified at a velocity of 2 mm/s is around 50 µm 19 , which agrees well with observations from our experiments. An electric current of 157 A is applied through the 20 mm diameter rod, giving a current density of 0.5 A/mm². At the solidification interface the magnetic field is applied parallel to the electric current direction, so a Lorentz force appears only where the current diverges or converges. A schematic picture of the dendrite geometry during solidification is shown in Figure 3. The mushy zone thickness for a binary alloy can be estimated from Equation 1 20 . For the A360 alloy, with a temperature gradient at the solidification interface of 7 K/mm, the estimated mushy zone thickness is 0.35 mm. This estimate is in good agreement with the primary grain structure observed in our experiments. To estimate the melt flow velocity during solidification, characteristic dimensionless numbers are calculated. MHD flow is characterized by the interaction parameter N, which indicates the ratio between electromagnetic and inertial forces, while the Hartmann number characterizes the ratio of electromagnetic to viscous forces. Calculating these dimensionless numbers for our experimental setup, using the material properties from Table 1, we get N = 2000 and Ha = 60, which means that inertial and viscous forces are small and the melt flow is mainly governed by electromagnetic forces. The crucible-scale flow in this case is small, because the numerical model shows that the solidification front is flat. The dendrite-scale melt flow velocity can be estimated by solving a simplified Navier-Stokes equation, balancing the viscous and electromagnetic forces and neglecting the inertial term. The main components of the electric current and magnetic field are parallel. The perpendicular current component depends on the ratio between the electrical conductivities and on the morphology of the mushy zone, which is represented by the constant c in Equation 4. The constant c represents the diameter-to-length ratio of the primary dendrites; it can be estimated from the mushy zone thickness and the characteristic grain size in the microstructure for a given alloy. In our case we can assume c = 1/5. From Equation 4 we calculate that the characteristic convection velocity is 0.8 mm/s. This means that the local convection around each dendrite arm is rather fast, making about two revolutions per second. 
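The expressions referenced above as Equations 2-4 are not reproduced in the extracted text, so, for orientation only, the block below gives the conventional definitions of the two dimensionless groups and the type of force balance described in the paragraph. The characteristic length L_0 and velocity u_0 (for example, a rod-scale length and the pulling speed) are my assumptions, not values taken from the paper, and the third line is only the generic Stokes-type balance, not the paper's exact Equation 4 with its constant c.

```latex
% Conventional forms (assumed, not the paper's exact Equations 2-4).
% sigma - electrical conductivity, rho - density, nu - kinematic viscosity,
% B - magnetic flux density, L_0, u_0 - assumed characteristic scales.
\begin{align}
  N &= \frac{\sigma B^{2} L_{0}}{\rho\, u_{0}}
    \quad\text{(interaction parameter: electromagnetic vs. inertial forces)} \\
  \mathrm{Ha} &= B L_{0} \sqrt{\frac{\sigma}{\rho \nu}}
    \quad\text{(Hartmann number: electromagnetic vs. viscous forces)} \\
  \rho \nu\, \nabla^{2}\mathbf{v} &\approx -\,\mathbf{j}_{\perp} \times \mathbf{B}
    \quad\text{(Stokes-type balance behind the dendrite-scale velocity estimate)}
\end{align}
```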
Results and Discussion

In the interdendritic space the melt flow from two neighboring dendrites partly cancels out, while near the crucible wall the flow is not compensated. From these analytical calculations we may conclude that small-scale convection leads to improved heat and mass transfer near the dendrites. Convection is the mechanism which helps to dissipate the latent heat and homogenize the concentration. A numerical simulation calculating the current distribution, Lorentz force and liquid phase flow in the mushy zone was developed in Comsol Multiphysics 6.0. A three-dimensional model on a simplified dendrite mesh shows that the current has a component perpendicular to the axis near the tips of the dendrites, as shown in Figure 4a. The Lorentz force distribution and the velocity at the dendrite middle plane are shown in Figure 4b and Figure 4c. The velocity field confirms that in the middle between dendrites the flow is small, while it increases near the crucible wall. In the vertical cross section the velocity field does not extend far above the mushy zone, as shown in Figure 4d. The results of the numerical model are summarized in Figure 4. The agreement between the analytical velocity estimate and the numerical model is good.

We conducted a series of directional solidification experiments. The experimental setup is similar to the classic Bridgman solidification setup 21. Directional solidification experiments with the A360 aluminum alloy were done with a solidification velocity of 2 mm/s. After directional solidification the samples are cut, and microscopy, microhardness and tensile strength tests are performed. Microscopy pictures of the solidified samples are summarized in Figure 5, showing both transverse and longitudinal cross sections of the crystallized samples. The experimental results demonstrate that solidification without electromagnetic fields leads to a columnar microstructure, which can be seen in Figure 5b. An applied static magnetic field causes this longitudinal structure to disappear, as shown in Figure 5d. Such a shift, the columnar-to-equiaxed grain structure transition due to an electromagnetic effect, is known and has been reported in several scientific works. If an electric current is injected parallel to the magnetic field, significant small-scale melt convection takes place around the primary dendrites. This leads to radically increased heat transfer between the solid and liquid phases, resulting in the formation of a fine-grained equiaxed structure and many eutectic phase regions where the components are well mixed.

The directionally solidified rod is machined into test samples 6 mm in diameter according to the ASTM E8 standard. Before mechanical testing the samples were heat treated according to the T6 heat treatment process. The ultimate tensile strength of the materials is tested using Zwick/Roell Z100 testing equipment. Our tested sample and the 8 mm displacement sensor are shown in Figure 6a, while the collection of samples is shown in Figure 6b. The standard A360 alloy ultimate tensile strength (UTS) is 317 MPa 22. The tensile force versus displacement of all three samples is shown in Figure 6c. From these data the mechanical properties are calculated and summarized in Table 2.
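The UTS values discussed in the next paragraph follow from the standard engineering-stress definition UTS = F_max / A0 applied to the 6 mm diameter ASTM E8 specimens. The short Python sketch below illustrates the calculation; the peak forces are hypothetical placeholders chosen only to land near the reported stress levels and are not measured data.

import math

DIAMETER_MM = 6.0                              # ASTM E8 specimen diameter used in this work
AREA_MM2 = math.pi * (DIAMETER_MM / 2.0) ** 2  # initial cross-sectional area A0 in mm^2

# Hypothetical peak tensile forces in newtons (placeholders, not test data)
peak_forces_N = {
    "reference": 9.4e3,
    "magnetic field + current": 9.8e3,
    "magnetic field only": 11.2e3,
}

for label, f_max in peak_forces_N.items():
    uts_mpa = f_max / AREA_MM2  # N / mm^2 is numerically equal to MPa
    print(f"{label}: UTS approximately {uts_mpa:.0f} MPa")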
For all obtained samples the UTS was above the standard value: 332 MPa for the reference, 346 MPa for the sample with magnetic field and current, and, surprisingly, 395 MPa for the sample with magnetic field only. For a detailed analysis a larger number of UTS tests should be done; however, the tests show that at least some electromagnetic treatment during crystallization can give an improvement in ultimate tensile strength. A similar tendency was observed in the fracture behaviour: the electromagnetically treated samples are more ductile.

The experimental results demonstrate that the microstructure of the solidified A360 aluminum alloy is affected by the applied electromagnetic interaction. It is found that an axial magnetic field applied during directional solidification leads to a finer grain structure and affects the columnar-to-equiaxed transition (CET). It is known that the CET can be affected by applied fields 23. The magnetic field has several effects on the liquid metal flow. Firstly, large-scale convection perpendicular to the magnetic field is suppressed, and secondly, the electric current and magnetic field create new convection in the mushy zone. This type of microcirculation around the oriented primary dendrite arms affects the heat transport and is responsible for the improved microstructure. The numerical model of electromagnetic melt convection in Figure 4c and the analytical estimate show that a melt flow of 1 mm/s exists in the mushy zone as a result of the interaction between the current and the applied magnetic field. The directionally solidified material has better mechanical properties than the die-cast sample. A directionally solidified sample with improved properties can be a prospective raw material for drawing into high-quality wire suitable for additive manufacturing.

Conclusions

This work demonstrates that the directional solidification microstructure of the aluminum A360 alloy can be modified by an applied electromagnetic interaction combining an axial static magnetic field and a DC electric current. The Lorentz force modifies the columnar-to-equiaxed transition and refines the grains, leading to a more isotropic structure of the directionally solidified ingot. Improved mechanical properties are observed in directionally solidified samples with forced convection caused by electromagnetic forces. As a continuation of this work, it is planned to develop this method of electromagnetically improved casting for Al alloys and Al-based metal matrix composites.

Acknowledgments

This research is supported by ERDF project "Electromagnetic technology with nano-particle reinforced light alloy crystallization process for 3D additive manufacturing applications" (No.

Figure 1. Directional solidification experiment: a) Cross section of the experimental setup; b) Temperature distribution along the symmetry axis.

Figure 2. Permanent magnet system to create a 0.45 T DC magnetic field in an 80 mm diameter bore: a) Assembled magnet system; b) Axisymmetric numerical model; c) magnetic field induction along the y axis; d) magnetic field induction along the x axis.

Figure 3. Schematic image of dendritic directional solidification and the mushy zone. Convection caused by the electric current and magnetic field: u - crucible scale, v - dendrite scale.

Figure 4. Numerical simulation results of melt flow in the mushy zone: a) Electric current distribution; b) Lorentz force distribution at the middle height; c) Velocity distribution at the middle height; d) Velocity magnitude distribution in the vertical plane.
Figure 6. Mechanical test results: a) Elongation measurement sensor on the sample; b) Samples prepared according to the ASTM E8 standard; c) Experimental results for different A360 samples.

Table 2. Comparison of mechanical properties between A360 alloy ingots solidified in various regimes.

Young's modulus of the different samples does not show significant variation and agrees well with the value from the literature. Microhardness tests show that the Vickers hardness of all samples is 128 HV.
v3-fos-license
2019-01-05T23:48:40.632Z
2018-12-01T00:00:00.000
57375424
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://microbiomejournal.biomedcentral.com/track/pdf/10.1186/s40168-018-0615-0", "pdf_hash": "f8881ca52cf30423917c543bc35eebeb62da5450", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:304", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "sha1": "097bdf8085fe438d534d10e19631bca2e9f08b81", "year": 2018 }
pes2o/s2orc
Rhizosphere microorganisms can influence the timing of plant flowering. BACKGROUND Plant phenology has crucial biological, physical, and chemical effects on the biosphere. Phenological drivers have largely been studied, but the role of plant microbiota, particularly rhizosphere microbiota, has not been considered. RESULTS We discovered that rhizosphere microbial communities could modulate the timing of flowering of Arabidopsis thaliana. Rhizosphere microorganisms that increased and prolonged N bioavailability by nitrification delayed flowering by converting tryptophan to the phytohormone indole acetic acid (IAA), thus downregulating genes that trigger flowering, and stimulating further plant growth. The addition of IAA to hydroponic cultures confirmed this metabolic network. CONCLUSIONS We document a novel metabolic network in which soil microbiota influenced plant flowering time, thus shedding light on the key role of soil microbiota on plant functioning. This opens up multiple opportunities for application, from helping to mitigate some of the effects of climate change and environmental stress on plants (e.g. abnormal temperature variation, drought, salinity) to manipulating plant characteristics using microbial inocula to increase crop potential. Background Climate change has altered plant phenology [1,2]. This has crucial biological, physical, and chemical effects on the biosphere and the earth system [3]. These phenological alterations have become a subject of great interest in ecological and environmental sciences. Changes to phenology have been attributed to multiple factors, including warming [4], but the role of plant microbiota and particularly rhizosphere microbiota has not been considered. And yet, the rhizosphere harbors a diverse community of microorganisms that play critical roles in plant growth and reproduction [5,6]. We know that rhizosphere microbiota protect against pathogens, improve growth by producing phytohormones, and may help plants withstand environmental perturbations such as abnormal variation in temperature, drought, and salinity related to climate [7][8][9][10]. Recent studies also suggest that root microbiota can contribute to phenotypic plasticity, which has important implications for our understanding of plant phenology in a changing climate and for increasing crop production [11,12]. Several auxins have decisive functions in the establishment of plant developmental and reproductive programs [13,14]. Auxins can be synthesized by rhizosphere microorganisms [15,16], raising the intriguing possibility that root microbiota may regulate plant growth and development through phytohormone production. Understanding the interactions between plant microbiota, root exudation, and plant growth and reproduction, however, remains limited, despite significant advances in the last decade [17]. Root exudates account for 5-21% of total photosynthetically fixed carbon and help drive the composition of rhizosphere communities [18,19]. Exudates may be excess plant products [20,21], but they can also contain signaling and chemoattractant molecules. These molecules recruit beneficial microorganisms that contribute to pathogen resistance, water retention, and the synthesis of growth-promoting hormones [22], and may influence plant phenotype [23]. Interactions between exudates, soil microbiota, and plant physiology have the potential to dynamically affect rhizospheric communities and alter plant phenotypes by complex feedback mechanisms. 
We studied the molecular interactions among root exudates, rhizosphere microbiota, and plant physiology in wild-type (Wt) and mutant (pgr5) plants of Arabidopsis thaliana (hereafter Arabidopsis) and identified a novel network of molecular interactions linking the nitrogen cycle, the phytohormone IAA produced from tryptophan (Trp), and the timing of flowering. These results thus provide evidence of an outstanding phenomenon: the timing of plant flowering may be influenced by soil microbiota.

Results and discussion

Rhizosphere microbiota can delay the onset of flowering of Wt Arabidopsis

Multiple generations of experimental adaptation/acclimation could be used to observe microbially mediated mechanisms of plant growth and reproduction [24][25][26]. To test whether multiple generations of rhizosphere microbiota can induce earlier or delayed flowering, we measured the phenotypic parameters of Arabidopsis plants growing in soil for three generations (G1, G2, or G3) inoculated with different soil microbiomes (Fig. 1a). The phenotypic parameters of Wt Arabidopsis did not change significantly in plants treated with microbiota isolated from the roots of wild-type plants (Wt-M treatment) compared to the corresponding control (growing in sterilized soil without addition of soil microbiomes) during the first two generations (Additional file 1: Table S1). This indicates that plants in sterilized soil could grow as well as those grown in sterilized soil inoculated with a live microbial slurry during the first two generations. Wt-M rhizosphere microbiota significantly affected flowering and reproduction by the third generation. Flowering time in the G3-Wt-M group was significantly delayed, by approximately 3 days, and silique number increased significantly. The richness and diversity of the rhizosphere microorganisms tended to decrease in parallel with the changes in Arabidopsis physiology over the three generations.

Fig. 1 Design of the microcosm experiment across generations and the profiles of rhizosphere microbial communities. a The experimental operation diagram of soil microbiota selection by plants; b rhizosphere microbial richness (ACE index and number of species per sample); c relative abundance of the 10 most abundant microbial phyla across three generations of Arabidopsis grown in microcosms. G-Wt-M represents microbiota in the corresponding generation of wild-type; G-pgr5-M represents microbiota in the corresponding generation of pgr5 Arabidopsis. Taxonomy details and significance analysis are shown in Additional file 2. Different letters represent significant differences (ANOVA followed by an LSD test; p < 0.05). Values are means ± SDs (n = 3)
The results suggest that the selected enriched microbes during the three generations play crucial roles in modulating plant flowering time. Rhizosphere microbiota advanced the flowering time of the pgr5 mutant To better understand the relationships between microbiota, exudates, and flowering time, we used an Arabidopsis mutant (for the PGR5 gene that encodes a novel thylakoid membrane protein) that grows as well as Wt plants in the vegetative phase [28,29]. The pgr5 mutant is deficient in antimycin A-sensitive cyclic flow from ferredoxin to plastoquinone, which is one of the most crucial physiological processes for efficient photosynthesis [28]. Because of defects in photosynthesis, the pgr5 mutant produces different exudates from the Wt Arabidopsis. The phenotype of the pgr5 mutant was unchanged in generations 1 and 2 in the group treated with pgr5 microbiota (pgr5-M) compared to the corresponding control (without the addition of soil microbiomes), as for the Wt Arabidopsis. The flowering time of the pgr5-M-treated group, however, was nearly 4 days earlier by generation 3, in contrast to Wt, and the silique number was significantly lower (Additional file 1: Table S1). The changes in plant reproduction induced by the microbial manipulations were probably not driven by rhizosphere diversity, because the Shannon and richness indices of the pgr5-Mand Wt-M-treated groups did not generally differ over the three generations (Additional file 1: Figure S1). The ACE data, though, differed after one generation (Fig. 1b). The relative abundances of the phyla differed only marginally between the two Arabidopsis lines in each generation (Fig. 1c). Rare rhizosphere microbes potentially affected flowering time The abundances of microbial phyla were relatively constant (see above), but abundances between the WM-(microbiota from the third-generation Wt cultures) and the PM (microbiota from the third-generation pgr5 cultures)-treated groups differed more at lower taxonomic levels. The relative abundances of 77 rhizosphere genera differed significantly between the WM and PM treatment by generation 3, by at least a factor of two. A total of 41 genera were enriched in the WM treatment relative to the PM treatment, and 36 genera were enriched in the PM treatment (Additional file 1: Table S2 and Additional file 3). Most of the enriched rhizosphere microorganisms were initially rare (relative abundance < 1%), such as Emticicia, Methylobacterium, and Filimonas, suggesting that rare rhizosphere microbes might play a role in modulating Arabidopsis flowering. Rare microbes can be involved in soil biochemical processes and as active modulators of plant growth and resistance to pathogens [30]. The enriched microbes in the WM treatment mostly have key roles in rhizosphere N regeneration or in maintaining plant growth (Additional file 1: Table S3) [16,[31][32][33][34]. Indeed, Bacillus, enriched in the WM treatment, can potentially contribute to soil N fixation [35]. Potential denitrifying organisms such as Stenotrophomonas and Emticicia, though, were enriched in the PM treatment [31,36]. We hypothesize that an increase in N fixation and cycling in Wt Arabidopsis associated with the microbiota, and the potential increase in denitrification in the pgr5 mutant, could help increase the duration of N bioavailability in Wt Arabidopsis relative to the pgr5 mutant. 
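The enrichment criterion used above (relative abundance differing between the WM- and PM-treated groups by at least a factor of two at p < 0.05) amounts to a simple filter over the genus-level abundance table. A minimal Python sketch of that selection is given below; the abundances and p values are placeholders rather than the study data, and the genus assignments are illustrative only.

# Hypothetical genus-level mean relative abundances (%) and p values (placeholders)
genera = {
    #  genus              WM     PM     p
    "Bacillus":          (0.80,  0.30,  0.02),
    "Methylobacterium":  (0.15,  0.05,  0.03),
    "Emticicia":         (0.02,  0.09,  0.01),
    "Filimonas":         (0.01,  0.04,  0.02),
}

enriched_in_WM, enriched_in_PM = [], []
for genus, (wm, pm, p) in genera.items():
    fold = wm / pm
    if p < 0.05 and fold >= 2:        # at least 2-fold higher in WM
        enriched_in_WM.append(genus)
    elif p < 0.05 and fold <= 0.5:    # at least 2-fold higher in PM
        enriched_in_PM.append(genus)

print("Enriched in WM:", enriched_in_WM)
print("Enriched in PM:", enriched_in_PM)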
Plant pathogenic genera enriched in PM, such as Panacagrimonas and Filimonas, may also contribute to the earlier flowering time, because infected hosts preferentially allocate resources toward reproduction [37,38]. This hypothesis linking N availability to flowering time is discussed further below within the newly proposed molecular network that modulates flowering time. Verification of microbial function We unambiguously demonstrated that flowering time was directly associated with the rhizosphere microbiota. Microbiota from the third-generation pgr5 (PM) or Wt cultures (WM) was used to inoculate cultures of three Arabidopsis lines (Wt and two mutants of the photosynthetic apparatus, pgr5 and pnsB4, deficient in cyclic electron flow from NADPH to plastoquinone) for one generation. The microbiota of the third-generation Wt culture delayed flowering time and increased shoot growth in all cases compared to the treatment using microbiota from the pgr5 cultures. The addition of Wt microbiota delayed flowering by 3.3, 5.5, and 5.7 days in Wt, the pnsB4 mutant, and the pgr5 mutant, respectively (Fig. 2a). The shoot fresh weight of the plants treated with Wt microbiota also increased significantly in the three lines compared to the treatments with the addition of pgr5 microbiota (Fig. 2b). These results clearly indicate that flowering time can be affected by the rhizosphere microbiota. This effect disappeared when the soil slurry was sterilized before inoculation, indicating that heat-stable exudates alone were not modulating Arabidopsis flowering time (Fig. 2c, d). It is interesting that the difference between WMand WM-S (sterilized soil slurry)-treated plants was not very obvious (Additional file 1: Figure S4). The reason may be that sterilized soil slurry included more N, and other nutrients to influence flowering time, which was in accordance with the results in Fig. 3. Therefore, we speculated that the metabolites in the sterilized slurry also played a role in influencing the flowering time and plant stature. The addition of the two soil slurries did not change bulk-soil pH, available soil K or P contents (Additional file 1: Table S4), the abundance of key genes involved in the C cycle, or the activities of β-glucosidase or chitinase in the rhizosphere (Additional file 1: Figure S2 and S3) but did affect the abundance of genes (normalized to the abundance of 16S rRNA gene) involved in N cycling (Fig. 2e-h) and the amounts of NH 4 + (Fig. 3A) and NO 3 − (Fig. 3C) in soil. The concentration of bioavailable N species was generally lower after the addition of the pgr5 microbiota compared to the treatment with the addition of Wt microbiota (Fig. 3A, C). This decrease in N availability in the cultures treated with pgr5 microbiota was accompanied by an increase in the abundance of genes involved in denitrification (nirK and nosZ) and a decrease in the abundance of genes involved in nitrogen fixation (nifH) and nitrification (amoA) compared to the Wt-treated groups (Fig. 2e-h). In the Wt-treated group, the activity of urease was higher ( Fig. 3B) and nitrate reductase was lower (Fig. 3D) than those in the PM group, which probably resulted in higher NH 4 + and NO 3 − . Several lines of evidence support the hypothesis that the rhizosphere microbiota modulated N cycling and bioavailability, leading to N deficiency earlier in the pgr5-treated groups and thus earlier flowering. 
Flowering can be triggered by low nitrate levels [39], but plants maximize growth before flowering under conditions of N sufficiency [25].

Different root exudates in the two Arabidopsis lines

Root exudates can act as key substrates or signaling molecules that affect microbial composition [27], so we tested the hypothesis that exudate concentrations and compositions differed between the Arabidopsis lines. A metabolomic analysis found that 34 exudates involved in 10 metabolic pathways were differentially released in the two lines (Wt and pgr5) (see Additional file 1: Table S5, the principal component analysis in Fig. 4a, and ≥ 2- or ≤ 0.5-fold changes and p values < 0.05 in Fig. 4b). Four of the 10 biochemical pathways were upregulated in the Wt cultures relative to the pgr5 cultures (Fig. 4b). Thymine was the most differentially released exudate (Additional file 1: Table S5). Thymine can be degraded by bacteria, perhaps accounting for the increase in NH4+ content in the WM groups (Fig. 3A). Trp and its derivatives, phenols, and some carboxylic acids were preferentially exuded in the Wt cultures (Additional file 1: Table S5). The concentrations of amino acids were generally higher in the Wt cultures, consistent with a higher abundance of Bacillus subtilis in the Wt cultures, for which amino acids are chemoattractants. These metabolomic results are consistent with the hypothesis that differences in exudates affect rhizosphere microbiota [40].

IAA delayed flowering time by downregulating genes involved in flowering

Auxins regulate plant growth in many ways [13,41,42]. One of the important auxins for plants is IAA, which is soluble in aqueous solutions and, when protonated, diffuses passively across cell membranes without the need of a specific transporter [43]. IAA has also been hypothesized to have a floral-inductive signaling role by regulating multiple aspects of embryonic and postembryonic development [44,45]. Microorganisms can produce IAA from Trp [46][47][48][49]. One of the enriched rhizosphere microorganisms, Arthrobacter (Additional file 1: Table S3), has been reported to be beneficial for plant growth by having the ability to produce IAA [16]. Trp and its derivatives were enriched in the Wt exudates, so the generation of IAA by microorganisms may control flowering time by a novel molecular network. Trp content in the soil of generation 3 of the Wt cultures decreased, and the IAA content increased 3.03-fold, suggesting that the Wt microbiota rapidly converted Trp into IAA (Fig. 4c, d). We explored the possibility that microbially generated IAA delayed flowering time by adding IAA to hydroponic cultures of the Wt line. Adding 5 and 25 nM IAA decreased the bolting proportion of Wt Arabidopsis by 20-30% (Fig. 5a) when 80% of the control plants had floral buds, implying that IAA was the direct driver that delayed flowering time. Changes in the expression of genes involved in flowering further supported the role of IAA in regulating flowering time (Fig. 5b). IAA treatment induced changes in the relative transcription rates of genes associated with flowering. The rates for some of the genes comprising the autonomous, GA, and vernalization pathways changed significantly, and these changes were consistent with the delayed-flowering phenotype. Shimada et al. [14] reported that IAA relieved the inhibitory effects of aspterric acid on pollen growth and thus speculated that IAA accelerated reproductive growth in Arabidopsis.
To our knowledge, our microcosm study is the first to demonstrate that IAA is one critical signal that delays flowering in Arabidopsis, although Wagner et al. [50] also found a similar phenomenon, while not proposing any related mechanism. Combined with earlier reports, the weight of evidence suggests that IAA stimulates the development of floral organs but delays the flowering time of Arabidopsis. Our data strongly suggest that IAA delays flowering and acts as a signal of optimal growth conditions in the absence of N limitation for growth.

Conclusions

Identifying the community/function relationships for rhizosphere microorganisms and their interaction with plant physiology is critical for determining the role of the plant microbiome in regulating biogeochemical cycles, plant growth, and phenology (Fig. 6). We identified a novel metabolic network in which exudates affect plant rhizosphere microbiota, which can then modulate flowering time by IAA production and can also affect vegetative growth by influencing N availability. IAA-promoted plant growth is expected to further stimulate exudation and hence retroactively affect flowering time in a positive feedback mechanism. Our results have important implications for our understanding and modeling of plant phenology and are of great interest for the biotechnology sector seeking to increase crop potential.

Seedling culture

Arabidopsis seeds (wild-type Col-0 (Wt) and the pgr5 mutant, deficient in PGR5-dependent cyclic electron flow from ferredoxin to plastoquinone) were surface sterilized to avoid bacterial contamination on solid medium and vernalized as described by Sun et al. [51]. Given that photosynthesis is the main source of exudates, a photosystem mutant with the same genetic background as the Wt was selected to induce different exudates. Vernalized seeds were cultured in Petri dishes containing Murashige and Skoog (MS) medium under sterile conditions at 25°C at a light intensity of 300 μmol photons/m2/s and a 12:12-h light to dark photoperiod. The MS medium containing 3% sugar and 0.5% agar was autoclaved at 115°C for 30 min before use. Two-week-old aseptic seedlings were transplanted into polycarbonate pots (400 mL) containing autoclaved potting-mix soil (Sun Gro Horticulture, MA, USA).

Microcosm experiments across generations

Approximately 30 g of grassland soil collected near the Zhejiang University of Technology, China (30°17′ 45.11″ N, 120°09′ 50.07″ E) was mixed with 200 mL of sterile water by vigorous shaking for 60 s [25]. Ten-milliliter samples of soil slurry were added to polycarbonate pots transplanted with 20 seedlings of either the Wt or the pgr5 Arabidopsis line. Nine replicate pots were used for generation 1 for each line and for a control treatment (n = 9) without added soil slurry. Plants grew in an artificial greenhouse at 25 ± 0.5°C and 80% relative humidity under cool-white fluorescent light (300 μmol photons/m2/s) with a 12:12-h light to dark cycle. The plants and soils were harvested when 80% of the plants had floral buds of 1 ± 0.1 cm or larger (measured from the center of the rosette), and the time to flowering was recorded. Large soil aggregates that were loosely bound to the roots were first removed by shaking, and 30 g of the tightly bound rhizosphere soil was then collected (see the "Collection of rhizosphere soil" section) and added to 200 mL of sterile water, producing a soil slurry for generation 1.
This slurry was used for inoculating the Arabidopsis lines for the second and third generations, as described above. Treatments inoculated with the microbiota of Wt and the pgr5 mutant were designated WM and PM, respectively. The generations in our study refer to the propagation of soil microbes only, and all seeds were from a common stock. At flowering stage and harvest time, the physiological parameters including the number of days before flowering, fresh weight, number of rosette leaves, and number of siliques were determined in wild-type (Wt) and pgr5 mutant ecotypes grown in microcosms for each generation. Collection of rhizosphere soil Rhizosphere soil was collected as described by Bulgarelli et al. [9]. Four-centimeter sections of roots were collected immediately below the rosette, and the roots from each pot were transferred to a 50-mL centrifuge tube containing 20 mL of sterile phosphate buffer saline (PBS; 137 mM sodium chloride, 10 mM phosphate buffer, 2.7 mM potassium chloride; pH 7.3-7.5). The centrifuge tubes were shaken at 40 g for 20 min on an orbital shaker. The roots were removed, the solution was centrifuged at 1000g for 20 min, and the pellet of rhizosphere soil was recovered. All samples of rhizosphere soil were frozen in liquid nitrogen and stored at − 80°C until analysis. 16S rRNA gene sequencing The frozen soil samples were thawed on ice, and DNA was extracted using a PowerSoil DNA Isolation Kit (MO BIO Laboratories, Inc., Carlsbad, USA). 16S rRNA genes were amplified using EXtaq enzyme (TaKaRa, Kyoto, Japan) and the specific primers 314F (5′-CCTA CGGGNGGCWGCAG-3′) and 805R (5′-GACTACHV GGGTATCTAATCC-3′) with the adapter (index) that targets the V3 and V4 variable regions of bacterial/archaeal 16S rRNA genes. Strongly amplified products 460 bp in length were chosen for further experiments. The amplicons were quantified with a Qubit 2.0 fluorometer (Thermo Fisher Scientific, USA), diluted to 1 ng/μL, and sequenced on a MiSeq platform (PE250). In total, 854,514 raw reads were obtained from the 18 samples of rhizosphere soils (6 groups as shown in Fig. 1a, n = 3). We computed operational taxonomic units (OTU) and microbial diversity as described previously [52]. Rarefaction curves of observed species are shown in Additional file 1: Fig. S5. Measurements in the dissolved phase of the soil samples Physicochemical (nutrient contents and pH) and biological (enzymatic activities and concentrations of IAA and Trp) variables were measured in the dissolved phase of the soil samples. After three generations of Arabidopsis cultures (Fig. 1a), soil samples were then collected randomly from each pot, air-dried, homogenized, and sieved to obtain particles < 1 mm. The activity of soil enzymes (β-glucosidase, chitinase, urease, and nitrate reductase) and soil N content (NH 4 + and NO 3 − contents) were subsequently measured using 0.5 g of soil (n = 4) following the manufacturer's instructions of corresponding commercial reagent kits (COMIN, Suzhou, China). The concentrations of two key metabolites (IAA and Trp) were measured in the soil dissolved phase. Five grams of soil was mixed with 5 g of NaCl and 20 mL of acetonitrile for 5 min, followed by centrifugation at 1000g for 10 min. The concentrations of IAA and Trp in the supernatants were measured by Liquid Chromatography-Electrospray Ionization-Mass/Mass Spectroscopy (LC-ESI-MS/MS, Q-Trap 5500, Agilent Technologies, USA). 
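The OTU-based richness and diversity metrics referred to in the 16S rRNA analysis above (number of observed species, Shannon index, and estimators such as Chao1) are computed from the per-sample OTU count table. The Python sketch below shows the standard formulas on a made-up count vector; it is only an illustration of the calculations, not the pipeline of reference [52] that was actually used.

import numpy as np

# Hypothetical OTU counts for one rhizosphere sample (placeholder data)
otu_counts = np.array([120, 80, 45, 30, 12, 5, 3, 1, 1, 1])

observed_species = np.count_nonzero(otu_counts)

# Shannon index: H = -sum(p_i * ln(p_i)) over OTUs present in the sample
p = otu_counts[otu_counts > 0] / otu_counts.sum()
shannon = -np.sum(p * np.log(p))

# Bias-corrected Chao1 estimator: S_obs + F1*(F1 - 1) / (2*(F2 + 1)),
# where F1 and F2 are the numbers of singleton and doubleton OTUs
f1 = np.sum(otu_counts == 1)
f2 = np.sum(otu_counts == 2)
chao1 = observed_species + f1 * (f1 - 1) / (2 * (f2 + 1))

print(f"observed species = {observed_species}, Shannon H = {shannon:.2f}, Chao1 = {chao1:.1f}")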
Collection, measurement, and identification of root exudates Arabidopsis seedlings (Wt and pgr5 mutant) reached the bolting stage after approximately 40 days of culture and were then transferred to glass containers containing 40 mL of ddH 2 O [27]. After 3 days of culture in the glass containers, culture media containing exudates were passed through 0.45-μm filter membranes. Approximately 35 mL of the solution was freeze-dried, dissolved in 500 μL of 80% methanol, and derivatized with Bis (trimethylsilyl) trifluoroacetamide and 1% chlorotrimethylsilane. The derivatized samples were analyzed by gas chromatography/mass spectroscopy GC-MS (Agilent 7890B gas chromatographic system coupled to an Agilent 5977A Mass Spectrometry Detector (Agilent, USA)). The potential involvement of all identified exudates in Arabidopsis biochemical pathways was subsequently determined by reference to the KEGG database [52]. Total RNA was isolated from the plants and then reverse transcribed into cDNA for qRT-PCR analysis using the protocol described by Chen et al. [11] and the primer pairs in Additional file 1: Table S6. We studied the transcription of six genes in the autonomous pathway (FCA, FLD, FPA, FVE, FY, and LD), three genes in the gibberellin acid (GA) pathway (GAI, GA1, and RGA), six genes in the vernalization pathway (FR1, VRN1, VRN2, VIN3, and PIE1), five genes in the photoperiod pathway (CCA1, CO, GI, LHY, and TOC1), and five genes in the floral integrator pathway (FLC, FT, SOC1, AGL-24, LFY, and AP1). Data analysis and statistical methods For the biochemical and physiological measurements, analysis of variance followed by the Dunnett's post hoc test was performed to evaluate the statistical significance among data using the StatView 5.0 program (Statistical Analysis Systems Institute, Cary, NC, USA). Means among treatments were considered significantly different, when the probability (p) was less than 0.05. All analyses were performed in triplicate unless otherwise stated. All data are presented in the tables and figures as mean ± SD (standard deviation). Additional files Additional file 1: Figure S1. Rhizosphere microbiota richness and diversity. Figure S2. Abundance of carbon cycle-related genes in rhizosphere soil. Figure S3. Activities of carbon cycle-related enzymes in rhizosphere soil. Figure S4. Comparisons between WM-and WM-S-treated plants. Figure S5. Rarefaction curves of observed species. Table S1. Physiological parameters of Arabidopsis in three generations. Table S2. Significant enrichment of rhizosphere microorganisms in the third generation. Table S3. nriched rare microorganisms in Wt and pgr5 Arabidopsis. Table S4. Bulk soil properties measured after addition of WM and PM soil slurries. Table S5. Comparison of root exudates between Wt and pgr5 mutant Arabidopsis. Table S6
v3-fos-license
2017-06-19T17:22:14.854Z
2014-04-08T00:00:00.000
26027397
{ "extfieldsofstudy": [ "Medicine", "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcmededuc.biomedcentral.com/track/pdf/10.1186/1472-6920-14-225", "pdf_hash": "c72422ef1bbe1da212a38f1421cf20037a903695", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:306", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "292ff411ae7bf8f8120f0a07ee54244780d68f10", "year": 2014 }
pes2o/s2orc
National survey of UK medical students on the perception of neurology Background Medical students perceive neurology to be a difficult subject, a phenomenon described as “neurophobia”. Studies investigating student attitudes towards neurology have so far been limited by small sample sizes as a consequence of being conducted within a single medical school or region. We aimed to conduct the first national survey of the perception of neurology among UK medical students. Methods A 24 question online survey was designed and distributed in the form of a web-link to all UK medical schools. Responses were collected for 10 weeks with reminders sent at 3 and 6 weeks. A prize-draw of £300 was offered upon completion of the survey. Results 2877 medical students from 25 of 31 medical schools responded. Students found neurology to be significantly more difficult than other specialties and were least comfortable drawing up a neurological differential diagnosis compared to other specialties (p < 0.0001 for neurology vs. each of the other specialties). Neuroanatomy was regarded as the most important factor contributing to neurology being perceived as difficult. Conclusions The findings of the first national survey addressing this issue are consistent with previous research. The perception of neurology remains unchanged, in contrast to the rapidly changing demands of neurological care in an ageing population. Neurological examination and formulating a differential diagnosis are important skills in any medical specialty, and combatting “neurophobia” in medical students is therefore essential. Electronic supplementary material The online version of this article (doi:10.1186/1472-6920-14-225) contains supplementary material, which is available to authorized users. Background A fear of neuroscience and neurology among medical students has long been recognised and the term neurophobia was coined as long ago as 1994 [1]. Studies have suggested it is endemic among medical students and junior doctors, and associated with deficiencies in medical education [2][3][4][5]. In response, educational bodies have implemented strategies to improve the perception and experience of neurology in medical training [6,7]. This is of particular importance given the recently modified UK medical training programme which encourages trainees to develop their career pathways early, with many students developing areas of interest as undergraduates [8]. However, the results of studies thus far have been limited by small sample sizes (with surveys typically conducted within a single medical school) and none have addressed the issue at a national level [2][3][4][5]. The data available may therefore be biased and it is unclear whether there has been a change in attitudes over time. We aimed to conduct the first national survey to determine the perception of neurology among medical students across all UK medical schools and to identify the factors influencing these views. Methods A 24 question online survey was designed by medical students, neurology trainees, neurologists and neuroscience researchers from four UK medical schools. To ensure suitability and clarity of the questionnaire the survey was piloted with 10 medical students not previously involved in its design. The survey was also sent to the Association of British Neurologists for review which led to some further minor revisions. The questionnaire is available as Additional file 1. A variety of questions were used including multiple choice, short answer questions and Likert scales. 
Students were asked to rate neurology compared to six other specialties (gastroenterology, respiratory medicine, cardiology, geriatrics, rheumatology and endocrinology) in the following areas: difficulty learning the specialty, comfort in the relevant examination, developing a differential diagnosis and the quality of teaching received. The studied specialties and the areas explored were selected to ensure consistency with previous smaller surveys investigating the perception of neurology [2]. There was an option to provide free text responses to some questions. The final survey was distributed in the form of a web-link to UK medical students by emailing the administrative office and the undergraduate medical student society of every UK medical school with a request to distribute it to medical students. An option of entering a prize-draw of £300 was offered upon completion of the survey. The survey was set to allow only one response per computer. The survey was distributed in May 2013 and reminders were sent 3 and 6 weeks following the initial request. Responses were collected for 10 weeks from date of first distribution. The independent sample t test was used to compare the difference in mean score for neurology and each of the other specialties separately. Fisher's exact test was used to assess the significance of factors influencing the likelihood of a student wishing to pursue neurology as a career. Ethical approval for the study was provided by the University of Oxford, reference MSD-IDREC-C1-2014-121. Results 2877 students (61.6% female and 38.4% male) from 25 out of 31 UK medical schools responded, representing approximately 7% of UK medical students. There was no notable difference in the size or location of schools which did not respond; the non-responding schools were from England, Scotland, Wales and Northern Ireland. The UK medical school training programme is typically a 5 year undergraduate degree (or 4 year graduate course) with the possible addition of an intercalated year. The median number of students responding in 25 schools was 82 (range 2-317; mode 81). The average age of respondents was 22.6 years. 83% of respondents were undergraduates, 7% were mature undergraduate students (defined as a student aged 21 or over at the start of their studies) and 10% were graduate students. Incomplete responses occurred for some questions; however, a particular pattern of nonresponse was not evident. Students found neurology to be significantly more difficult (mean score = 3.47, 95% confidence interval (95% CI) = 3.43 to 3.51) than any other specialty (p < 0.0001 for neurology vs. each of the other specialties) ( Figure 1a) and also reported being the least comfortable in drawing up a differential diagnosis from a presentation of neurological symptoms (mean score = 2.96, 95% CI = 2.92 to 3.00) compared to other specialties (p < 0.0001 for neurology vs. each of the other specialties) (Figure 1b). Neuroanatomy was identified as the biggest factor making neurology difficult with 70% rating it a large or very large contributor to the level of difficulty, followed by basic neuroscience (45%) and lack of diagnostic certainty (40%). The level of comfort in examining neurological patients and the quality of teaching received in neurology was rated higher than endocrinology, geriatrics and rheumatology (Figure 1c-d). 
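As stated in the methods above, mean Likert ratings for neurology were compared with each other specialty using independent-samples t tests, and results are reported as means with 95% confidence intervals. The Python sketch below illustrates that comparison; the rating arrays are random placeholders standing in for the per-student responses, which are not available here.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 1-5 Likert difficulty ratings (placeholders, not survey data)
neurology = rng.integers(1, 6, size=2877)
cardiology = rng.integers(1, 6, size=2877)

# Independent-samples t test, as used for neurology vs. each other specialty
t_stat, p_value = stats.ttest_ind(neurology, cardiology)

# 95% confidence interval for the mean neurology rating
mean = neurology.mean()
sem = stats.sem(neurology)
ci_low, ci_high = stats.t.interval(0.95, len(neurology) - 1, loc=mean, scale=sem)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")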
Upon including only the 1461 respondents who had at the time of the survey completed both the preclinical neuroscience and clinical neurology components of their course, students still found neurology to be significantly more difficult (mean score = 3.37, 95% CI = 3.31 to 3.43) than any other specialty (p < 0.0001 for neurology vs. each of the other specialties). These students were also significantly less comfortable in drawing up a differential diagnosis from a presentation of neurological symptoms (mean score = 3.22, 95% CI = 3.17 to 3.27) compared to all other specialties (p < 0.0001 for neurology vs. each of the other specialties), except for endocrinology (p = 0.0712). In the "open comments" section it was clear that students felt that there was a lack of integration between pre-clinical neuroscience and clinical components of neurology training, as well as an insufficient length of time dedicated to neurology in the medical course, which in some schools is not a distinct clinical rotation but integrated into other medical attachments. Regarding the possibility of a career in neurology, students ranked neurology higher than rheumatology, endocrinology and geriatrics as a prospective career. Respondents also considered it to be associated with good or very good research opportunities (75%), prestige (68%) and the ability to make a significant difference to patients' lives (64%). Job satisfaction and ability to make a significant difference to patients' lives were the most likely factors to persuade students to pursue a career in neurology (32% and 30% respectively). On the contrary, 26% thought there was a poor or very poor ability to maintain work-life balance, which was further shown to be the most likely factor to dissuade students from pursuing a career in neurology (43%). These data are presented in Figure 2. The following factors were associated with a significantly increased likelihood of pursuing neurology as a career: being male (p < 0.0001), personal experience caring for a relative or friend suffering from a neurological disorder (p = 0.04) or caring for someone suffering from a neurological disorder through volunteer work in a healthcare environment (p < 0.0001). Among students who had completed both the preclinical neuroscience and clinical neurology components of their course, 35% of students felt that the amount of planned neurology teaching was too little and more/ improved bedside teaching was ranked as the factor which would most improve the neurology training in medical school. Further, 42% reported having not had the opportunity to receive additional neurology teaching beyond the course curriculum, 27% had not met a neurologist who inspired them, 26% did not feel confident they knew what neurologists do and 20% reported having not had the opportunity to carry out a clinical placement in neurology. Discussion Our study is the largest and the first national study to investigate perceptions of neurology by medical students. We have confirmed previous findings that students consider neurology to be the most difficulty specialty to learn and the one in which they feel the least comfortable establishing a differential diagnosis. Although we have only surveyed students, other work has indicated that neurophobia persists after qualification [2,7]. 
A study of general practitioners, who are often the sole providers of care for common neurological conditions such as migraine, reported a lack of confidence in addressing neurological complaints, likely to result in increased referrals and demand on specialist care [9,10]. The fact that neuroanatomy and learning basic neurosciences were identified as the most important driving factors of the difficulty of neurology highlights the need to reduce the substantial time gap between basic neuroscience and clinical teaching at many medical schools and to adopt a more integrated structure. Resources through which to teach and enable an understanding of neuroanatomy may be scarce and likely to be helped by the use of online resources. Similarly, the use of available online videos and demonstrations are likely to aid teaching of the neurological examination. An American study investigating alternative methods of teaching neurology found that 6 years following the implementation of an e-textbook student satisfaction had risen, and it was identified to be an effective tool to aid the teaching of neurology [11]. Medical schools should ensure that they assess the perception of neurology among their students, collect feedback and subsequently assess the effectiveness of any interventions over time. The widespread scale of neurophobia warrants a national initiative and we propose the establishment of a massive open online course for largescale participation aimed at teaching functional neuroanatomy around the neurological examination. Further, our findings demonstrate that the perception of neurology across more than a decade has remained unchanged, in sharp contrast to the rapidly changing demands of neurological care. The growing social and financial burden of an ageing population with chronic neurological diseases, particularly neurodegenerative disorders, and a relative shortage of neurologists in the UK has highlighted the need for a multidisciplinary approach to their care [12,13]. Management of neurological diseases will be an unavoidable reality for most future doctors and focusing on the effective development of neurological skills is likely to be a cost-effective measure in providing optimal early care and appropriate referral. The perceived level of difficulty of neurology is not reflected by the interest in the subject. This is encouraging and in line with findings from previous work, suggesting that students want to grasp the subject and may be motivated to work hard at it [2,4,14]. The finding that one in four students thought there was a poor or very poor ability to maintain work-life balance in neurology is perhaps surprising, and a potential misconception which could be addressed through career talks. Limitations of this study include the response rate (as despite being the largest survey, only 7% of medical students responded) and the fact that we cannot exclude the possibility of institutional bias (as response rates were not equal across institutions), responder bias and acquiescence bias (a tendency to respond positively to survey questions, which we tried to dissipate as much as possible through options including for example "don't know" and "neither likely or unlikely"). Conclusions In conclusion, we report that students perceive neurology to be the most difficult specialty to learn and in which to formulate a differential diagnosis. 
We encourage neurologists and course organisers to work towards greater understanding of neuroanatomy and basic neuroscience through integration of pre-clinical and clinical neurology teaching, for example by increasing case-based/ bedside teaching and ensuring teaching remains relevant and focused on the most important principles. Ensuring that medical students are comfortable with the neurological examination and diagnosis is a necessary priority not only for medical students wishing to pursue a career in neurology, but for many other specialties, particularly primary care. Additional file Additional file 1: Survey questionnaire.
v3-fos-license
2019-11-29T15:12:29.214Z
2019-11-28T00:00:00.000
208338085
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://josr-online.biomedcentral.com/track/pdf/10.1186/s13018-019-1367-7", "pdf_hash": "61396b2264dbd57f0c6655259335e0caa3bd0701", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:309", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "sha1": "61396b2264dbd57f0c6655259335e0caa3bd0701", "year": 2019 }
pes2o/s2orc
Instability of the proximal radioulnar joint in Monteggia fractures—an experimental study Background A Monteggia fracture is defined as a fracture of the proximal ulna combined with a luxation of the radial head. The aim of the present work is to evaluate the extent of instability of the radius head in the proximal radioulnar joint (PRUJ) as a function of the severity of elbow fracture and ligamentous injury in an experimental biomechanical approach. Methods Eight fresh-frozen cadaver arms were used. All soft tissues were removed except for the ligamentous structures of the PRUJ and forearm. A tensile force of 40 N was exerted laterally, anteriorly or posteriorly onto the proximal radius. The dislocation in the PRUJ was photometrically recorded and measured by two independent examiners. After manual dissection of the ligamentous structures up to the interosseous membrane, the instability was documented and subsequently measured. The following dissection levels were differentiated: intact ligamentous structures, dissection of annular ligament, oblique cord and proximal third of interosseous membrane. Results An anterior instability remains relatively constant until the proximal third of the interosseous membrane is dissected. The radial head already dislocates relevantly in the posterior direction after dissection of the annular ligament with an additional considerable stability anteriorly and laterally. Subsequently, the posterior instability increases less pronouncedly in regard of distal resected structures. The lateral instability increases constantly during the progressing resection of the ligamentous structures. Conclusion On the one hand, a complete healing of the ligament injury after functional treatment is hardly conceivable with ligamentary damage up to the level of the proximal interosseous membrane. A remaining instability of the proximal radius could therefore be a possible cause for the unsatisfactory clinical results after certain Monteggia fractures. On the other hand, the present study may give a possible explanation (i.e. early dorsal radius head dislocation after dissection of annular ligament) why the Bado II injury is the most frequent type of Monteggia fractures. Introduction A Monteggia fracture is defined as a fracture of the proximal ulna combined with a dislocation of the radial head [1]. The current operative treatment of these injuries leads to favourable clinical results in the majority of cases. However, there are some injuries that do not have a favourable outcome. The knowledge of the fracture morphology and its involved structures are therefore important preconditions for a successful therapy [2]. Monteggia fractures are usually associated with a dislocation in the proximal radioulnar joint (PRUJ) [2,3]. The most commonly used classification according to Bado describes in four subtypes the direction of the radius head dislocation and thus the angulation of the ulna fracture [4]. The posterior Monteggia injury (Bado type II) is additionally classified according to Jupiter into four subtypes and describes the accompanying ulna fracture or radius head injury [5]. The associated extent of the capsuleligament injury can only be assumed. While during childhood the injury often heals with very good results, a complicative healing process is often observed for injuries in adults [6]. There is an agreement in the literature that the precise anatomical reconstruction of the ulna fracture is the key to a successful surgical therapy [7]. 
In general, the interosseous membrane that remains intact distal to the ulna fracture leads to a reduction of the radius head in the elbow joint or in the PRUJ. In order to dislocate the radius head with intact capsule-ligament structures of the humeroulnar joint, the ligament connections between radius and ulna (consisting of the annular ligament, chorda obliqua and the proximal part of the interosseous membrane) must rupture at the level of the ulna fracture (Fig. 1). The part of the interosseous membrane distal to the fracture usually remains intact. Even after anatomical reduction and fixation of the ulna fracture, a persistent instability of the PRUJ can remain (Fig. 2). It is not known whether the torn ligament connections between the radius and ulna actually heal to a stable condition without surgical revision and under the obligatory early functional treatment. A standard surgical refixation of the torn annular ligament is not recommended [2,8]. However, it can be assumed that with ulnar osteosynthesis alone, an instability of the radius head will remain in the PRUJ and in relation to the humeral capitulum. It can also be assumed that its extent increases significantly from proximal to distal depending on the level of the ulna fracture. The aim of the present work is to evaluate the extent of instability of the radius head in the PRUJ as a function of the severity of elbow fracture and ligament injury in an experimental and biomechanical approach.

Specimens

Eight fresh-frozen cadaver arms were provided by the Institute of Anatomy of the university clinic, Technical University Dresden, Germany. The specimens were frozen at − 22°C (Liebherr Typ 40073 1, Germany). During preparation, all soft tissues were removed except for the ligament structures of the PRUJ and forearm. The distal ulna was solidly clamped in a vise. To reduce variation in stability, the preparation was carried out by a single senior orthopaedic surgeon in a standardized fashion. Todisco and Trisi had already proven that Hounsfield units (HU) measured in CT correlate highly with bone mineral density [9]. Therefore, the bone density of the specimens was measured using quantitative computed tomography (Somatom CT, Siemens, München, Germany; technical specifications: CTDI 4.53 vol mGy, kV 80, mAs 180, 0.75 mm layer thickness). The bone density of all used proximal ulnas was on average 596 ± 127 (min 495, max 891) HU.

Test setup and intervention

A 4.5-mm Schanz screw was inserted in a vertical direction and perpendicular in the horizontal plane. Clockwise markings at 3, 6 and 9 o'clock were applied to the radius head. Furthermore, the lowest point in the PRUJ was marked as a reference (Fig. 2). By means of a mechanical force measuring device (PGH, Kraftmessgeraete, Halle [Saale], Germany), a tensile force of 40 N was exerted laterally, anteriorly or posteriorly onto the proximal radius, and the resulting dislocation in the PRUJ was photometrically recorded and measured by two independent examiners. Statistical analysis was performed with SPSS Statistics software (version 25; IBM, Armonk, NY, USA) for descriptive statistics. The significance level was chosen at p < 0.05. All data are presented as mean with standard deviation, minimum and maximum. Univariate analysis of variance was carried out to compare the different instabilities.

Results

The average age of the donors was 81.6 ± 9.4 (62-92) years. Five donors were female and three male. All biomechanical tests were successfully completed without the Schanz screws or the holding device loosening.

Setting A (intact ligament structures)

With intact ligament structures there is almost no instability in the PRUJ.
It measures 1.5 mm (SD 1.08, min 0, max 2.7) in the anterior direction, 0.7 mm (SD 1.28, min 0, max 3.0) in the lateral direction and 1.6 mm (SD 1.57, min 0, max 3.9) in the posterior direction. There was no significant difference among these groups. Setting B (dissection of annular ligament) After dissection of the annular ligament, instability occurs mostly posteriorly and slightly laterally. An anterior instability is almost not measured. An instability of 1.8 mm (SD 1.7, min 0, max 4.2) in the anterior direction, 4.1 mm (SD 2.7, min 2.4, max 10.1) in the lateral direction and 13.9 mm (SD 4.05, min 10.8, max 22.9) in the posterior direction was recorded (Fig. 4). Setting C (dissection of the annular ligament and oblique cord) After dissection of the annular ligament and oblique cord, another posterior instability is generated. A lateral instability of 5.7 mm (SD 2.3, min 2.5, max 9.72), an anterior instability of 2.9 mm (SD 1.7, min 0, max 1.7) and a posterior instability of 17.5 mm (SD 6.3, min 10.5, max 26.5) were measured. Setting D (dissection up to proximal third of interosseous membrane) After the dissection of the proximal third of the interosseous membrane, a massive multidirectional instability was observed in the lateral direction with dislocation of the radius head in the PRUJ in the posterior and anterior direction. In detail, there was a lateral instability of 10.3 mm (SD 2.6, min 6.7, max 14.2), an anterior instability of 15.8 mm (SD 5.3, min 9.2, max 23.1) and a posterior instability of 23.9 mm (SD 12, min 10.1, max 45.2). Instability in regard of direction Considering the instability in regard of the direction, it is noticeable that the anterior instability remains relatively constant until the proximal third of the interosseous membrane is dissected (Fig. 5). This is also seen statistically with a significant increase of the instability when dissecting the interosseous membrane [p = .001]. The early subluxation of the radius head in the posterior direction after dissection of the annular ligament with considerable stability to anterior and lateral is remarkable. In the course of our examinations, the posterior instability increased in inverse proportion with initial large increase of instability and decreases in regard of the distally resected structures (Fig. 7). However, the successive instability is always significant ( Table 1). The lateral instability increases relatively constantly during the resection of the ligament structures distally. It is striking that a slight translation to the posterior direction always occurs with lateral traction. However, only the lateral offset was measured (Fig. 6). Discussion Precise ligamentous guidance of the radius rotating around the ulna is essential for free range of motion and painless strength of the forearm. The translation of the radius head during forearm rotation is therefore limited to only 1-2 mm for intact ligaments between the ulna and the radius bone [10,11]. In case of Monteggia fractures, besides the anatomical reconstruction of the ulna fracture, the objective of the treatment must be the sufficient healing of the ligamentous structures in the PRUJ and the interosseous membrane. In the literature, only three studies investigate experimentally the resulting instability in the PRUJ after cutting band structures [12][13][14]. All of these studies have evaluated the effect of ligamentary structure resection in regard of the stability in the PRUJ. 
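The statistical comparison of the dissection settings above (descriptive statistics followed by univariate analysis of variance at p < 0.05) can be illustrated with a short script. This is only a sketch: the per-specimen displacement values are not published, so the arrays below are hypothetical placeholders for the eight arms, and a one-way ANOVA on independent groups only approximates the repeated-measures design actually used (all four settings were measured on the same specimens).

```python
# Hypothetical illustration of the descriptive statistics and univariate ANOVA
# used to compare posterior PRUJ instability across dissection settings A-D.
# The eight values per setting are placeholders, NOT the study's raw data.
import numpy as np
from scipy import stats

posterior_mm = {
    "A_intact":            np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.9]),
    "B_annular_ligament":  np.array([10.8, 11.5, 12.0, 13.0, 14.0, 15.5, 17.0, 22.9]),
    "C_plus_oblique_cord": np.array([10.5, 13.0, 15.0, 17.0, 19.0, 21.0, 24.0, 26.5]),
    "D_prox_interosseous": np.array([10.1, 14.0, 18.0, 22.0, 26.0, 30.0, 35.0, 45.2]),
}

for name, values in posterior_mm.items():
    print(f"{name}: mean {values.mean():.1f} mm, SD {values.std(ddof=1):.2f}, "
          f"min {values.min():.1f}, max {values.max():.1f}")

# Univariate (one-way) ANOVA across the four dissection levels, alpha = 0.05.
f_stat, p_value = stats.f_oneway(*posterior_mm.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}, significant: {p_value < 0.05}")
```

A repeated-measures ANOVA (for example with statsmodels' AnovaRM) would match the paired design more closely, since every setting was tested on the same eight arms.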
In the study according to Galik et al., the translation of the radius head increased from 1.6 ± 0.7 to 2.3 ± 0.9 mm in the mediolateral (ml) plane and from 2.1 ± 0.6 to 2.6 ± 0.9 mm in the anteroposterior (ap) plane after severing the annular ligament during pro-/supination [12]. A direct comparison to the present study is difficult because only the sum of the distance in one plane (ap and ml) was measured without the exact data for the anterior, lateral or posterior plane being given. In this study, however, the complete elbow joint in the 90°position with intact lateral collateral ligament was tested, which also makes the comparability difficult, because the 90°position of the elbow is a very stable position anyway when the primary stabilizing ligaments were not resected. A comparable experimental setup has been chosen in the study of Anderson et al. The forearm including the elbow joint was examined and the ulnar collateral ligament, the lateral ulnar collateral ligament (LUCL) and the joint capsule of the elbow were left intact during preparation [13]. After dissection of the annular ligament, chorda obliqua and proximal interosseous membrane, the dislocation of the radius head in the PRUJ was measured in the lateral direction. Even after the dissection of all structures except for the distal interosseous membrane, the maximum diameter was only 3 (SD 2) mm. Due to the intact primary ligamentary structures, the study is difficult to compare with the present study. However, there is no relevant instability in any direction in the PRUJ, which indicates in comparison to our study, that the not resected structures (ulnar collateral ligament, the LUCL and the joint capsule) contribute to considerable stability. In the present study, the instability of the PRUJ was therefore measured just by the use of forearm specimens without an annexed elbow joint and after resection of the medial und lateral ligament structures. The resulting instability of the radius head was more obvious in the experimental approach of Galik et al. [12]. The elbow joint with capsule and ligament structures remained intact and the specimen was clamped in 90°elbow flexion. The dislocation of the radius head in lateral, anterior and posterior plane after application of 20 N tensile force was measured and reported in percent of the diameter to the radius head. After dissection of the annular ligament, a significant lateral (46%) and posterior (37%) instability was measured, while stability in the anterior direction (8%) was retained. The same results were seen in the present study with no significant instability in the anterior direction and already subluxation of the radius head in the lateral and posterior direction. However, in the study of Hayami et al., it was larger in the lateral direction, while in the present study, the largest instability was evaluated in the posterior direction after dissection of the annular ligament [14]. Not until the separation of the proximal half of the interosseous membrane, a subluxation was observed in the anterior direction (39%) and even further in the lateral (154%) and posterior (200%) direction. In comparison to the present study, these results correspond precisely to the currently evaluated data. Also in the present study, a dislocation in the PRUJ in the lateral and posterior plane was evaluated significantly after resection up to the membrane interossea, whereas in the anterior direction, only a comparatively low dislocation was found. 
However, the results of these experimental studies can only be transferred to a very limited degree onto the instability of the PRUJ after Monteggia fractures. In particular, in the 90°elbow flexion with intact collateral ligaments, the guidance of the concave radius head on the convexity of the humeral capitulum may result in a considerable secondary stability in the frontal and sagittal planes. The dislocation of the radius head often leads to significant ruptures of the elbow joint capsule and the radial collateral ligament complex, so that articular guidance of the radius head is not possible even after stable ulna osteosynthesis (Fig. 2). The study has some limitations. On the one hand, in the present study, a different experimental setup was chosen (no 90°position of the elbow) and the primary and secondary stabilizing structures such as the collateral ligaments and joint capsule with the distal humerus were resected. However, we believe that a stability bias is created by the very stable 90°position of the elbow, especially since the relevant instabilities of the elbow are created starting at approximately 30°extension. On the other hand, compared to Hayami et al., we measured with double the force (20 vs. 40 N), so in the present study, the measured instability is higher compared to other studies [14]. Nevertheless, we believe that 40 N is more appropriate in relation to the forearm natural weight. A further limitation is the analogue, manual measurement of the instability by an image processing software, which can result in a latent inaccuracy. However, we have tried to reduce this by using two independent investigators. A measurement with an optical system would be preferable for future studies. Conclusion Based on our experimental observation and the study of Hayami et al., a complete healing of the instability of the radial head under functional treatment is hardly conceivable at least for ligamentous injuries up to the chorda obliqua or proximal interosseous membrane. A remaining instability of the proximal radius is a possible cause for the unsatisfactory clinical results after certain Monteggia fractures. Therefore, we recommend an intraoperative stress test of the PRUJ (equivalent to the syndesmosis stress testing) after anatomically stable osteosynthesis of the ulna, and, in case of persisting significant instability, an operative reconstruction of the annular ligament. In addition, the present study may give a possible explanation (i.e. early dorsal radius head dislocation after dissection of annular ligament) why the Bado II injury is the most frequent type of Monteggia fractures.
v3-fos-license
2022-07-01T15:18:15.955Z
2022-06-29T00:00:00.000
250151850
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphy.2022.943016/pdf", "pdf_hash": "08635e593348f0601e8c2d9a703a2708d9c061f9", "pdf_src": "Frontier", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:311", "s2fieldsofstudy": [ "Engineering", "Physics" ], "sha1": "46152e37704dacd2372b4bf564e501172f5f7aa2", "year": 2022 }
pes2o/s2orc
Real-Time Reconstruction of the Complex Field of Phase Objects Based on Off-Axis Interferometry Quantitative phase imaging (QPI) can acquire dynamic data from living cells without the need for physical contact. We presented a real-time and stable dynamic imaging system for recording complex fields of transparent samples by using Fourier transform based on off-axis interferometry. We calculated and removed the system phase without sample to obtain the real phase of the sample, so as to ensure that the system has the ability to accurately measure the phase. The temporal and spatial phase sensitivity of the system was evaluated. Benefit from the ability to record the dynamic phase and phase profile of a specimen, a standard sample (polystyrene microspheres) is investigated to demonstrate the efficiency of this imaging system and we have observed the variation of erythrocyte membrane during Red Blood Cells (RBCs) spontaneous hemolysis with different mediums. Experimental results indicate that the phase of non-anticoagulant RBC changed apparently than anticoagulant RBC and the system could be applied to real-time noninvasive and label-free identification of living cells. INTRODUCTION Most living cells are almost transparent when illuminated by visible light, essentially acting as phase objects. Same techniques such as phase-contrast microscopy and differential interference difference microscopy can carry out microscopic imaging of transparent samples, consequently revealing the structural details of biological systems [1,2]. In spite of this, the information of the illumination field obtained by these techniques is only qualitative, and it is difficult to describe the morphology of the sample quantitatively. Both non-interference and interference methods have been widely used in quantitative phase imaging of biological samples. For example, the microscopy based on intensity transfer equations can realize phase imaging of biological samples through a series of numerical operations [3][4][5][6]. However, it is limited by the complexity of the calculation process and the long time required. The advantages of digital holographic microscopy (DHM) are rapid, non-destructive, and high-resolution which is widely used in the study of cell structural characteristics, cell deformation, cell dynamics, etc., [7][8][9][10]. Meanwhile, it can also be combined with other technologies [11][12][13] to form a multi-mode microscopic imaging technology. The acquisition rate of this technique is limited only by CCD and has the ability to measure the morphological characteristics of living cells in real-time [14,15]. However, it should not be ignored that the realtime monitoring quality and the longest observation time of QPI will be limited by the overall stability of the imaging system [16,17]. In off-axis DHM, the low contrast of interference fringes usually reduces the phase sensitivity of the system. In addition, camera dark noise, read noise and other instrumental parameters may affect the measurement sensitivity of the system [18]. The spatial sensitivity of the system is easily affected by speckle noise factors such as scattering field of impurities on optical elements and random interference noise patterns generated by specular reflection of various surfaces in the system, or environmental factors such as mechanical vibration and air density fluctuations. Spatial light interference microscopy (SLIM) is considered as a method to reduce speckle noise inherent in laser light source [19]. 
In conclusion, it is significant to check and reduce the influence of noise and environmental factors on measurement sensitivity during post-processing. In the process of off-axis spatial filtering, the Fourier transform involved will also bring unnecessary noise to the image. Therefore, in order to ensure the spatial phase sensitivity of QPI, it is particularly important to subtract the background phase. In this paper, an off-axis real-time digital holographic microscopy system based on Mach-Zehnder interferometer was designed, to solve the above problems, we acquired the background phase firstly by dealing with the interference fringe without a sample then the phase caused only by the sample can be computed by subtracting the background phase. Consequently, the dynamic imaging of the phase only caused by the sample could be achieved. To verify the feasibility of the imaging system, we have demonstrated the experiments on polystyrene microspheres and red blood cells [20][21][22][23]. Experimental Setup A typical setup of off-axis interferometry is depicted in Figure 1A. A continuous wave (CW) laser (MRL-III-650L, Changchun new industry), which was used for the imaging Mach-Zehnder interferometer, was steered to the first nonpolarized beam splitter (NPBS), after which the beam was separated to perform off-axis interferometry. The sample was placed on a three-axis displacement table for wide-field illumination. An objective (Daheng Optics, GCO-213 40x, NA = 0.60) imaged the sample to a scientific complementary metal-oxide-semiconductor (sCMOS) camera which was positioned at the imaging plane of the objective, where an exact (magnified) replica of the sample field can be formed. The acquisition rate of the sCMOS that we used (PCO.Panda.4.2, Germany) is 48 frames/s when acquiring at the full resolution of 2048 × 2048 pixels. To produce a clean reference beam, a 20 μm pinhole was placed within the reference path at the common focus of a pair of lenses (L1 and L2) performing spatial filtering. Finally, the reference field was slightly tilted relative to the sample beam and interfered with the sample beam to form uniform phase modulation fringes in sCMOS. The standard Fourier transform (FT) algorithm was adopted to reconstruct the amplitude and phase [24]. Specifically, the camera respectively recorded a hologram with and without a sample and performed a fast two-dimensional Fourier transform (By selecting the higher-order information in the spectrum to fundamental and performing the inverse Fourier transform), by subtracting and unwrapping the phase information, the phase directly related to the sample can be obtained. Off-axis interferogram was recorded after optimized the fringe contrast with an exposure time of 10 ms. The temporal and spatial phase sensitivity was evaluated as shown in Figure 1B,D (a 30 s continuous and a series of different points measurement of the phase without samples). The absolute value of the phase is not meaningful, but the relative change of the phase is meaningful. The standard deviation of points was selected to demonstrate the time-space domain phase fluctuation, the smaller the value, the more stable the phase, which shows great temporal-spatial stability as shown in Figure 1C,E. 
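The Fourier-transform reconstruction summarised above can be sketched in a few lines of NumPy. This is a minimal illustration of the general procedure (crop one sideband of the hologram spectrum, re-centre it, inverse-transform, and divide by a background field recorded without the sample); the sideband position and window size are user-supplied assumptions, not parameters of the actual instrument, and the demonstration data are synthetic.

```python
# Minimal sketch of off-axis hologram reconstruction by Fourier filtering.
import numpy as np
from numpy.fft import fft2, ifft2, fftshift, ifftshift

def reconstruct_field(hologram, sideband_rc, half_width):
    """Return the complex field encoded in one off-axis sideband of the hologram."""
    spectrum = fftshift(fft2(hologram))
    r, c = sideband_rc
    h = half_width
    cropped = spectrum[r - h:r + h, c - h:c + h]           # isolate the +1 order
    recentred = np.zeros_like(spectrum)
    cr, cc = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    recentred[cr - h:cr + h, cc - h:cc + h] = cropped      # move the order to zero frequency
    return ifft2(ifftshift(recentred))                     # back to the spatial domain

def sample_phase(holo_sample, holo_background, sideband_rc, half_width):
    """Wrapped sample phase with the system/background phase removed."""
    u_s = reconstruct_field(holo_sample, sideband_rc, half_width)
    u_b = reconstruct_field(holo_background, sideband_rc, half_width)
    return np.angle(u_s / u_b)   # dividing the fields subtracts the background phase

# Synthetic demonstration: carrier fringes modulated by a Gaussian phase bump.
ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx]
phi = 2.0 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / (2 * 20 ** 2))   # "sample" phase
carrier = 2 * np.pi * (0.15 * x + 0.10 * y)
holo = 2 + 2 * np.cos(carrier + phi)        # hologram with sample
holo_bg = 2 + 2 * np.cos(carrier)           # hologram without sample
sideband = (128 + int(0.10 * ny), 128 + int(0.15 * nx))   # (row, col) of the +1 order
phase = sample_phase(holo, holo_bg, sideband, half_width=20)
print(f"recovered peak phase ≈ {phase.max():.2f} rad (expected ≈ {phi.max():.2f})")
```

For samples whose optical delay exceeds 2π, a phase-unwrapping step (for example skimage.restoration.unwrap_phase) would follow the background subtraction.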
Principle
For a spatially coherent imaging system, the intensity recorded at the sCMOS has the form

I(x, y) = |U_0|^2 + |U_1(x, y)|^2 + 2|U_0||U_1(x, y)| cos(2πf_x x + 2πf_y y + φ(x, y) + φ_n),

where |U_0|^2 and |U_1(x, y)|^2 represent the irradiance distributions of the reference and the sample beams, respectively, φ(x, y) represents the optical delay caused by the sample (the quantity of interest in the experiments), f_x and f_y represent the spatial frequencies of the fringes in the X and Y directions, and φ_n is the additional phase modulation introduced by environmental noise. For brevity, we denote 2πf_x x + 2πf_y y as φ_sym. By Fourier high-pass filtering, the interference term U(x, y) can be isolated:

U(x, y) = 2|U_0||U_1(x, y)| cos(φ_sym + φ(x, y) + φ_n).

Applying Euler's formula (j represents the imaginary unit),

U(x, y) = |U_0||U_1(x, y)| [e^{j(φ_sym + φ(x, y) + φ_n)} + e^{−j(φ_sym + φ(x, y) + φ_n)}].

In the spectrum, the interference term is therefore divided into two parts,

F{U(x, y)} = u(k_x, k_y)_{+1} + u(k_x, k_y)_{−1},

which are distributed symmetrically about the central fundamental-frequency signal and contain the same high-frequency information; here F is the Fourier operator and u(k_x, k_y) is the Fourier transform of U(x, y). Taking either term (u(k_x, k_y)_{+1} or u(k_x, k_y)_{−1}) as the higher-order information and returning it to the fundamental frequency eliminates the carrier term φ_sym introduced by the tilted reference beam. The phase and amplitude information of the sample are then obtained by inverse Fourier transformation of u(k_x, k_y)_{±1} back to the spatial domain:

F^{−1}{u(k_x, k_y)_{±1}} = |U_0||U_1(x, y)| e^{±j(φ(x, y) + φ_n)},

so the amplitude is given by the modulus and the recorded phase (taking the +1 term) is ϕ(x, y) = φ(x, y) + φ_n. The phase calculated in this way still contains contributions that are not caused by the sample. The phase caused by ambient noise, or by the scattering fields of possible impurities in the system, φ_n, can be obtained by performing the same procedure on an interferogram recorded without a sample. Finally, the real phase of the target sample is obtained as the difference between the phase values obtained with and without the sample, φ(x, y) = ϕ(x, y) − φ_n. As can be seen from Figure 1, this method gives the system good measurement sensitivity.
Sample Preparation
To demonstrate the phase-stabilization ability of the proposed method, we performed QPI of polystyrene spheres as well as of RBC spontaneous hemolysis. All cell samples were taken from three-month-old mice. After euthanasia (the mouse's body was held straight, lifted at about 30° and the cervical vertebra broken instantly, in accordance with IACUC guidelines), we extracted blood by removing the eyeballs of the mice. For comparison, the obtained blood was divided into two parts, one poured immediately into a centrifuge tube with anticoagulant and the other not; both were centrifuged for 5 min at 2000 RPM, and 2 μl of red blood cells was then placed on a slide for imaging experiments with our system. Animal-related experiments were approved by the Guangdong Medical Experimental Animal Center (Code: C202110-01).
RESULTS
Using quantitative phase imaging techniques for live-cell monitoring can better reflect additional cell information such as phase and optical thickness. We first verified the complex-field and phase measurement ability of our system using standard samples (polystyrene microspheres), as shown in Figure 2. Polystyrene microspheres are a common test target because of their simple and easily identifiable structure. The polystyrene microspheres we used are about 10 μm in diameter with a refractive index of 1.60. We took a hologram of the sample, as shown in Figure 2A, in Olympus oil medium with an approximate refractive index of 1.52.
Figure 2B,C show the overall amplitude and phase distributions obtained with the Fourier transform (FT) algorithm. The phase information reveals the spherical structure of the sample; moreover, we plotted the phase profile, as shown in Figure 2D, to verify the effectiveness of our imaging system (for the real-time dynamic process of the polystyrene microspheres, see Supplementary Figure S1). We then imaged living red blood cells in different media. The complex field of the blood cell without anticoagulant and its phase profile at 1 s are shown in Figure 3A-D, and Figure 3E-H shows the results for the red blood cell with anticoagulant. After obtaining an interferogram with high contrast (Figure 3A), the complex field of this sample was obtained using the algorithm introduced above, as shown in Figure 3B,C. To verify the accuracy of the retrieved phase, the phase values across the middle white line were plotted (Figure 3D), which showed the characteristic structure of RBCs. To observe the morphological changes of red blood cells during spontaneous hemolysis, the phase was measured for 4 consecutive hours; the results of the first 2 h are shown in Figure 4. On the one hand, it can be clearly observed that the structure of the non-anticoagulant RBC membrane underwent significant changes, as shown in Figure 4A-E, which is caused by the variation of the osmotic pressure between the inside and outside of the cell during spontaneous hemolysis [7]. Specifically, the RBC in Figure 4A has a complete structure, meaning it has two peaks, as can be seen from its phase curve shown in Figure 3D. An hour later, as shown in Figure 4F, the RBC has only one peak, which indicates both that its cell membrane morphology has changed and that it has died. From that point on, the morphology of the RBC remains essentially unchanged because the cell has died. On the other hand, the results in Figure 4G-K show that the anticoagulant RBC membrane did not change significantly and that this RBC retained two peaks throughout, as shown in Figure 4L. Comparing Figure 4A-E with Figure 4G-K, it can be seen that the addition of anticoagulant enabled the RBC to remain physiologically active for a long time, with no obvious variation of the RBC phase. The dynamic phase of the RBC was continuously acquired after 2 hours for approximately 120 min at 30 s intervals, as shown in Supplementary Figures S2, S3. To show the phase variation of the non-anticoagulant and anticoagulant RBCs over 2 hours qualitatively, we continually recorded the maximum phase of the cells at 10 min intervals and calculated its average value, as shown in Figure 5A and Figure 5B. The maximum phase of the non-anticoagulant RBC trended consistently in one direction, whereas that of the anticoagulant RBC changed little and leveled off over the same period. The standard deviation is commonly used to indicate the stability of data. Compared with the anticoagulant RBC, the standard deviation of the maximum phase of the non-anticoagulant RBC is larger, which means that the non-anticoagulant RBC changed sharply and that the addition of anticoagulant helps the cell morphology to persist longer.
CONCLUSION
In summary, we have measured the complex field of objects using off-axis interferometry. The dynamic phase and phase profile are used to describe the morphological changes of the sample. We observed the spontaneous hemolysis process of red blood cells in two different media over 2 hours (with and without anticoagulant).
For the non-anticoagulant RBC, the cell membrane changed significantly during spontaneous hemolysis. We also quantitatively described the spontaneous hemolysis of red blood cells in the two ambient fluids by using the standard deviation of the maximum phase. The advantages of our system are that good measurement stability is obtained by subtracting the background phase, that phase and profile information are combined and can be acquired in real time, and that a single acquisition takes only about 0.1 s. We believe that this work can be applied to the physiological detection of living cells.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The animal study was reviewed and approved by the Guangdong Medical Experimental Animal Center, and animal-related experiments were implemented according to the guidance of the Medical Department of Shenzhen University.
AUTHOR CONTRIBUTIONS
XL and GQ conceptualized the study. XL, GQ, and WY performed the analysis of data. XL and GQ performed the data collection and wrote the original draft. XL, GQ, WY, HL, RH, JQ, and LL reviewed and edited the manuscript. LL reviewed and supervised writing the manuscript.
v3-fos-license
2019-11-27T16:06:06.637Z
2019-11-27T00:00:00.000
208302043
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcplantbiol.biomedcentral.com/track/pdf/10.1186/s12870-019-2125-z", "pdf_hash": "6b132463f14247b8e3780194e3ebfc614f373877", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:312", "s2fieldsofstudy": [ "Biology" ], "sha1": "6b132463f14247b8e3780194e3ebfc614f373877", "year": 2019 }
pes2o/s2orc
PI signal transduction and ubiquitination respond to dehydration stress in the red seaweed Gloiopeltis furcata under successive tidal cycles Background Intermittent dehydration caused by tidal changes is one of the most important abiotic factors that intertidal seaweeds must cope with in order to retain normal growth and reproduction. However, the underlying molecular mechanisms for the adaptation of red seaweeds to repeated dehydration-rehydration cycles remain poorly understood. Results We chose the red seaweed Gloiopeltis furcata as a model and simulated natural tidal changes with two consecutive dehydration-rehydration cycles occurring over 24 h in order to gain insight into key molecular pathways and regulation of genes which are associated with dehydration tolerance. Transcription sequencing assembled 32,681 uni-genes (GC content = 55.32%), of which 12,813 were annotated. Weighted gene co-expression network analysis (WGCNA) divided all transcripts into 20 modules, with Coral2 identified as the key module anchoring dehydration-induced genes. Pathways enriched analysis indicated that the ubiquitin-mediated proteolysis pathway (UPP) and phosphatidylinositol (PI) signaling system were crucial for a successful response in G. furcata. Network-establishing and quantitative reverse transcription PCR (qRT-PCR) suggested that genes encoding ubiquitin-protein ligase E3 (E3–1), SUMO-activating enzyme sub-unit 2 (SAE2), calmodulin (CaM) and inositol-1,3,4-trisphosphate 5/6-kinase (ITPK) were the hub genes which responded positively to two successive dehydration treatments. Network-based interactions with hub genes indicated that transcription factor (e.g. TFIID), RNA modification (e.g. DEAH) and osmotic adjustment (e.g. MIP, ABC1, Bam1) were related to these two pathways. Conclusions RNA sequencing-based evidence from G. furcata enriched the informational database for intertidal red seaweeds which face periodic dehydration stress during the low tide period. This provided insights into an increased understanding of how ubiquitin-mediated proteolysis and the phosphatidylinositol signaling system help seaweeds responding to dehydration-rehydration cycles. Background As the boundary between land and sea, the intertidal zone has a unique ecological environment caused by the changing of tides and period of exposure due to the amplitude between spring and neap tides. Seaweeds growing in the intertidal zone are subject to sometimes extreme abiotic stresses during low tide, such as dehydration, strong solar irradiance and fluctuating temperature [1,2]. Among all of these stressors, dehydration of the algal tissues is one of the most important limitations determining the upper and lower vertical distributions of intertidal seaweeds [3,4]. Water loss can cause osmotic stress, mechanical damage to membrane systems and intra-cellular oxidative stresses which are induced by excessive reactive oxygen species (ROS) [2,[4][5][6]. In addition, dehydration can affect the main physiological and biochemical processes in seaweeds, including photosynthesis, protein synthesis and energy metabolism [4,5,7,8]. Therefore, it is of great significance to study both the adaptive and tolerance mechanisms of seaweeds when responding to water deficit stress caused by exposure. When facing desiccation stress, plant can usually activate rapid transduction of environmental stress signals and tolerance-related biochemical regulatory mechanisms [9]. 
Some studies on dehydration mechanisms in various seaweeds illustrated that increased antioxidant enzymes and components associated with detoxification were responsible for eliminating over-production of ROS [2,4,5,10]. Meanwhile, the accumulation of compatible solutes can regulate cellular osmotic pressure and protect seaweed tissues during the dehydration process [11][12][13]. Other studies have paid much attention to the effect of dehydration on photosynthetic activity and cyclic electron flow [4,8,14,15]. Nevertheless, some aspects relating to the actual mechanisms of dehydration tolerance are still unsolved. For example, the studies mentioned above did not uncover the associated upstream regulation and signal transduction systems. Recently, some studies illustrated that certain algae such as red seaweed Pyropia orbicularis and streptophyte green alga Klebsormidium share some common responding mechanisms with mosses or higher plants during water loss stress [8,16,17]. These findings inspired us to explore potential molecular pathways responsive for dehydration tolerance that have been previously overlooked in seaweeds. For example, the ubiquitin system is crucial for higher plants in their responses to drought stress [18,19], but still poorly studied in seaweeds. Gloiopeltis furcata (Rhodophyta) is a marine macroalga which occurs abundantly on the upper, rocky intertidal zone in the North Pacific coast [20,21]. It has an important economic value in food, textile and traditional medicine [22]. Extracts of G. furcata have a variety of proposed functions such as cancer prevention and blood anti-coagulation [23][24][25]. Ecologically, seaweeds inhabiting the upper interidal zone (exposed for the greatest duration) are believed to have considerable potential to tolerate dehydration [3,4]. In accordance with this expectation, G. furcata is indeed able to tolerate water loss for more than 72 h, despite nearly 80% of tissue water content being lost during low tide [26]. More specifically, even with a tissue water content as low as 6%, G. furcata could recover photosynthetic activity after complete submergence and rehydration [26]. Recently, G. furcata was reported to have a high O 2 -radical-scavenging activity [27]. These results indicated that G. furcata was an ideal model for studying dehydrationinduced acclimation strategies of red seaweeds in intertidal ecosystems. In this study, we chose G. furcata as research material and simulated two dehydration-rehydration cycles in the laboratory ( Fig. 1), with goals to find the key pathways and genes related to dehydration stress by using RNAsequencing and weighted gene co-expression network analysis (WGCNA). Considering that natural tidal changes are continuously cyclic, continuous periodic treatments can help us to better understand the responding during dehydration stress. The results provide valuable insights for understanding the molecular mechanisms associated with dehydration of intertidal seaweeds. Overlook of transcriptome assembly After assembly and filtration (see Materials and Methods), all clean reads were assembled into 32,681 uni-genes with an average length of 799 bp, and N50 was 1238 bp ( Table 1). The average GC content of the uni-genes was 55.32% (Table 1). Twenty-four cDNA libraries were generated by RNA sequencing, and the number of reads (after filtration, see Materials and Methods) per library ranged from 21.90 M to 34.58 M (Additional file 1: Table S1). 
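For readers unfamiliar with the assembly statistics quoted above, the sketch below shows how N50 and GC content are computed from a set of assembled sequences. The three sequences are toy placeholders, not uni-genes from this study.

```python
# How N50 and GC content are computed for an assembled transcriptome.
def n50(lengths):
    """Smallest length L such that contigs of length >= L cover at least half the assembly."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

def gc_content(sequences):
    """Percentage of G/C bases over all assembled bases."""
    gc = sum(s.upper().count("G") + s.upper().count("C") for s in sequences)
    total = sum(len(s) for s in sequences)
    return 100.0 * gc / total

unigenes = ["ATGCGC" * 200, "ATGC" * 150, "AATTGGCC" * 50]   # toy sequences only
print("N50 =", n50([len(s) for s in unigenes]), "bp")
print("GC  =", round(gc_content(unigenes), 2), "%")
```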
Statistical analyses of unigenes expression levels in each sample showed that most of uni-genes were in the range of 0.5-5 RPKM (reads per kilobase of exon region per million mapable reads) and 5-100 RPKM. Approximately, 61% of expressed uni-genes had RPKM values ≤5, and 39% had RPKM values ≥5 (Additional file 2: Fig. S1). BLAST results indicated that 12,813 (39.2%) uni-genes were annotated into at least one of the following databases: the Kyoto Encyclopedia of Genes and Genomes database (KEGG) and Clusters of Orthologous Groups of proteins (KOG), NCBI non-redundant protein sequences (Nr) and Swissprot, 4815 uni-genes were annotated into all of these databases (Additional file 3: Fig. S2). Fifteen uni-genes were selected for quantitative reverse transcription PCR (qRT-PCR) analysis in order to validate the quality of RNA-Seq data and 75% of which had a correlation coefficient ≥ 0.8, suggesting a strong consistency between quantitative results and transcriptome data (Additional file 4: Table S2). Such a correlation between RNA-Seq and qRT-PCR confirmed the high reliability of transcriptomic profiling data. Modules associated with dehydration tolerance by WGCNA According to the filter standard (see Materials and Methods), 4876 uni-genes were removed. A topological overlap matrix (TOM) was generated using a set of 27, 805 uni-genes for WGCNA (Additional file 5: Fig. S3). After a dynamic tree cut and merging, twenty modules were identified and designated by different colors, with gene numbers ranging from 84 (Deeppink) to 5513 (Coral2) ( Fig. 2a and b). Module-trait relationships and eigengenes expressions of each module indicated that Coral2 was characterized by two clear fluctuations, in accordance with the two cyclic treatments of dehydration ( Fig. 2b and c). Specifically, the expression patterns of genes grouped into Coral2 were up-regulated during two dehydration treatments, and down-regulated during rehydration ( Fig. 2b and c). Although the gene expression at SD4.5 did not show significant increase, all other time points were in line with expectations (Fig. 2c). The Coral2 module was thus chosen for the following analysis. Pathway enrichment analysis of Coral2 provided insight into phosphatidylinositol signal transduction and ubiquitin mediated proteolysis 5513 uni-genes were grouped into the Coral2 module, and the functional pathways were characterized in Coral2 using GO analysis and KEGG enrichment study. A large number of uni-genes in Coral2 were involved in genetic and environmental information processing and metabolism. Statistics on GO terms of uni-genes in Coral2 showed that many uni-genes involved in cellular process and metabolic process, and the molecular function of binding and catalytic activity had the largest number of uni-genes (Additional file 6: Fig. S4). KEGG pathway enrichment analysis revealed that Coral2 was enriched (P < 0.05) in the ubiquitin mediated proteolysis pathway (UPP), protein export, phosphatidylinositol signaling system (PI signal system), and both starch and 0h 12h 6h 24h 18h First Dehydration Second Rehydration Second Dehydration First Rehydration sucrose metabolism (Fig. 2c, Table 2, Additional file 7: Table S3). Many uni-genes belonging to the UPP and PI signals had a ≥ 1.5 fold change of RPKM value during drought stress, with very low expressions during rehydration (Table 2). 
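The KEGG enrichment reported for Coral2 can be illustrated with a short over-representation test. The paper does not state which statistical test its enrichment analysis used, so the hypergeometric test below is an assumption (a common default for pathway over-representation); the choice of the 12,813 annotated uni-genes as the background is likewise an assumption, and the two pathway counts are hypothetical placeholders.

```python
# Hypothetical over-representation test for one KEGG pathway within one module.
from scipy.stats import hypergeom

annotated_total = 12813   # background: uni-genes annotated in at least one database
module_size     = 5513    # uni-genes assigned to the Coral2 module
pathway_total   = 60      # background genes mapped to the pathway (hypothetical)
module_in_path  = 45      # Coral2 genes mapped to the pathway (hypothetical)

# P(X >= module_in_path) when drawing module_size genes from the background.
p_value = hypergeom.sf(module_in_path - 1, annotated_total, pathway_total, module_size)
print(f"enrichment p-value = {p_value:.3g}; significant at 0.05: {p_value < 0.05}")
```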
We subsequently constructed coexpression networks to look for hub genes in these two pathways, considering that the function of UPP in dehydration response may have been neglected in seaweeds and the PI signal pathway still has not been studied clearly. Identification of key genes related to UPP A UPP network was built by UPP-related genes and their co-expression genes in Coral2. The network contained 857 nodes and 1783 edges ( Fig. 3a and Additional file 8: Table S4). Eight ubiquitin-related genes exhibiting high connectivity (> 100) were indicated in red, and four with interactions ≥150 were considered as large hubs (Fig. 3a). These large hubs were annotated as one unnamed protein product (Gf19264/UN1) and three E3 ubiquitin ligase (Gf27969/E3-1, Gf16090/E3-2, Table S4). Four other mid-size hubs (100 < degree < 150) were Gf19263 (SUMO-activating enzyme sub-unit 2/SAE2), Gf19266 (unnamed protein product/UN2), Gf08625 (elogin-binding protein-like protein/EBP) and Gf18542 (RING-box protein 1/E3-RBX) (Fig. 3a). Amongst these mid-size hubs, SAE2 was related to the small ubiquitinrelated modifier (SUMO) conjunction. E3-RBX was also one kind of E3 with a RING-box domain. Thus, four of the eight hubs in the UPP network encoded for different E3 ligases and interacted with 589 nodes (Fig. 3a). Nodes with at least four neighbors within one distance were extracted, in order to investigate if there was a common up-stream regulator or downstream substrate for the hub genes in UPP. In addition to hub genes, 128 nodes were selected and colored yellow (Fig. 3a). We focused on nine genes according to their descriptions, and extracted them with the hub genes to built a fine-scale sub-network. (Fig. 3b and Additional file 8: Table S4). We found seven genes involved in genetic information processing, two genes encoded for sub-units 2 and 6 of the transcription factor TFIID (Gf03506/TFIID2, Gf26104/TFIID6); one gene encoded the alpha sub-unit of transcription initiation factor IIE (Gf17133/TFIIE by KO ID) and one gene encoded for the zinc-finger domain containing protein (Gf31441/ZFP) ( Fig. 3b and Table 3); three othergenes were involved in RNA processing, such as Gf06149 (exosome component 4-like The expression level of the genes were shown as fold changes of RPKM comparing to CG. Any fold change higher than 1.5 in four dehydration treatments (FD 4.5, FD6.0, SD4.5, SD6.0) considered as significant increases and used bold type /Rrp41-like), Gf30426 (nuclear cap-binding protein subunit 2/NCBP2), Gf32444 (DEAH-box RNA helicase/ DEAH) ( Fig. 3b and Table 3). Moreover, Gf24243 (kin (ABC1)/ABC1) and Gf12536 (Beta amylase/Bam1) relating to osmotic regulation also interacted with hub genes in UPP ( Fig. 3b and Table 3). Identification of key genes related to PI signal transduction The Coral2 module-based co-expression network for PI signal transduction indicated that the PI signal network contained 1029 nodes and 1433 edges ( Fig. 4a and Additional file 9: Table S5). Four large hubs such as Gf07443 3. The Coral2 module-based ubiquitin-mediated proteolysis pathway (UPP) networks. a: The network for UPP in Coral2. This network was constructed by extracting Coral2 genes which annotated to UPP as seed nodes, with an edge weight cu-off of 0.45. Hub genes present in the network were coded red and those nodes with at least four neighbours within one distance were coded yellow. The node size represented the level of connectivity. b: The sub-network for hub genes in UPP and candidate hub-interacted genes. 
The sub-network was built by extracting the hub genes and the selected candidate co-expression genes connected to more than one hub genes within the UPP network (calmodulin/CaM), Gf18777 (inositol-1,3,4-trisphosphate 5/6-kinase/ITPK), Gf08993 (3′(2′), 5′-bisphosphate nucleotidase/SAL) and Gf14233 (phospholipase C/PLC by KO ID) were identified in the PI network (Fig. 4a). One mid-size hub was Gf03841(inositol hexakisphosphate (IP 6 ) and diphosphoinositol-pentakisphosphate (PP-IP 5 or IP 7 ) kinase/IHDPK) (Fig. 4a). Among these hubs, CaM, served as a calcium receptor protein, which had the highest connectivity (Fig. 4a). ITPK and SAL not only participated in the phosphatidylinositol signaling system but also in inositol phosphate metabolism (Additional file 9: Table S5). Ninety-three nodes (except hub genes) with at least three neighbors within one distance were colored yellow in order to find the potentially common interacted genes with hubs (Fig. 4a). Six of them were chosen to build the sub-network (Fig. 4b and Additional file 9: Table S5). Surprisingly, five of these also had co-expression with hub genes in the UPP network, such as Rrp41-like, DEAH, ABC1, Bam1 and TFIID2 (Figs. 3b and 4b and Table 3). Bam1 was also considered to play a key role in the pathway of starch and sucrose metabolism which was enriched in Coral2 (Additional file 7: Table S3). Gf25392 (major intrinsic protein /MIP), one kind of aquaporin (AQP) that regulates cellular water balance under desiccation conditions, was also found to have some interactions with the PI network hub genes (Fig. 4b). Expression patterns of four candidate drought-related hub genes Combining the fold changes of the RPKM value, the result of co-expression networks and the annotations of genes, we chose E3-1, SAE2, CaM, ITPK as candidate dehydration-responsive genes, and identified their transcriptional expression with qRT-PCR. As shown in Fig. 5, expression patterns of four hub genes demonstrated their positive responses to dehydration stress. CaM, E3-1 and SAE2 genes all showed increases during first dehydration and was down-regulated when submerged (Fig. 5). The expression patterns of the second simulated low tide cycle were not exactly the same as the first. Fig. 4. The Coral2 module-based phosphatidylinositol (PI) signaling system networks. a: The network for PI signal system. This network was constructed by extracting those Coral2 genes which annotated to the PI signal pathway as seed nodes, with an edge weight cut-off of 0.4. The hub genes present in the network were coded red and the nodes with at least three neighbours within one distance were coded yellow. The node size represents the level of connectivity. b: The sub-netwok for hub genes in PI signal and candidate hub-interacted genes. The subnetwork was built by extracting hub genes in the UPP and the selected candidate co-expression genes connected to more than one hub genes Specifically, the expression levels of CaM, E3-1 and SAE2 genes increased at FD4.5 during the first cycle, but began to rise at SD6.0 during the second cycle, and the fold changes at SD6.0 were lower than at FD4.5 or FD6.0 (Fig. 5). These three genes were all downregulated during first rehydration, especially SAE2 and CaM almost reduced to zero at FD4.5 and FD6.0. However, they did not show downtrends at SR6.0. ITPK was up-regulated at four time points during dehydration treatment, peaked at FD4.5 and SD4.5, and reduced to almost normal level after rehydration (Fig. 5). 
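The hub definition applied to these networks (mid-size hubs with 100-150 connections, large hubs above 150) is straightforward to express programmatically. The sketch below assumes the co-expression network is available as a weighted edge list; the four edges are toy data (far too few to produce real hubs), and only the 0.45 edge-weight cut-off is taken from the UPP network description above.

```python
# Classify nodes of a weighted co-expression network into hub categories.
import networkx as nx

edges = [                                   # toy edges, not the study's edge list
    ("Gf27969", "Gf03506", 0.52),
    ("Gf27969", "Gf26104", 0.48),
    ("Gf19263", "Gf32444", 0.46),
    ("Gf19263", "Gf24243", 0.41),
]

EDGE_WEIGHT_CUTOFF = 0.45                   # cut-off reported for the UPP network
graph = nx.Graph()
graph.add_weighted_edges_from((a, b, w) for a, b, w in edges if w >= EDGE_WEIGHT_CUTOFF)

def classify_hubs(g, mid=100, large=150):
    """Split nodes into mid-size hubs (mid < degree <= large) and large hubs (degree > large)."""
    degrees = dict(g.degree())
    return ({n: d for n, d in degrees.items() if mid < d <= large},
            {n: d for n, d in degrees.items() if d > large})

mid_hubs, large_hubs = classify_hubs(graph)
print("mid-size hubs:", mid_hubs)           # empty for this toy graph
print("large hubs:  ", large_hubs)
```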
Discussion
Post-translational modifications mediated the dehydration response in G. furcata; E3 and SAE2 were key genes induced during dehydration
As an important regulatory mechanism of post-translational modification, ubiquitination has been widely recognized as a mechanism explaining how higher plants respond to drought stress [18,19,28,29]. In this study, UPP was significantly enriched in the dehydration-related Coral2 module; the transcript levels of many genes in UPP were up-regulated in at least one dehydration treatment group and significantly down-regulated during rehydration (Fig. 2d and Table 2). We therefore suggest that UPP may have played an important role in the dehydration response of G. furcata. Growing evidence indicates that UPP has a functional role in tolerating abiotic stress in the seaweeds studied thus far, although most of these studies have focused on heat-shock stress [30-33]. A study of the red alga Gracilaria lemaneiformis demonstrated that the bioactivity of UPP was directly related to its ability to withstand heat stress [33]. Nevertheless, research linking UPP to dehydration in macroalgae is still very scarce. One study of the dehydration-tolerant seaweed Fucus vesiculosus reported the over-expression of two ubiquitin-ribosomal protein fusion genes, suggesting that the protein targeting and degradation pathway via the 26S proteasome was up-regulated during dehydration [34]. Another study of the high-intertidal red seaweed P. orbicularis showed a significant increase in the transcript level of ubiquitin during water loss and down-regulation during rehydration [5]. The results presented here for the intertidal red alga G. furcata further support the view that, as in higher plants, the ubiquitin system is also a crucial mechanism for exposed seaweeds to adapt to periodic dehydration stress during low tides. Ubiquitin-activating enzymes (E1), ubiquitin-conjugating enzymes (E2) and ubiquitin ligases (E3), acting in concert, are required for UPP [35,36]. In the dehydration-related module Coral2, six uni-genes encoded different E3, four of which were hubs in the UPP network (Table 2 and Fig. 3a). This indicates that diverse kinds of E3 exist in G. furcata and likely play vital functions in UPP. In fact, E3 are the most diverse enzymes in the ubiquitin-proteasome system and have been divided into many different types based on sub-unit composition (e.g. HECT type, F-box type, RING type, U-box type) [36]. E3 determine the specific selection and recognition of substrate proteins in UPP [35,37]; the fact that E3 interact with different targets makes them important hubs in the UPP network. In G. furcata, we found that one candidate dehydration-induced E3 (E3-1) responded significantly to water loss, because its expression increased markedly at FD4.5, FD6.0 and SD6.0 (Fig. 5). We propose that E3 responded positively to water deficit and resisted the negative influences caused by dehydration during exposure. In fact, many types of E3 have been identified in higher plants that participate in tolerating drought stress by reducing oxidative stress and regulating downstream genes (e.g. drought-related transcription factors) [38-41]. Our study is the first report of an E3 expression profile significantly associated with a dehydration response in macroalgae.
Similarly, differential expression analysis in the red intertidal macroalga Pyropia haitanensis showed that long-term exposure to high temperature, another major abiotic stress for intertidal seaweeds, also induced the expression of E3 [42]. In addition to E3, another hub gene, SAE2, related to SUMOylation, also showed high connectivity with other genes in the UPP network, with an almost 7-10-fold increase at different time points of the dehydration treatment (Table 2 and Fig. 5). Such a pattern enabled us to argue that SAE2 also played an important role in dealing with the stress of water loss. SAE2 is an essential large sub-unit of the SUMO-activating enzyme (SAE) in SUMOylation [43]. SUMOylation is another important form of post-translational modification, similar to UPP, which also plays crucial roles in responses to abiotic stress, particularly plant drought stress [44-46]. Some studies suggest that SAE may act as a limiting regulatory step during SUMO conjugation [43,47]. It was found that when SUMO conjugation was impaired by expression of the SAE2 UFDCt domain, plants became more sensitive to drought [48].
The PI signal pathway, connected with the Ca2+/CaM pathway, responded to dehydration; the role of CaM and ITPK in signal transduction
When land plants are faced with abiotic stresses, timely signal transmission can activate response mechanisms which enable them to respond and resist [49,50]. The PI signal system, including a series of kinases and phosphatases, is involved in the perception and transduction of external stimuli [49,51]. In this study, the PI signal pathway was significantly enriched in the dehydration-related module, supporting its crucial function during water loss in G. furcata (Fig. 2d). The PI signal pathway also interacts with the Ca2+/CaM signal pathway [49,51,52]. CaM is an essential part of the Ca2+/CaM signal [53]; it showed the highest connectivity in the PI signal network, and its expression was affected by water loss during both periodic dehydration treatments (Figs. 4a and 5). These results suggest that, in intertidal seaweeds, the PI signal pathway most likely interacts with Ca2+/CaM signals to construct a complex regulatory network for transducing external stress signals and activating downstream pathways in response to dehydration. CaM is a ubiquitous Ca2+-binding protein involved in many biochemical reactions, and can be activated by Ca2+ release in response to multiple environmental stimuli [53]. The intertidal red seaweed Porphyra yezoensis also increased the expression level of the CaM gene when tissue water loss reached 20%, and the expression peaked at 40% water loss [54]. Likewise, CaM showed an increased transcription level in P. orbicularis during dehydration and returned to normal levels when rehydrated [5]. This environment-induced change of CaM has also been reported under copper and temperature stress in other seaweeds [55,56]. Although the PI signal pathway is associated with the Ca2+/CaM signal pathway in stress-induced signal networks, how they interact with each other is still largely unknown. One connection between these two pathways is the secondary messenger inositol 1,4,5-trisphosphate [Ins(1,4,5)P3], generated by PLC hydrolysis [49,51,57,58]. Ins(1,4,5)P3 can release Ca2+ from the endoplasmic reticulum (ER), changing the Ca2+ concentration and regulating downstream mechanisms through activated CaM [49,51,57].
In turn, the activity of PLC can be stimulated by a higher Ca2+ concentration [58,59]. Our results from G. furcata, however, do not seem to support this view. Although one hub gene (Gf14233) in the PI network was annotated as PLC by KO ID, the expression level of PLC was not affected significantly by water loss (Fig. 4a, Table 2). In fact, genes encoding the Ins(1,4,5)P3 receptor have not been found in plants [60]. More importantly, the pleckstrin homology (PH) domain and the EF hand, the important structural domains for membrane binding and Ca2+-dependent activation of PLC, have been reported to be absent in red seaweeds [60]. The candidate hub-related genes found in our study were mostly elevated by more than 1.5-fold compared to CG at FD4.5 and FD6.0, and they exhibited clear up-regulation at SD6.0 compared to FR6.0 (Additional file 10: Fig. S5). Some of them (i.e. DEAH, TFIID, ABC1, Bam1 and MIP) have been reported to be linked to drought or other abiotic stresses. For example, DEAH was reported to be involved in dealing with salt and Cd stress [74,75]. ABC1-transgenic Arabidopsis thaliana showed an enhanced osmotic regulation ability [76], and ABC1 is also involved in the response to oxidative stress [77,78]. TFIID has been proposed as a candidate gene for drought response and heat stress in higher plants [79-81], and Bam1 has been reported to participate in drought tolerance by degrading transitory starch to sustain proline [82,83]. In addition, MIP in the PI signal network encodes an aquaporin (AQP) (Fig. 4a), which can maintain cellular water balance under drought conditions and can be regulated by calcium signalling [84]. These reports, along with the results from G. furcata, indicate that the ubiquitin mechanism and the signal transduction system, including the PI and Ca2+ signals, may influence these drought-related regulators (e.g. TFIID) and osmotic defense genes (e.g. Bam1, MIP) in order to tolerate dehydration.
Conclusions
The PI signal pathway, connected with the Ca2+/CaM signal, is required for G. furcata to transduce the external dehydration signal, whilst the ubiquitin-related pathway provides the post-translational modifications required to cope with dehydration stress in this intertidal red alga. These two pathways serve as regulatory mechanisms which may interact with dehydration-related transcription factor, RNA modification and osmotic regulation genes, thereby enabling this seaweed to tolerate considerable water loss at low tide. In these two pathways, E3, SAE2, CaM and ITPK possibly occupy key positions and were induced significantly by dehydration. Further research should address the complex functions of the ubiquitin mechanism in more seaweed species and provide greater insight into the specific mechanisms of interaction between the PI and Ca2+/CaM signals.
Plant materials and cultivation conditions
Specimens of the intertidal red alga G. furcata were collected in April from the upper, rocky intertidal zone in Yantai, Shandong, China (37°27′45.23″N, 121°26′34.28″E). We plucked the samples while G. furcata was exposed to the air during low tide, and all samples were transported to the laboratory on ice. After cleaning the surface with filtered seawater, we selected young and healthy thalli of uniform size and thickness (3-4-month-old thalli, about 3 cm long and 1.5 cm wide) for the experiments. Before the formal treatment, according to a previous study in G.
furcata [26], our samples were pre-cultured at 11°C with 50 μmol photons m − 2 s − 1 irradiation provided by cool-white fluorescent lamps in a 12:12 (light:dark) cycle. Experimental design and sampling Considering the semi-diurnal tide in Yantai [85], and the physiological data reported in previous studies [86], we designed two continuous dehydration-rehydration cycles within 24 h to simulate the natural tidal cycle (Fig. 1). We dried the surface moisture of samples with filter papers and fixed the bottom of the thalli with wooden frame, then suspended individually in a ventilated culture box (GZP-250 N, Shanghai Senxin Co., Ltd., China) for 6 h as the first dehydration treatment (FD). Samples were taken out at 4.5 h and 6 h respectively during FD and immediately frozen in liquid nitrogen and stored at − 80°C. After FD, the first rehydration treatment (FR) was continued, samples were submerged in the filtered sea water that previously placed at the culture box, and samples were taken out at 5.5 h and 6 h successively during FR. During the formal experiment, the temperature was controlled at 11°C and the relative humidity was 77%. The treatments during the second dehydration (SD) and second rehydration (SR) were same as FD and FR mentioned above, and samples were taken out at 4.5 and 6 h during SD and 6 h during SR. The G. furcata were sampled at eight different time points during two dehydration-rehydration cycles, for convenience, the eight time points were abbreviated as CG (control group), FD4.5, FD6.0, FR5.5, FR6.0, SD4.5, SD6.0, SR6.0. At each time point we prepared more than three biological replications. RNA extraction, sequencing and transcriptome assembly Total RNA was extracted from the thalli using the Plant RNA Kit 150 (Omega, USA) according to the manufacturer's instructions. RNA degradation and contamination were monitored on 1% agarose gels and Nano-Photometer spectrophotometer (DeNovix, USA). As described above, we designed eight time points for RNA sequencing, and each had three biological replications. A total of 24 Seq-libraries were sequenced on Illumina HiSeq2000 instrument (Guangzhou Gene Denovo Biotechnology Co., Ltd., China). The raw reads were filtered by removing reads with adapters, reads with a ratio of N (the percentage of nucleotides in the reads that could not be sequenced) > 10% and low-quality reads (those reads containing over 40% bases with Q value < 20%), reads that mapped toribosome (rRNA) were also removed. For normalization of the data, gene expression levels were measured by the number of uniquely mapped reads per kilobase of exon region per million mapable reads (RPKMs). WGCNA analysis and network construction For WGCNA (co-expression network analysis), unigenes obtained from RNA-sequencing were filtered for a second time. Genes with RPKM < 1 in all samples or the coefficient of variation [(SD/Mean)*100%] < 0.1 were removed. All the remaining uni-genes were used for WGCNA with R-package [87]. According to the correlations between genes, the co-expression adjacency matrix was formed and converted to a topological overlap matrix (TOM). Co-expression modules were generated by hierarchical clustering and a dynamic tree cut, the minimum module size was set as 50 and modules with a tree height < 0.3 were merged together. The expression patterns of 20 modules were displayed as the eigenvalues (equivalent to the weighted synthesis values of all genes in each module and can reflect the comprehensive expression level for the module) [88]. 
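The gene filtering and the adjacency-to-TOM conversion described above were carried out with the R WGCNA package; the NumPy sketch below only mirrors those steps in simplified form. The expression matrix is a random placeholder and the soft-threshold power beta is an assumed value, since neither the raw matrix nor the chosen power is given here.

```python
# Simplified illustration of the WGCNA-style preprocessing described above:
# 1) drop genes with RPKM < 1 in all samples or coefficient of variation < 0.1,
# 2) build a correlation-based adjacency matrix, 3) convert it to a topological
# overlap matrix (TOM). The expression matrix and soft power are placeholders.
import numpy as np

rng = np.random.default_rng(0)
rpkm = rng.gamma(shape=2.0, scale=3.0, size=(200, 24))   # toy matrix: 200 genes x 24 libraries
beta = 6                                                 # assumed soft-threshold power

# --- filtering ---
expressed = (rpkm >= 1).any(axis=1)                      # RPKM >= 1 in at least one sample
cv = rpkm.std(axis=1, ddof=1) / rpkm.mean(axis=1)        # coefficient of variation (SD/mean)
expr = rpkm[expressed & (cv >= 0.1)]

# --- unsigned adjacency and topological overlap ---
corr = np.corrcoef(expr)                                 # gene-gene Pearson correlation
adj = np.abs(corr) ** beta                               # unsigned adjacency
np.fill_diagonal(adj, 0.0)
k = adj.sum(axis=1)                                      # connectivity of each gene
shared = adj @ adj                                       # shared-neighbour term
tom = (shared + adj) / (np.minimum.outer(k, k) + 1.0 - adj)
np.fill_diagonal(tom, 1.0)

print(f"{expr.shape[0]} of {rpkm.shape[0]} toy genes kept; TOM shape: {tom.shape}")
# Module detection would follow: hierarchical clustering of (1 - TOM) with a dynamic tree cut.
```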
The Coral2 module-based networks for the UPP and PI signal pathways were constructed using genes annotated to these pathways as nodes to extract the co-expressed gene pairs. The resulting networks, with an edge weight cut-off of 0.45 (for the PI signal network) or 0.4 (for the UPP network), were visualized with Cytoscape [89]. Hub genes are those showing the most connections in the network and often play important roles. In our study, genes with degree values between 100 and 150 were considered mid-size hubs, and genes with degree values > 150 were considered large hubs.
Gene annotation and pathway enrichment analysis
Annotation of unigenes and pathway assignment were performed by BLAST searches against public databases, including the Kyoto Encyclopedia of Genes and Genomes database (KEGG, http://www.genome.jp/kegg), Clusters of Orthologous Groups of proteins (KOG, http://www.ncbi.nlm.nih.gov/COG/KOG), NCBI non-redundant protein sequences (Nr, http://www.ncbi.nlm.nih.gov) and Swissprot (http://www.expasy.ch/sprot). In addition, GO analysis and KEGG enrichment tests were used to detect potential dehydration-responsive functions among co-clustered genes in the dehydration-related module. Pathways with P value < 0.05 were considered significantly enriched.
Gene expression analysis by qRT-PCR
The RNA used for qRT-PCR was the same as that used for the transcriptome sequencing described above. For first-strand cDNA synthesis, the PrimeScript™ RT reagent Kit with gDNA Eraser (Takara, Kyoto, Japan) was used following the manufacturer's instructions. The expression levels of selected genes were verified by qRT-PCR with the following cycling conditions: 95°C for 30 s, followed by 40 cycles of 95°C for 10 s, 55°C for 10 s and 72°C for 20 s. The qRT-PCR was conducted using SYBR® Premix Ex Taq™ II (Tli RNaseH Plus, Takara, Kyoto, Japan). The reactions were performed in 20 μL volumes containing 10 μL of 2 × SYBR® Premix Ex Taq, 0.8 μL of each primer (10 μM), 3.0 μL of the diluted cDNA mix, and 6.4 μL of RNase-free water. A melting curve for each amplicon was then analyzed to verify the specificity of each amplification reaction. No-template controls were included for each primer pair, and each PCR reaction was carried out with three biological replicates. Elongation factor 2 (EF2) was used as the internal control [90,91], and the sequence of EF2 was obtained from NCBI (GenBank: EF033553.1). The 2^−ΔΔCt method was used to calculate relative gene expression values. The sequences of the primers used are given in Additional file 11: Table S6; the primer concentrations were 10 μM.
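As an illustration of the 2^−ΔΔCt calculation used for the qRT-PCR validation, the sketch below computes one relative expression value with EF2 as the reference gene; the Ct values are hypothetical and only demonstrate the arithmetic.

```python
import numpy as np

def ddct_fold_change(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ΔΔCt method.

    ΔCt  = Ct(target) - Ct(reference gene, e.g. EF2)
    ΔΔCt = ΔCt(treated sample) - ΔCt(control sample)
    fold = 2 ** (-ΔΔCt)
    """
    dct_treat = ct_target_treat - ct_ref_treat
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** (-(dct_treat - dct_ctrl))

# Hypothetical triplicate Ct values for one gene at FD6.0 versus the control group (CG)
fold = ddct_fold_change(
    ct_target_treat=np.mean([22.1, 22.3, 22.0]),  # gene of interest, dehydrated
    ct_ref_treat=np.mean([18.5, 18.6, 18.4]),     # EF2, dehydrated
    ct_target_ctrl=np.mean([24.0, 23.8, 24.1]),   # gene of interest, CG
    ct_ref_ctrl=np.mean([18.5, 18.4, 18.6]),      # EF2, CG
)
print(f"fold change vs CG: {fold:.2f}")
```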
v3-fos-license
2021-03-13T08:34:26.934Z
2021-01-08T00:00:00.000
232216158
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.auctoresonline.org/uploads/articles/1625315811pdf.pdf", "pdf_hash": "fda4f47e4c04a4c036450c747e6f1fe9e6bc8838", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:314", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "sha1": "8fb275ead5630ebe31dbbfa5f7e081c2d57c8cb7", "year": 2021 }
pes2o/s2orc
Laser Therapy in the Complex Prevention and Treatment of Covid-19 (Preliminary Results) This article presents preliminary results of the treatment of 51 patients with COVID-19 (Moscow Region, Russia). These patients were subjected to various schemes of immune stimulation for the prevention and treatment of this disease. Percutaneous laser therapy (PLT), intravenous laser blood irradiation (ILBI), drug stimulation, and their combinations were compared. The results showed: 1. In the treatment of COVID-19, the use of various types of immunomodulation and anticoagulants proved to be most effective. 2. The combination of ILBI and TLT with immunomodulators proved to be the most effective in the prevention and treatment of COVID-19. 3. Immediate use of immunomodulators at the very beginning of COVID-19 reduces the severity of the disease and facilitates its course. Background and Aims: We started to use laser therapy in 1988. When it was used in different categories of cancer patients, it was found that various types of laser radiation stimulate the immune system. We started to use this peculiarity of laser therapy to boost the immune status of sickly patients with weakened immune systems, as well as for the prevention and treatment of respiratory viral infections (e.g. influenza, parainfluenza, acute respiratory infections). We performed various types of immunostimulation in 51 patients from Russia and evaluated its influence both on the morbidity and on the course of COVID-19. Rationale: Laser radiation (890-910 nm) stimulates cellular immunity, increasing the number of active T-lymphocytes. The wavelength of 630-640 nm is the most effective for irradiating both the blood and the vascular walls. At this wavelength photons are absorbed by oxygen, microcirculation improves, blood viscosity decreases, and the direct impact on the nerve and muscle elements of the vascular wall influences the activity of the vascular and nervous systems. Conclusion: The laser therapy practice we have been exercising for over 30 years has shown that it produces good immunostimulating effects. The use of various laser therapy methods combined with immunomodulatory drugs allows a reduction in the number of patients infected with COVID-19 and reduces the severity of the disease. Introduction: In September 2019 we noticed that an acute respiratory infection had emerged with a clinical course different from that of regular influenza and the usual acute respiratory viral infections. The main manifestation of this infection was prolonged and severe pneumonia, which did not respond well to standard therapy. Computed tomography revealed interstitial edema of the tissue, which is an attribute of viral pneumonias. This was first noted in children (within September-October 2019), then in adults (within October-December 2019). Initially we treated this phenomenon as a new form of parainfluenza. Discussions with pediatricians and general practitioners confirmed our assumptions. To date, there are no clear criteria for COVID-19 treatment, while the treatment methods used before appeared to be inefficient. We therefore chose to exercise those methods of laser therapy that could be efficient in the prevention and treatment of COVID-19. ILBI has a direct impact on all blood components and the vascular wall. The most significant among these effects are improved microcirculation, improved rheological blood properties, and reduced blood clotting. For the studies of this method I received an award (Ming Chien Kao Awards 2015) [1,2,3].
For immunostimulation we used the PLT schemes, which we had used for the treatment of cancer patients, respiratory viral infections [4,5]. The obtained results showed that the use of combined Immunostimulation (laser + medication) could reduce the number of infected patients, as well as reduce the disease severity. Methods of Immunostimulation: Stage I: ILBI (November-December, 2019). ILBI was performed according to the standard method (10 sessions) (A puncture of the ulnar vein with one-time sterile catheter was performed. The catheter consisted of a thin needle with a monofilament through which intravascular irradiation came out (produced by Polironic). After each session of irradiation, the catheter was rejected. Stage II: Percutaneous laser therapy (Percutaneus laser therapy, hereinafter "PLT") (mid-late January, 2020). We performed percutaneous laser stimulation using the standard method during 5 days. This technique is used for Immunostimulation in cancer patients [5,6]. Stage III: After 2 weeks, immunomodulatory drugs (Tirolone, Levamisole) were introduced according to the application schemes. Patients of group 4: With the first signs of the disease these patients received Tilorone according to the application scheme (from 3 to 5 tablets subject to the disease severity). Vitamin C (ascorbic acid) 200 mg 3 times a day. In case the treatment appeared to be ineffective, the patients called an ambulance and were hospitalized when necessary. Results All patients were divided into the following groups: Group 1: 8 patients (ILBI + PLT + immunomodulatory drugs). Group 4: 8 patients (immunomodulatory drugs according to application the scheme + vitamin C). All had the signs of acute respiratory disease. The patients were distributed by their age as follows ( Table 1). The patients were distributed by their gender as follows ( Table 2). We divided all the patients who had fallen ill according to the severity of the disease: Mild degree: fever up to 38-39˚ C, dry cough, weakness, sweating, difficulty breathing (from 1 to 3 days). Cough from 5 to 7 days. The patients had no need to be hospitalized. A 17-year-old male patient fell ill abruptly. The temperature rose to 39,2˚ C. The patient developed a severe dry cough, chills, weakness, sweating. Immediately, Amixin was taken according to the application scheme, and vitamin C. In order to prevent secondary infections, Biseptol was taken according to the application scheme. On the next day the patient's state improved: the temperature dropped to 37,8˚ C. The weakness and the sweating lessened by day 5. The dry cough remained for 2 weeks, though. The 2nd course of PLT was performed, Amixin was re-taken according to the application scheme, as well as vitamin C. Gradually the cough lessened. The cough finally disappeared only 4 weeks after the onset of the disease. The treatment was performed at home in self-isolation. Contact persons were not identified. The results obtained for morbidity are presented in Table 3. 1* -(17-year-old male). The severity of the disease was assessed by the duration of dry cough and weakness which lasted for 4 weeks. The patient was not hospitalized. 1** -(54-year-old male). The patient got infected at work from a COVID-19 colleague. The Patient was tested for COVID-19, however the test came back negative. After 3 days the temperature increased to 39,2˚ C, chills and weakness developed. There was no cough. On the 4th day in the morning the temperature dropped to 36,0˚ C. 
The patient developed a tachycardia and a slight shortness of breath. Taking into account concomitant diseases (hypertension, obesity of the 2 nd degree, coronary heart disease, vascular atherosclerosis), the patient was hospitalized. The diagnosis of COVID-19 was confirmed. The diagnosis is COVID-19, mild degree. Starting February, 2020 the patients were tested for COVID-19. The test results are presented in Table 4. The patients are herewith distributed by the number and the degrees of COVID-19 severity as follows (Table 5): We did not take into account the number of hospitalized patients (i.e., 1 patient). This patient was hospitalized not due to the severe grade of the COVID-19 disease, but due to his concomitant diseases. Even more so, the diagnosis made in the hospital was a mild grade of COVID-19. Discussion For over 30 years we have been using laser energy of various wavelengths to prevent and treat various diseases [6,7,8,9,10]. Near infrared radiation range (890-910 nm) can penetrate deeply into tissues, and have a local effect on the work of various tissues and organs. This allows you to influence the pathological focus and the work of various body systems. The use of this wavelength allows stimulating cellular immunity for up to 3 months as an average. Using ILBI (630-640 nm) allows to directly influence the parameters of all blood cells, blood plasma, the process of clotting and the work of nerve and muscle elements of the vascular wall. The best results with the combined use of ILBI (Group 1) can be explained by a direct impact on blood cells, improvement of microcirculation and rheological properties of blood by reducing blood clotting. Our assumptions were confirmed by the recommendations of the Ministry of Health of Russia (there have been 6 updates of the recommendations for the treatment of COVID-19 since January, 2020). Since recently all patients in Moscow hospitals were obliged to take anticoagulants and immunomodulatory drugs. The identical results in Groups 2 and 3 are explained by the fact that the patients' response to the introduction of immunomodulatory drugs (Group 3) is stronger than in patients receiving PLT(Group 2). The body responds to the introduction of immunomodulatory drugs with a faster production of interferons and the activation of cellular immunity. The body responds to PLT in a slower manner. As a rule, the effect comes slower, and reaches its maximum in 2 weeks. The duration of the effect is longer and lasts up to 3 months. This allows limiting the number of complications and to reduce the severity of the disease. The scheme (PLT + immunomodulatory drugs) has been used for the treatment of acute respiratory infections for the past 5 years. As a rule, in 70-80% of cases, a positive effect was observed. The disease was stopped within 2 or 3 days. We have never used ILBI in these cases. Using ILBI to treat acute infections was a relative contraindication [11]. The clinical picture of COVID-19 failed to fit the picture of regular respiratory infections. It was characterized by high aggressiveness, atypical course and lack of treatment effects. The effect of frosted glass denoting the diffuse interstitial edema of the lung tissue was noted on Computed Tomography (CT) scans. All these indicated a systemic lesion of the lung tissue. ILBI's main systemic effects were reduced blood clotting and improved microcirculation. In this case, these effects could be the main ones in the complex treatment of COVID-19. 
In May, 2020 anticoagulants were introduced in Moscow hospitals. This proved that our choice had been correct. In conclusion I would like to note that the presented material describes a small amount of cases, therefore the results may statistically be unreliable, while a short observation period does not provide long-term treatment results. I would just like to share the experience which was gained in the treatment of COVID-19. I trust this will help other researchers, and can support further improvements aimed at the COVID-19 treatment effectiveness. Conclusion: 1. In the treatment of COVID-19, the use of various types of immunomodulation and anticoagulants proved to be most effective. 2. The combination of ILBI and TLT with immunomodulators proved to be the most effective in the prevention and treatment of COVID-19. 3. Immediate use of immunomodulators at the very beginning of COVID-19 reduces the severity of the disease, and facilitates its course.
v3-fos-license
2021-09-09T20:45:24.655Z
2021-07-30T00:00:00.000
238822432
{ "extfieldsofstudy": [ "Geology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2071-1050/13/15/8525/pdf", "pdf_hash": "e579a7780130d460400aa751d49b259d0defc636", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:322", "s2fieldsofstudy": [ "Environmental Science", "Geology", "Geography" ], "sha1": "8ea38eae6aef1abad264162f73a1f1cddc17760c", "year": 2021 }
pes2o/s2orc
A Preliminary Geothermal Prospectivity Mapping Based on Integrated GIS, Remote-Sensing, and Geophysical Techniques around Northeastern Nigeria : Spatial mapping of potential geothermal areas is an effective tool for a preliminary investigation and the development of a clean and renewable energy source around the globe. Specific locations within the Earth’s crust display some manifestations of sub-surface geothermal occurrences, such as hot springs, a volcanic plug, mud volcanoes, and hydrothermal alterations, that need to be investigated further. The present area of investigations also reveals some of these manifestations. However, no attempt was made to examine the prospectivity of this terrain using the efficient GIS-based multicriteria evaluation (MCE) within the scope of the Analytic hierarchy process (AHP). The integration of remote sensing, Geographic information system (GIS), and other geophysical methods (Magnetic and gravity) was performed to map the promising geothermal areas. Multiple input data sets such as aero-magnetic, aero-gravity, aero-radiometric, digital elevation model (DEM), geological map, and Landsat-8 Operational Land Imager (OLI) data were selected, processed, and use to generate five thematic layers, which include heat flow, temperature gradients, integrated lineaments, residual gravity, and lithology maps. The five thematic layers were standardized and synthesized into a geothermal prospectivity map. The respective ranks and weight of the thematic layers and their classes were assigned based on expert opinion and knowledge of the local geology. This research aims to apply an efficient method to evaluate the factors influencing the geothermal energy prospects, identify and map prospective geothermal regions, and, finally, create a geothermal prospectivity model. Introduction Geothermal energy is one of the significant energy sources contributing substantially to the global sustainable energy supply drive. Several countries globally, such as Turkey, the U.S.A, France, Indonesia, Germany, Italy, etc., derive a substantial part of their electricity from geothermal energy sources [1,2]. The use of geothermal energy for electricity generation could not be detached from the need for a global shift from environmentally harmful "fossil fuel" means of energy generation to more clean and renewable sources of energy. For many decades, Nigeria has been unable to generate enough energy to meet its domestic and industrial needs. Nigeria's current installed electricity generation capacity is put at 6000 MW [3]. It also has a maximum output of 4000 MW that is primarily derived from two primary sources which include: hydro (36%) and gas-fired (fossil fuel) sources with 64% contribution [3]. The fossil fuel means of energy generation that is currently being used has been negatively impacting the safety of this part of the globe. These negative impacts include air pollution, crude oil spillage problems, and gas flaring issues, etc. Hence, there is a need to explore other sources of energy that cause no harm to the Abdel Zaher et al. [11] further evaluated the geothermal resources potentials of Egypt using a GIS-based method. The study employed input data such as the distance to faults, the Bouguer gravity anomaly, the distance to seismic activity, the CPD, the heat flow, and the temperature variation at depth. The study further mapped areas of better prospects for geothermal occurrences. 
Another attempt to study the possible target areas for future geothermal exploration around Sinai Peninsular was performed by Aboud et al. [12] via the estimation of the Curie point depth (CPD) isotherm over the place. The study was also able to compute the CPD isotherm that portrays higher geothermal prospects, especially around the eastern parts of the Egyptian Gulf of Suez. Yalcin and Kilic [13] employed a GIS-based multicriteria decision analysis to study the geothermal resource investigation of the Akarcy basin in Turkey. The study employed several thematic evidence layers and integrated them using the GIS platform to construct a favorability map for its case study area. Noroullahi et al. [14] deduced geological, temperature, and geochemically suitable regions that enabled the evaluation of geothermal favorability of the Akita and Iwate areas of Japan. Moreover, Moghaddam et al. [5] conducted a spatial analysis and multicriteria decision for a regional-scale geothermal favorability mapping using Fry analysis and the weight of evidence technique. The study produced a prospectivity map via spatial integration of thematic data using a Boolean index overlay and Fuzzy prediction modelling. Meng et al. [15] attempted to investigate the geothermal resource potentials around the northeastern Chinese Changbai mountain regions via integrating five thematic maps of temperature distribution, distance to fault position, distance to places of hot springs, and distance to graben's site. A GIS-multi-criteria decision method was used to integrate these thematic maps into a geothermal favorability map for the area. The study revealed areas of high, low, and moderate favorability. Furthermore, Abuzied et al. [16] also conducted a geothermal investigation study around the Gulf of Suez coastal area of Egypt using an integration of remote sensing, GIS, and geophysical techniques. Similarly, Procesi et al. [17] performed geothermal favourability mapping using a geospatial overlay around the Tuscany area of Italy. The study employed several thematic layers and integrated them using a weighted overlay analysis technique in a GIS platform. The study was also able to map new promising areas of geothermal potentials. Yousefi et al. [18] combined geological, geochemical, and geophysical data sets to develop a geothermal resource map of Iran. The study identified 18 promising zones of geothermal occurrences. It further helped guide decision makers on the most suitable locations for future geothermal investigation in the Iranian geological landscape. Calvin et al. [19] used a remote sensing technique to identify geothermally related minerals that help in the geothermal resources exploration of Nevada areas. The study also recommends some promising target areas for subsequent investigation. Nishar et al. [20] demonstrated the capability to use thermal infrared to image or identify the surface geothermal features around the Wairakei-Tauhara geothermal field close to Taupo, New Zealand. The study displayed UAV as an advanced means of geothermal resources mapping. In another study aimed at discovering new geothermal potential zones of Nevada Fish Lake valley using remote sensing data, Littlefield and Calvin [21] employed visible-, near-, and short-wavelength infrared to map geothermal-related mineral occurrences around the Fish Lake area. 
Despite the manifestation of some surface geothermal indicators, such as Wikki Warm spring (32 • C), Ruwan Zafi hot spring (54 • C), Billiri Mud volcanoes, Wuyo-Gubrunde basaltic plugs, Kaltungo basaltic plugs, etc., this particular geological terrain had been neglected in terms of geothermal studies by geoscientists and other stakeholders in the energy sector. The only available studies conducted on this locality were the ones that involve the separate use of either geophysical or geological data to study the heat flow, structural pattern, and surface pattern of geological indicators of geothermal occurrences. These include the studies by Obande et al. [22], Ayuba and Nur [23], Aliyu et al. [24], Anakwuba and Chinwuka [25], Kasidi and Nur [26], and Abraham et al. [27], who used lower resolution airborne magnetic data in the study of the Curie point depths (CPDs) and the heat flow pattern of a tiny part of either the Gongola basin or the surrounding sedimentary basins. Others include Epuh et al. [28], who conducted a multivariate statistical analysis of gravity data around the Gongola basin to explore hydrocarbon. Moreover, Epuh and Joshua [29] used gravity data over the Gongloa basin to evaluate the depth of basement interphase and sedimentary thickness. Deeper depth values were found using gravity modeling conducted in the area. Seismic modeling was also done to determine the seismic velocity using the depth-normalized velocity iteration technique. Moreover, Epuh and Joshua [30] used gravity data to analyze the structural configuration in the Gongola basin for possible hydrocarbon exploration. The study revealed the presence of NE-SW and NW-SE structural trends. Bassey et al. [31] conducted an interpretation of gravity data over the same Gongola basin with the aim of understanding the hydrocarbon prospectivity of the area. the study established the structural configurations as well as revealed the areas of greater sediments thickness that could serves as an area of better prospectivity. The presence of horst and graben structural configuration over Gongola basin was established in a study by Shammang et al. [32] using gravity data. Other studies that used geological information such as hot/warm springs, rock distributions, mud pods, and structural distributions that revealed geothermal indicators include [33][34][35], among others. An overview of the various studies carried out around this important terrain shows that the very effective multicriteria evaluation was never applied to partially integrate all the available geoscientific data that partially impact the geothermal occurrences in this area. Hence, a GIS-based multicriteria evaluation approach for geothermal prospectivity mapping was employed in the present work to have a better decision guide on areas displaying higher geothermal prospects in this region. This approach used the Analytic hierarchy process (AHP) to better view the geothermal opportunities of this region. Hence, this study, being the first of its kind in the entire Nigerian terrain to integrate multiple geoscientific data using a GIS platform for regional-scale geothermal prospectivity mapping, thus provides or serves as a baseline study upon which a further detailed geothermal prospectivity analysis can be carried out by concentrating on the most significant areas that were reported to be "highly prospective" in the present study. It also serves as the first GIS-based Multi-criteria study that included radiogenic heat sources among its evaluation criteria. 
Therefore, it provided a prospectivity map that is all-encompassing and highly reliable. The principal objective of the current research is to employ an efficient method to appraise the factors controlling the geothermal energy prospects, identify and map prospective geothermal zones, and finally construct a geothermal prospectivity map of the present area.
Description of the Study Area
The present study area spans longitudes 10°00′ E to 12°00′ E and latitudes 9°30′ N to 11°30′ N. It lies within the crystalline and sedimentary geological terrain popularly known as the Gongola sub-basin of the Benue rift (in Nigeria) and its adjoining basement and volcanic rock outcrops (Figure 1A). The Gongola basin is a sub-basin forming part of the intra-continental Benue basin of Nigeria. It lies in a north-south direction, covering about 15,000 km². Surrounding the Gongola sub-basin towards its western regions are exposures of crystalline basement (Figure 1B). Moreover, towards the east are exposures of basement and volcanic rocks popularly called the Gubrunde horst and the Biu basalt, respectively. The present area (Northeastern Nigeria) also reveals some surface manifestations of geothermal occurrences, such as the Wikki warm spring, the Ruwan Zafi hot spring, the volcanic rock outcrops around Billiri, Kaltungo, Wuyo-Gubrunde, and the Lunguda basalts, as well as the mud volcanoes around Billiri town (Figure 1B). Sediment deposition within the Benue trough spans from Albian to Tertiary times [36,37]. The oldest sedimentary rock found within the Gongola sub-basin is the continental (braided/lacustrine/alluvial fan) Bima Formation. The formation comprises sandstones, shale, mudstones, and clays. The Bima Formation was overlain by the transitional (barrier island/deltaic) Yolde Formation during upper Albian to Cenomanian times. It consists of cross-bedded sandstone, shale, and claystones, and it records a transition period within which a change from a continental to a marine environment occurred. The final transformation from the continental to the marine environment resulted in the deposition of an entirely marine (offshore/estuarine) rock formation known as the Pindiga Formation [37]. Deposition of the Pindiga Formation lasted from upper Cenomanian to middle Santonian times [37]. The Pindiga Formation consists of black shale, sandstone, and siltstone. A major orogeny occurred during Santonian times, and this orogeny affected the entire length and breadth of the Benue trough. It resulted in the folding and uplift of all the sediments pre-dating its occurrence [38]. The folding and uplift of the sediments during Santonian times resulted from a reorganisation of the global tectonic plates [38]. Post-Santonian sediment deposition in the Gongola basin led to the deposition of the continental (fluvial) Gombe Formation during Campanian to Maastrichtian times [37]. The Gombe Formation comprises sandstones, siltstones, ironstones, coal, and mudstones [37].
During the Tertiary period, the continental (fluvial/lacustrine) Keri-Keri Formation was deposited over the Gombe Formation [37]. The deposition of the Tertiary Keri-Keri Formation occurred in a continental setting [37]. The formation consists of sandstone, shale, and clays. The Precambrian basement rocks surrounding the western parts of the Gongola sub-basin (Figure 1B) include migmatitic gneisses, banded gneisses, and medium- to coarse-grained biotite hornblende granites. Others include charnockites and ignimbrites (Figure 1B). The following rock types are found in the eastern parts of the Gongola sub-basin: basalts and porphyritic biotite hornblende granite. The evolution mechanism of the entire Benue trough has been a subject of debate. There are two models regarding the evolution of this mega-structure: the rift system model and the pull-apart basin model [39]. Several models aimed at explaining its formation were presented under the rift system model. These models consider the Benue trough to have formed as a direct consequence of the separation of the African and South American continents, the mechanism that led to the formation of the Atlantic Ocean. The basin is considered one of the three arms of a triple junction that failed after initiating at a location beneath the present-day Niger Delta area. A tensional movement mechanism was proposed as the leading cause of the occurrence of the Benue rift structure [40]. This model was supported by geological and gravitational evidence [40]. However, the existence of an RRF-type triple junction below the present-day Niger Delta basin was proposed by [41]. An RRF triple junction further implies the presence of an active spreading ridge [41]. The Benue trough was considered the failed arm of this triple junction, which formed an aulacogen [42]. The "pull-apart" model, provided by [39], holds that wrenching was the major tectonic process that led to the evolution of the Benue trough. This model was based on geological and geophysical studies that show transcurrent faults as the dominant structures, rather than the normal faults associated with the rift system proposed by the earlier model.
Materials and Methods
An integrated approach using GIS, remote sensing, and geophysical data was adopted to appraise the geothermal prospects of this part of Nigeria. The research workflow pattern is summarised in Figure 2.
Materials
The resources (materials) used for the current study include the enhanced-resolution airborne magnetic data, airborne radiometric and gravity data, the Shuttle Radar Topography Mission digital elevation model (SRTM-DEM), Landsat-8 (OLI) data, and geological maps. The airborne magnetic, radiometric, and gravity data were obtained from the Nigerian Geological Survey Agency (NGSA). Fugro-air services acquired this set of data on behalf of the Nigerian Government between 2004 and 2009. The airborne magnetic, radiometric, and gravity data were acquired with flight line intervals of 500 m and a flight height of 80 m, which indicates a higher resolution compared to the former 1975-1978 data, whose flight height of data acquisition was 2 km, with tie line intervals of 2 km following an NE-SW trend. The 500 m flight line intervals and the 80 m flight height, compared against the 2 km flight height used for the 1975-1978 set of these potential field data, show a significant increase in the resolution of the data. The SRTM-DEM and the Landsat-8 data were obtained around April 2019 from www.earthexplorer.usgs.gov in smaller grids, with each grid having a resolution of 30 m. The downloaded SRTM-DEM and Landsat-8 (OLI) data were then mosaicked into single larger units using ArcGIS and ENVI software. The Landsat data were subjected to several digital image processing techniques, such as directional filtering, to enhance and map structural features based on tone, texture, and shape in the images. Several corrections were applied to the aeromagnetic, radiometric, and aero-gravity data by the acquisition company; these include the International Geomagnetic Reference Field (IGRF) correction using the 2010 theoretical model. Other corrections, such as temporal and diurnal corrections, were also applied by the company to the magnetic data before its release for use. Further processing applied to the magnetic data in the present study included the reduction-to-equator correction, which realigns the data over their causative sources. This correction was performed to remove the bipolar anomaly effect that tends to displace the magnetic signal away from its source. Other processing applied to both the magnetic and gravity data included horizontal derivative filtering and analytic signal computations to establish both structural and lithologic information resulting from density and magnetic susceptibility contrasts. The radiometric data used had earlier been preprocessed by Fugro-aero services before their subsequent processing. The primary purpose of radiometric data preprocessing is to correct the observed radiometric data for influences that are unrelated to geological sources. Most of these preprocessing techniques were applied by the acquisition company to ensure the quality control of the acquired data. These include merging signals from different sources, validating the measured data, and checking for missing or spurious data values. The radiometric data were then subjected to further corrections, including terrain clearance variation correction and general background rate correction, which accounts for non-geologic sources such as atmospheric radon, instrument background, and cosmic rays. Other corrections applied to the data in the field include data labeling, spectral smoothing, and height correction.
The conventional method of collecting and processing gamma radiation data is to observe 3 to 4 reasonably wide spectral windows. The decay of potassium (40K) is accompanied by the emission of gamma radiation at 1.46 MeV, while uranium and thorium decays show a corresponding release of radiation at 1.76 MeV and 2.62 MeV, respectively. Thorium occurs in higher amounts compared to the other two radio-element concentrations [43]. The gravity data were corrected against the International Gravity Standardization Net (IGSN, 1971). The Bouguer gravity data obtained were then subjected to regional-residual separation using low-pass and high-pass filters with a cut-off distance of 200 km.
Structural Lineaments Mapping
Structural lineament distribution, distribution density, and interconnectivity play a significant role in providing a passage through which hot molten rock material (magma) found within the deeper parts of the crust is transported to the Earth's surface [31]. This process enables heat transmission from the magma chamber to the Earth's surface via convection [44]. It accounts for the high heat flow usually recorded along outcrops of younger volcanic rocks. Surface lineament mapping was conducted on both the SRTM-DEM and the Landsat-8 data by exporting each of the images into Global Mapper for lineament extraction. The linear features were then mapped manually based on variations in tone, texture, and other geomorphological characteristics of the images. The extracted lineaments were exported to the ArcGIS environment, where the lineament distribution contour map and the lineament density maps were created using the kernel density algorithm [45]. Similarly, the airborne magnetic data were also used in sub-surface lineament mapping. The total magnetic field grid that was earlier reduced to the magnetic equator was subjected to regional/residual isolation and a horizontal derivative filtration technique using the "MAGMAP" tool of the Geosoft Oasis montaj software. This was performed using the expression below to ensure that magnetic linear features (faults, fractures, etc.) are enhanced [46]. The expression is as follows:
∂T/∂H = √[(∂T/∂X)² + (∂T/∂Y)²] (1)
where ∂T/∂H is the total horizontal derivative, ∂T/∂X is the derivative with respect to the x-direction, and ∂T/∂Y is the derivative with respect to the y-direction. The horizontal derivative map produced was later imported into Global Mapper and subsequently into ArcGIS for lineament extraction. The magnetically derived lineament distribution contour map and lineament density map were generated following the same kernel density approach mentioned earlier. The composite lineament distribution maps derived from the Landsat-8, SRTM-DEM, and magnetic data were subjected to the calculation of their Euclidean distances in order to evaluate the relationship (proximity) between them and the surface geothermal indicators [13]. The resulting individual Euclidean distance image maps were then merged into a single composite structural lineaments map for the study area (Figure 3A) using the spatial analyst tool of the ArcGIS environment.
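As a rough illustration of the horizontal-derivative enhancement in Equation (1), the total horizontal derivative of a gridded field can be approximated with finite differences. The toy grid and 500 m cell size below are placeholders for the actual aeromagnetic data, and the sketch does not reproduce the MAGMAP workflow itself.

```python
import numpy as np

def total_horizontal_derivative(grid, dx, dy):
    """THD = sqrt((dT/dx)^2 + (dT/dy)^2) for a gridded field T.

    grid: 2-D array of field values (rows = y, columns = x)
    dx, dy: cell sizes in the x and y directions (survey units)
    """
    dT_dy, dT_dx = np.gradient(grid, dy, dx)   # central differences along each axis
    return np.hypot(dT_dx, dT_dy)

# Toy residual magnetic anomaly (nT) on a 500 m cell; ridges (maxima) of the
# THD grid would be traced as candidate sub-surface lineaments.
y, x = np.mgrid[0:50, 0:50] * 500.0
toy_field = 80.0 * np.exp(-((x - 12000.0) ** 2 + (y - 15000.0) ** 2) / (2 * 3000.0 ** 2))
thd = total_horizontal_derivative(toy_field, dx=500.0, dy=500.0)
print(f"maximum THD on the toy grid: {thd.max():.4f} nT/m")
```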
Curie Point Depth/Geothermal Gradient Estimation
The airborne magnetic data obtained from NGSA were used in the estimation of the Curie point depth (CPD) within the study area. The Curie point depth indicates the depth at which the dominant magnetic minerals within the crustal rocks lose their magnetism as the temperature rises beyond 580 °C [47,48]. Calculating it is therefore very significant in geothermal studies, as it reveals that thinner areas of the crust have high geothermal gradients and emit a large amount of heat [16]. The CPD estimation was performed on the previously generated upward-continued residual magnetic anomaly map of the study area using the centroid depth method of spectral analysis [49][50][51], as expressed below:
Zb = 2Zo − Zt (2)
where Zb is the depth to the bottom of the magnetic sources, Zo is the depth to the centroid of the magnetic sources, and Zt is the depth to the top of the magnetic sources. The temperature gradient (∂t/∂Z) was computed as shown in Equation (3). It was used as another important parameter (thematic layer) for the determination of the geothermal potential of the area [16]. Equation (3) is expressed as follows:
∂t/∂Z = θ/Zb (3)
where θ is the Curie temperature, taken to be 580 °C. The temperature gradient map (Figure 4) that was computed was then assigned an evidence (thematic layer) map weight value (Table 1) obtained through the analytic hierarchy process (AHP) employing expert opinion. The assigned weight value was given based on the apparent influence of the temperature gradients on the local geothermal occurrence of the area. The temperature gradient map generated was further classified into five layers (classes) (Table 1).
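The arithmetic behind Equations (2) and (3) is compact enough to sketch directly; the centroid and top depths below are illustrative stand-ins for values estimated from the radially averaged power spectrum of a data window.

```python
def curie_point_depth(z_centroid_km: float, z_top_km: float) -> float:
    """Depth to the bottom of magnetic sources, Equation (2): Zb = 2*Zo - Zt."""
    return 2.0 * z_centroid_km - z_top_km

def temperature_gradient(z_bottom_km: float, curie_temp_c: float = 580.0) -> float:
    """Geothermal gradient in deg C/km, Equation (3), with the Curie isotherm at Zb."""
    return curie_temp_c / z_bottom_km

# Illustrative spectral depths for one window of the residual magnetic grid
z_o, z_t = 8.5, 1.2                    # km: centroid and top depths
z_b = curie_point_depth(z_o, z_t)      # 15.8 km
print(f"CPD = {z_b:.1f} km, gradient = {temperature_gradient(z_b):.1f} deg C/km")
```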
Geological Maps
The geological map of the study area was carved from the larger geological map of Nigeria published by NGSA in the year 2009 (Figure 5A,B). It was used to study the different rock units covering the study area. Before integrating this map with the other evidence maps using GIS, it was first assigned an evidence map layer value (Table 1) considered proportionate to the rocks' degree of impact on the occurrence of geothermal resources in an area. The map was then classified further into three (3) main lithology groups (Figure 5A,B) using the analytic hierarchy process (AHP) in ArcGIS software. The layer weight value (Table 1) was assigned according to each layer's degree of influence on geothermal resource occurrence. The lithological groups produced comprise plutonic, volcanic, and sedimentary rock groups. Arranged in order of their degree of impact on geothermal occurrences, from high to low, they are the volcanic, plutonic, and sedimentary rock groups (Table 1).
Gravity Data Processing
The airborne gravity data acquired from the Geological Survey Agency of Nigeria (NGSA) were used in generating the Bouguer gravity anomaly map for the current study area. They were further used in the computation of the analytic signal map and the residual gravity anomaly map. The Bouguer gravity anomaly map generated helped reveal the subsurface rock blocks based on their density contrast with the country rocks. Other information disclosed by the Bouguer gravity anomaly map includes the estimated thickness of the sedimentary pile and the extent and geometry of the basin [52]. The residual gravity anomaly map produced was also given a weight value corresponding to its considered level of influence on geothermal occurrence using the pair-wise comparison technique in the AHP. Gravity data can significantly help reveal the subsurface lithology, such as hidden plutons, as well as structures that serve as areas of geothermal interest and as zones of weakness enabling the transmission of hot molten magma to the Earth's surface [4,53]. Similarly, the gravity sub-classes derived from the reclassification of the gravity anomaly map (Figure 6A,B) were also assigned an evidence layer weight value (Table 1) based on their assumed level of impact on the geothermal occurrences following a pair-wise comparison method [15].
Radiogenic Heat Production
As stated earlier, the airborne radiometric data used for the present study were obtained from NGSA. They comprise equivalent-uranium (eU), equivalent-thorium (eTh), and percentage-potassium (%K) radio-element grids. These data grids were earlier subjected to basic filtering and processing by the acquisition company before their release for research purposes. The primary purpose of using the radiometric data in the current study is to estimate the amount of radiogenic heat production in the present area of focus. This was achieved via the use of the relation provided by [54]. The relation is as expressed below:
RHP (µW/m³) = ρ(0.0952 CU + 0.0256 CTh + 0.0348 CK) (4)
where CU is the uranium concentration derived from the equivalent uranium concentration map, CTh is the thorium concentration obtained from the equivalent thorium concentration map, CK is the percentage potassium (%K) derived from the percentage potassium map, and ρ is the average density of the rocks within each block of the study area. RHP stands for the total radiogenic heat production for an area, derived from the individual contributions of the three radio-elements as captured in the above equation by [54]. The radiogenic heat derived from each block covering the study area was combined using Geosoft, and a total radiogenic heat map for the study area (Figure 7A) was generated. The RHP map (data) was considered in light of the perceived influence it has on the occurrence of geothermal resources.
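Equation (4) can be evaluated cell by cell on the radio-element grids. The sketch below assumes eU and eTh are in ppm, %K in percent, and the average density in g/cm³ (an assumption consistent with the magnitude of the coefficients quoted above); the small arrays stand in for the real rasters.

```python
import numpy as np

def radiogenic_heat_production(e_u_ppm, e_th_ppm, k_percent, density_g_cm3=2.7):
    """Radiogenic heat production (µW/m^3) after Equation (4).

    RHP = rho * (0.0952*C_U + 0.0256*C_Th + 0.0348*C_K)
    e_u_ppm, e_th_ppm, k_percent: radio-element grids; density_g_cm3: assumed
    average rock density of the block.
    """
    return density_g_cm3 * (0.0952 * e_u_ppm + 0.0256 * e_th_ppm + 0.0348 * k_percent)

# Toy 2x2 windows of the radiometric grids
e_u = np.array([[2.1, 3.4], [5.8, 1.2]])     # ppm
e_th = np.array([[9.5, 14.2], [22.3, 6.1]])  # ppm
k = np.array([[1.3, 2.0], [3.1, 0.8]])       # %
print(radiogenic_heat_production(e_u, e_th, k))
```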
Similar to the other previously mentioned evidence maps, the computed RHP map was also apportioned an evidence map rank value based on its considered relative level of control on geothermal occurrence. This map was further divided into five sub-classes based on the same principle of its degree of impact on geothermal resource occurrence (Table 1). The assigned weight values for each of the evidence layers were obtained using the pair-wise comparison method.
Data Standardization
Processing of the various data types before their integration using the weighted overlay (W.O.) method is essential, as these data types come in different formats, such as lines (e.g., faults and fractures) and rasters (e.g., the computed temperature gradient, gravity, and heat flow maps), and therefore need to be brought into a consistent form [15]. Hence, they were converted, homogenized, and reclassified using a uniform scale of reference. The minimum and maximum (min-max) data standardization method was adopted, as it was considered most appropriate to this kind of study objective. The standardized pixel value xij was fixed between 0 and 1. For instance, the shallow sub-surface temperature gradient requires the maximization form, while the distance to structural lineaments (proximity to faults and fractures) requires the minimization form. The maximization form is as shown below:
xij = (xij − minj)/(maxj − minj)
However, the minimization form required for thematic layers such as distance to lineaments (faults, fractures, etc.) is as provided below:
xij = (maxj − xij)/(maxj − minj)
where minj and maxj are the minimum and maximum raw values of thematic layer j.
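A minimal sketch of the min-max standardization just described, with a `benefit` switch selecting the maximization form (e.g. temperature gradient) or the minimization form (e.g. distance to lineaments); the arrays are placeholder values.

```python
import numpy as np

def standardize(layer, benefit=True):
    """Min-max rescaling of a thematic raster to [0, 1].

    benefit=True  : higher raw values are more favourable (maximization form)
    benefit=False : lower raw values are more favourable (minimization form)
    """
    lo, hi = np.nanmin(layer), np.nanmax(layer)
    scaled = (layer - lo) / (hi - lo)
    return scaled if benefit else 1.0 - scaled

grad = np.array([[18.0, 25.0], [40.0, 31.0]])        # deg C/km
dist = np.array([[1200.0, 300.0], [5000.0, 50.0]])   # metres to nearest NE-SW lineament
print(standardize(grad, benefit=True))
print(standardize(dist, benefit=False))
```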
Weighting
There are several methods of assigning weight to thematic map layers. These include the ranking method, the rating method, the trade-off analysis method, and the pair-wise comparison method [13]. The pair-wise comparison method was used in assigning relative weight values to the different criteria (thematic layer classes) used in the present study. A comparison of two or more evaluation indexes can be performed to establish their relative significance, as explained by [15]. Table 2 below shows an example of the AHP grading pattern (after [15]). A model created to compute the relative importance weight for each of the individual indices was used. The weighting process was also subjected to a consistency test, in which a comparison within the same thematic layers was performed via a judgment matrix created using the scale shown below [15,55]:
A = (dij)n×n
where dij (i, j = 1, 2, 3 . . . n) denotes the ratio of significance of criterion i to criterion j. The weight is then computed via a square root method [15]. The pair-wise method was adopted because the results obtained are controlled by an associated consistency ratio (C.R.) that guides the weighting process. The relation for the computation of the consistency index is given as:
CI = (λmax − n)/(n − 1)
where n denotes the judgment matrix order and λmax represents the greatest eigenvalue of A; the consistency ratio is C.R. = CI/RI, where RI is the random consistency index. If the C.R. found after the layer comparison is less than 0.1, then a high degree of reliability is obtained; as such, a reliable comparison is achieved [15]. However, if the C.R. is greater than 0.1, then an unreliable (unacceptable) level of consistency is obtained; as such, the comparison needs to be revisited (re-compared). Table 1 shows the selected thematic layer classes, their assigned weights, and the consistency ratio (C.R.).
Weighted Overlay (W.O.) Technique
This technique was applied in the spatial integration of the multiple data sources (input data) to generate a geothermal prospectivity map of the current area of research. The method is performed as a series of ordered operations on the input data in the ESRI ArcGIS environment. Moreover, applying a standard scale of values across all the different input data is essential for effective integration. The following stages were followed while performing the weighted overlay analysis: (a) identification and selection of layers (input data) with varying geothermal influences; (b) preparation of the data into a grid format and subsequent reclassification using a uniform scale of reference; (c) allocating a weight to each of the reclassified data grids; (d) multiplying each reclassified grid layer (Rec_GRID) by its allocated influence (weight), which gives the significance of the layer in the generated model [17]. The individual cell values obtained were then summed up to get the final resultant (output) grid (Res_GRID) using the equation below:
Res_GRID = Σi (Wi × Rec_GRIDi)
where Wi is the weight (influence) allocated to the i-th reclassified layer. All of the above-mentioned thematic (evidence) maps (structural lineaments, geology, temperature gradients, heat flow, and gravity data) and their associated sub-divisions (classes) that were earlier assigned a layer weight value were subsequently integrated into the geothermal prospectivity (favorability) map. The computed geothermal potential map was also validated against the field distribution of surface evidence of geothermal occurrences such as hot springs, basaltic plugs, and mud volcanoes reported in the work of [34].
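The pair-wise comparison and consistency test can be illustrated as follows. Weights are approximated with the row geometric-mean ("square root") method mentioned above and checked with CI = (λmax − n)/(n − 1) and C.R. = CI/RI. The 5 × 5 judgment matrix is hypothetical and does not reproduce the weights in Table 1; the RI values are the standard Saaty random indices.

```python
import numpy as np

RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(judgment):
    """Geometric-mean (row 'square root') approximation of the AHP priority vector."""
    gm = np.prod(judgment, axis=1) ** (1.0 / judgment.shape[0])
    return gm / gm.sum()

def consistency_ratio(judgment, weights):
    """C.R. = CI / RI with CI = (lambda_max - n) / (n - 1)."""
    n = judgment.shape[0]
    lam_max = np.mean((judgment @ weights) / weights)
    ci = (lam_max - n) / (n - 1)
    return ci / RANDOM_INDEX[n]

# Hypothetical comparison of the five thematic layers:
# [temperature gradient, radiogenic heat, gravity, lineaments, lithology]
A = np.array([
    [1.0, 2.0, 3.0, 3.0, 4.0],
    [1/2, 1.0, 2.0, 2.0, 3.0],
    [1/3, 1/2, 1.0, 1.0, 2.0],
    [1/3, 1/2, 1.0, 1.0, 2.0],
    [1/4, 1/3, 1/2, 1/2, 1.0],
])
w = ahp_weights(A)
print(np.round(w, 3), "C.R. =", round(consistency_ratio(A, w), 3))

# The accepted weights would then scale the standardized layers in the overlay:
# Res_GRID = sum(w[i] * Rec_GRID[i] for i in range(len(w)))
```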
Results
After integrating the multiple data sources (magnetic, gravity, radiometric, DEM, Landsat-8, and lithology data) using GIS techniques, the results are presented in the form of map plots.
Integrated Structural Lineaments Map
Visual inspection of the integrated structural lineaments map of the study area (Figure 3A,B) shows that the structures are distributed in N-S, NE-SW, NW-SE, and E-W patterns. However, field studies revealed that most of the surface geothermal occurrences (manifestations), such as warm springs (the Wikki warm spring and the Ruwan Zafi warm spring), mud volcanoes, and volcanic rock outcrops, are situated along a NE-SW structural pattern. This shows the influence of these NE-SW structural lineaments on the emplacement and manifestation of these surface geothermal indicators. Hence, attention was given to the NE-SW structural lineament pattern when evaluating the Euclidean distances. Moreover, the density of the lineament distribution, as revealed in Figure 3B, shows a high concentration (density) of lineaments around the granitic and volcanic rock exposures found around Wuyo, Deba, Kaltungo, Gombe, Bajoga, Dukku, and the southwestern parts of Alkaleri town (Figure 3B). The numerous NE-SW lineaments and the areas associated with surface manifestations of geothermal occurrences could be the channels through which heat within the Earth's crust is transmitted to the surface [53,56].
Temperature Gradients
The temperature gradient information (map) for the study area was derived from the calculated CPD values. The relationship between the Curie point depth (CPD) and the temperature gradient/heat flow parameters has been provided by many authors, e.g., [11,12,22,49-51,57-59], among several others. The relationship between the two parameters was found to be inversely proportionate in form. It implies that shallow CPD areas usually transmit high heat flow as well as record high temperature gradients. Therefore, estimating the temperature gradient of a place provides very vital information on the possibility of high geothermal occurrence [60]. The temperature gradient map of the study area (Figure 4A,B) reveals places with very low, low, moderate, and high values. Very low temperature gradient values are recorded around the extreme northwest, northeast, southeast, and extreme southern parts of the study area (Figure 4A,B). Other low temperature gradient zones are found around the western and central parts of the map (near Nafada, Bajoga, Gombe, and western Pindiga town). Areas portraying low temperature gradients (pinkish-colored parts) are found around the central, western, northeastern, and southeastern regions. The moderate- to high-temperature gradient zones, represented by the blue- and brown-colored anomalies, are found around Darazo, Wuyo, Deba, Kumo, Kaltungo, Tula, Alkaleri, and locations around the northern and eastern parts of Nafada town (Figure 4A,B). The high temperature gradient areas show some relative degree of conformity with the lithological outcrops of the study area, as they are mostly found around the granitic, basaltic, and (in some instances) sedimentary rocks. The high temperature gradient values recorded also agree with the locations of structural lineaments oriented in a NE-SW pattern. As such, the high temperature gradients could be attributed to the fracture-related lineament pattern, which could provide the channels for the transmission of heat from the deeper parts of the Earth's crust to the surface.
Residual Gravity Anomaly and Lithology Maps
Careful study of the lithology map (Figure 5A,B) and the residual gravity anomaly map (Figure 6B) shows the distribution of sub-surface plutons as well as their volcanic exposures. These rock outcrops were distinguished by their different density contrasts, which enables the mapping of lithology based on variations in density. Other features revealed by the gravity anomaly map are the structural patterns, which also help delineate areas of high geothermal prospectivity. Analysis of these two maps (Figure 6A,B) shows high to very high prospects for geothermal occurrence around Wuyo, Deba, Bajoga, Kaltungo, Tula, northern Misau, and the area around Alkaleri town (Figure 6B). The very high geothermal prospects recorded at these locations could be attributed to the occurrence of very dense granitic and volcanic rocks in these areas. For instance, the exceptionally high-prospect class that spans from Bajoga down to the Tula area (Figure 6B) in an N-S and NE-SW pattern coincides with exposures of medium- to coarse-grained biotite hornblende granites, Tertiary basalts, porphyritic biotite hornblende granites, banded gneiss, the Bima Formation, and the Yolde and Gombe Formations. Other regions that portray high to very high prospects based on density contrasts derived from the residual gravity anomaly map are the locations around Alkaleri and Misau towns, in the western and northern parts, respectively (Figure 6B). These two high-density (highly prospective) zones coincide with exposures of medium- to coarse-grained biotite hornblende granites, charnockites, migmatite gneiss, and banded gneisses.
Radiogenic Heat Flow
The radiogenic heat production (RHP) map (Figure 7A) of the study area is one of the essential decision criteria employed to evaluate the geothermal prospects of the study location. As stated earlier, the RHP map was derived using Equation (1) provided by [54]. Visual study of this map shows two prominent areas of higher RHP values: the Wuyo-Gubrunde horst, located in the eastern part of the study area, and the Alkaleri basement complex rocks, situated in the western part (Figure 7A). Moreover, earlier geological and geochemical studies of rocks around the northeastern part of Nigeria [61-67] reported anomalous uranium concentrations around the Wuyo-Gubrunde horst. The high radiogenic heat production recorded in the present work therefore conforms to the uranium enrichment previously reported by these authors using geochemical methods.
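Equation (1) of [54] is not reproduced in the text, so the short sketch below illustrates only the general form of such radiogenic heat production estimates, using the commonly cited Rybach-type relation based on gamma-ray spectrometric concentrations of U, Th, and K. Whether this is the exact relation applied in the study is an assumption, and the density and concentration values are placeholders.

```python
# Illustrative sketch of radiogenic heat production (RHP) from radiometric data,
# using a commonly cited Rybach-type relation (assumed here; not necessarily the
# exact Equation (1) of the study):
#   A [uW/m^3] = 1e-5 * rho * (9.52*C_U + 2.56*C_Th + 3.48*C_K)
# with rho in kg/m^3, C_U and C_Th in ppm, and C_K in weight %.

def radiogenic_heat_production(rho_kg_m3, c_u_ppm, c_th_ppm, c_k_pct):
    """Radiogenic heat production in microwatts per cubic metre."""
    return 1e-5 * rho_kg_m3 * (9.52 * c_u_ppm + 2.56 * c_th_ppm + 3.48 * c_k_pct)

# Hypothetical granite-like values (placeholders, not measured data from this work).
print(round(radiogenic_heat_production(2670.0, 4.0, 15.0, 3.5), 2), "uW/m^3")
```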
Discussion
Multicriteria evaluation (MCE) is an effective tool for solving decision-making problems through the integration of multiple input data [68]. The main goal of using the MCE technique is to examine selected probabilities based on multiple evidence layers. Hence, the use of MCE on a GIS platform helped in the classification, examination, and reorganization of the available data with respect to the choice probabilities for planning. The analytic hierarchy process (AHP) is a multiple-criteria decision method that can assist a decision maker when confronted by numerous, complex, conflicting, and subjective layers [68]. The five thematic layers of temperature gradient, radiogenic heat, gravity, lithology, and structural lineaments were generated, processed, and integrated to obtain a favorability map of the research area. Moreover, the MCE approach used in the present study has several advantages, including the processing of extensive data spanning a large area of land, the investigation of present and prospective geothermal fields, and the attainment of a more organized and more precise outcome by narrowing the target zone. The geothermal favorability map (Figure 8) generated after integrating the various input (thematic) layers shows areas of variable degrees of favorability. Four main favorability classes were delineated: low, moderate, high, and very high. The low potential class (50-102) is represented by the blue-colored anomalies that extend from the southwest to the north/northeastern parts of the study area; this anomaly is located mainly within the Cretaceous sedimentary sections of the research area. Another prominent low potential anomaly occurs around the extreme southeastern part, near the Dadiya and Lamurde anticlinal areas. The intermediate geothermal potential class is represented by pale green-colored anomalies that range from 102 to 141 on the geothermal favorability map (Figure 8). These anomalies are found mainly around the central parts of the map and can also be seen, together with yellow-colored anomalies, near the eastern and western parts of the map, around the towns of Giade, Misau, Darazo, Bajoga, Deba, Kumo, Kaltungo, and the northern part of Nafada town. A very high geothermal potential class (183-267) is found in the map's extreme western and eastern sections. The field information from these two locations reveals separate exposures of granitic and basaltic plugs around the areas of Alkaleri, Wuyo-Gubrunde, Billiri, and Kaltungo. Moreover, the basaltic plugs found between the Wuyo and Gubrunde regions tend to be oriented along the study area's dominant NE-SW structural trend, suggesting structural control [33]. The high to very high geothermal potential class (orange-pinkish colored anomalies) also agrees with the field distribution of the following geothermal indicators: the Wikki warm spring (32 °C), the Ruwan-Zafi hot spring (54 °C), the mud volcanoes around the Billiri and Kaltungo areas, and the distribution of volcanic rocks near the Billiri-Kaltungo areas. The occurrence of these surface geothermal indicators within the moderate to very high potential classes helps validate the prospectivity map created.
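To illustrate how composite overlay scores are binned into the favorability classes quoted above, a minimal sketch is given below. The class breaks (50, 102, 141, 183, 267) are taken from the ranges quoted in the text; that the high class occupies the 141-183 interval is inferred rather than stated and should be treated as an assumption, and the sample scores are placeholders.

```python
import numpy as np

# Class breaks from the favorability map description; the 141-183 "high" interval
# is inferred from the neighbouring classes and is an assumption.
BREAKS = [50, 102, 141, 183, 267]
LABELS = ["low", "moderate", "high", "very high"]

def classify(score_grid: np.ndarray) -> np.ndarray:
    """Map composite overlay scores to favorability class indices 0..3."""
    return np.digitize(score_grid, BREAKS[1:-1], right=False)

scores = np.array([[60, 120], [150, 200]])    # placeholder composite scores
for s, i in zip(scores.ravel(), classify(scores).ravel()):
    print(s, "->", LABELS[i])
```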
The current geothermal prospectivity map shows the regions with greater possibility of geothermal energy occurrence, indicating that the Wuyo-Gubrunde and Billiri-Kaltungo areas display a very high potential for geothermal occurrence, represented by an extensive NE-SW oriented anomaly that coincides with many regional structural lineament patterns [69]. The anomaly spans numerous towns, such as the Billiri-Kaltungo area, Tula, Deba, Wuyo, and the eastern part of Bajoga, and shows multiple field exposures of granites, basalts, and mud volcanoes. The distribution of these surface manifestations along regional tectonic stress areas points to their relationship with the formation mechanism of the Benue rift itself. Furthermore, another high geothermal potential anomaly occurs near Alkaleri and the western parts of Darazo town, where the lithologic outcrops show the presence of older and younger granitic rocks and where the Wikki warm spring is located; this supports the correlation between the geology, the tectonic structural features, and the areas of shallower CPD anomalies/high temperature gradients [70]. Hence, the sections of the study area displaying shallow CPDs/high temperature gradients are ascribed to the crustal thinning, magmatic upwelling, and fracturing and faulting processes that occurred during the development of the Benue basin in the Jurassic period [36]. The majority of the areas portraying moderate to high temperature gradients agree wholly or partially with the study area's basement/volcanic rock exposures. For instance, the anomalies close to Alkaleri coincide with outcrops of migmatite gneiss and banded gneiss. Furthermore, anomalies found near parts of Darazo also conform with rock outcrops of biotite hornblende granites, ignimbrites, and the fringe of the Keri-Keri Formation. Wuyo town and its environs, characterized by exposures of basalts, banded gneiss, biotite hornblende granites, and porphyritic biotite hornblende granites, also conform to moderate to high temperature gradient anomalies. The areas of Nafada, Deba, and parts of Tula demonstrate a moderate level of conformity with their respective lithological outcrops. Nevertheless, a significant section of the regions showing low to very low temperature gradients corresponds to the sedimentary rock exposures of the Pindiga, Gombe, and portions of the Keri-Keri Formations of the Gongola basin, which are less conductive. An earlier study [71], which used bottom hole temperature data from the nearby Bornu basin, reported temperature gradients ranging from 30 to 40 °C/km, with an average value of 34 °C/km; this is lower than the average value of 50.2 °C/km reported in the current research. The higher temperature gradients obtained in the present study compared with those of [22,35,71] could be attributed to the localized bottom hole temperature data used in those studies, whereas the present study used magnetic data that are more extensive in coverage.
The principal result of this study is the first geothermal prospectivity map produced for the study area, which highlights new promising regions of geothermal potential. This result can guide stakeholders and investors in the power sector in selecting the best sites for exploratory wells.
Conclusions
A preliminary (reconnaissance) study of geothermal prospectivity using a multicriteria evaluation approach was conducted successfully for the case study of Northeastern Nigeria, as a contribution towards global renewable energy exploration. The study reveals information that can assist in selecting locations with better geothermal prospects for further detailed investigation in the future. The newly identified areas of promising geothermal potential include Wuyo, Nafada, Bajoga, Tula, Darazo, and Alkaleri; they are distributed at or close to zones with high lineament densities, high heat flow, and high temperature gradients, and lie close to granitic or basaltic rock outcrops. Hence, these surface geothermal indicators validate the geothermal potential map created. This research has demonstrated the capability of GIS as a vital tool for integrating multiple, dissimilar data sources into a meaningful geothermal prospectivity map. It also provides a first set of information to geoscientists and other stakeholders in the sustainable and renewable energy sector about the possible geothermal prospects of this part of the globe, thereby contributing to the global sustainable energy drive through the exploration of new areas of higher geothermal prospects.
Dopamine-Mediated Learning and Switching in Cortico-Striatal Circuit Explain Behavioral Changes in Reinforcement Learning The basal ganglia are thought to play a crucial role in reinforcement learning. Central to the learning mechanism are dopamine (DA) D1 and D2 receptors located in the cortico-striatal synapses. However, it is still unclear how this DA-mediated synaptic plasticity is deployed and coordinated during reward-contingent behavioral changes. Here we propose a computational model of reinforcement learning that uses different thresholds of D1- and D2-mediated synaptic plasticity which are antagonized by DA-independent synaptic plasticity. A phasic increase in DA release caused by a larger-than-expected reward induces long-term potentiation (LTP) in the direct pathway, whereas a phasic decrease in DA release caused by a smaller-than-expected reward induces a cessation of long-term depression, leading to LTP in the indirect pathway. This learning mechanism can explain the robust behavioral adaptation observed in a location-reward-value-association task where the animal makes shorter latency saccades to reward locations. The changes in saccade latency become quicker as the monkey becomes more experienced. This behavior can be explained by a switching mechanism which activates the cortico-striatal circuit selectively. Our model also shows how D1- or D2-receptor blocking experiments affect selectively either reward or no-reward trials. The proposed mechanisms also explain the behavioral changes in Parkinson's disease. 2002) . This relatively rapid modulation of CD neuronal activity seems to reflect a mechanism underlying reward-based learning. It has thus been hypothesized that these neuronal changes in the BG facilitate the eye movements to reward . It has also been shown that dopamine (DA) plays a crucial role in learning in the BG. Phasic DA signals, in particular, have been hypothesized to cause reinforcement learning (Montague et al., 1996;Schultz et al., 1997;Schultz, 2007). This hypothesis has been supported by a recent study where suppression of phasic DA release by pharmacological manipulation impairs the acquisition of reward-related behavior in healthy human subjects (Pizzagalli et al., 2008). Also, it was shown that tonic occupation of DA receptors (Breitenstein et al., 2006;Mehta et al., 2008) and systemic manipulation of DA level (Pessiglione et al., 2006) block normal learning. In the case of PD patients, the level of DA determines how the patients learn. For example, subjects on l-DOPA medication are better in positive learning and worse in negative learning, and PD subjects off medication are better in negative learning and worse in positive learning (Frank et al., 2004). However, there is evidence that DA has complex effects on BG neurons during reinforcement learning, including different effects on different BG pathways. In the BG there are two anatomically distinct pathways: the "direct" pathway whose striatal neurons have abundant D1 receptors, and the "indirect" pathway whose striatal neurons have abundant D2 receptors (Deng et al., 2006;Kravitz et al., 2010). These two receptors appear to modulate the glutamatergic synaptic plasticity in medium spiny neurons (MSNs) differently. Namely, D1 receptor-mediated DA signaling Previous studies gave an insight how the BG may contribute to this kind of learning. 
Particularly, neurons in the caudate nucleus (CD; part of the striatum) flexibly encode visual cues that predict different amounts or probabilities of reward Kawagoe et al., 1998;Lauwereyns et al., 2002;Samejima et al., 2005). For example, when a monkey performs a visually guided saccade task with positionally biased reward outcomes, called the "one direction reward (1DR)" task ( Figure 1A), many CD neurons respond to a visual cue and the responses are often enhanced (and occasionally depressed) when the cue indicates a larger-than-average amount of reward during a block of trials. Also, there was a tight block-to-block correlation between the changes in CD neuronal activity preceding target onset and the changes in saccade latency (Lauwereyns et al., promotes long-term potentiation (LTP; Reynolds et al., 2001;Calabresi et al., 2007) whereas D2 receptor-mediated DA signaling induces long-term depression (LTD; Gerdeman et al., 2002;Kreitzer and Malenka, 2007). These findings suggest that the LTP is dominant in the direct pathway MSNs whereas LTD is dominant in the indirect pathway MSNs. However, such unidirectional plasticity might cause saturation of synaptic efficacy. To overcome this problem, some computational models (e.g., Brown et al., 2004) have implemented bidirectional plasticity (both LTP and LTD) in both pathways. Indeed, recent experimental studies indicate that As the fixation point disappeared, a target appeared randomly on the right or left and the monkey was required to make a saccade to it immediately. Correct saccades in one direction were followed by a tone and juice reward; saccades in the other direction followed by a tone alone. The rewarded direction was fixed in a block of 24 trials, and was changed in the following block. (B) Distribution of saccade latencies in reward trials (in red) and in no-reward trials (in blue). (C) Illustration of D1 and D2 antagonist experiments. D1 or D2 antagonist was administrated in the caudate to examine the behavioral consequence in the 1DR task. Black, red, and purple connections indicate excitatory, inhibitory, and dopaminergic modulatory connections, respectively. (D) Hypothesized circuit involving D1 and D2 mediated plasticities. D1 and D2 mediated plasticities in direct and indirect pathways are assumed to contribute to eye movements. The purple arrows indicate dopaminergic modulatory connections. The lines with rectangle ends indicate inhibitory connections. Arrow ends indicate excitatory connections. Figures (A) and (B) are from Hong and Hikosaka (2008). Abbreviations: CD, caudate nucleus; D1, D2, D1, and D2 receptors; SC, superior colliculus; SNc/SNr, substantia nigra pars compacta/reticulata; GPe, globus pallidus external segment; FEF, frontal eye field; SEF, supplementary eye field; DLPF, dorsolateral prefrontal cortex; LIP, lateral intraparietal area; STN, subthalamic nucleus. one dIrectIon rewarded task Our model simulates the data from 1DR task. In the task, a visual target was presented randomly on the left or right, and the monkey had to make a saccade to it immediately. Correct saccades were signaled by a tone stimulus after the saccade. Saccades to one position were rewarded, whereas saccades to the other position were not rewarded. The rewarded position was the same in a block of 20-30 consecutive trials and was then changed to the other position abruptly for the next block with no external instruction. 
Thus, the target instructed the saccade direction and also indicated the presence or absence of the upcoming reward. While the monkey was performing 1DR task, the latency was consistently shorter for the saccade to reward target than for the saccade to no-reward target. Such a bias evolved gradually becoming more apparent as trials progressed (Figures 1B and 3B,E). The slow change in saccadic latency was particularly evident initially ( Figure 3B). After experiencing 1DR task extensively the monkey became able to switch the bias rapidly ( Figure 3E; Takikawa et al., 2004). Figure 2A shows neuronal circuits in and around the BG included in our model. In the BG, there are two opposing pathways: (1) direct pathway which facilitates movement initiation and is under the control of D1 DA receptors, and (2) indirect pathway which suppresses movement initiation and is under the control of D2 DA receptors (Kravitz et al., 2010). For the initiation of saccades, several cortical areas including the FEF, upon receiving visual spatial information, send signals to the SC to prime a saccade to the visual cue (Sommer and Wurtz, 2001). They also send signals to the BG to facilitate or suppress saccades. The activation of the direct pathway facilitates saccade initiation by removal of inhibition (i.e., disinhibition): it inhibits SNr neurons which otherwise exert tonic inhibition on SC neurons. The disinhibition of SC neurons increases the probability of a saccade in response to the priming signal from the cortical areas (Hikosaka and Wurtz, 1985). In contrast, the activation of the indirect pathway suppresses the saccade: it inhibits GPe neurons which causes disinhibition of STN neurons and consequently enhancement of the SNr-induced inhibition of SC neurons. The enhanced inhibition of SC neurons reduces the probability of making a saccade in response to the priming signal from the cortical areas (Hikosaka and Wurtz, 1985). LearnIng In the basaL gangLIa Following the findings by Shen et al. (2008), our model implements several mechanisms to change the efficacy of cortico-striatal synapses. In short, there are increasing (LTP) and decreasing (LTD) "forces" of opposing processes in each pathway: In the direct pathway, the co-occurrence of pre-and post-synaptic activity, together with an increase in DA concentration above a threshold, induces LTP (DA-dependent LTP), while the co-occurrence of pre-and post-synaptic activity alone induces LTD (DA-independent LTD). In the indirect pathway, the co-occurrence of pre-and post-synaptic activity, together with DA concentration above a threshold, induces LTD (DA-dependent LTD), while the co-occurrence of pre-and post-synaptic activity alone induces LTP (DA-independent LTP). We define "DA-dependent synaptic plasticity" as synaptic changes facilitated by over-the-threshold DA level. This situation occurs mostly during positive learning experience when the direct pathway, as well as the indirect pathway, implements both LTP and LTD (Picconi et al., 2003;Fino et al., 2005;Wang et al., 2006;Shen et al., 2008). For example, Shen et al. (2008) found that direct pathway MSNs also show LTD and indirect pathway MSNs also show LTP, both of which are independent of DA signaling. Numerous studies have examined the influence of the D1 and D2 mediated processes in the BG on animal learning behavior (e.g., Frank et al., 2004;Yin et al., 2009). However, few of them have provided quantitative data that can be used to test a computational model (Frank et al., 2004). 
The 1DR task ( Figure 1A) that has been used extensively in our laboratory is ideal for this purpose because trial-by-trial changes in the reaction time (or latency) of saccadic eye movements reflects reward-contingent learning and can be measured quantitatively. Furthermore, experimental manipulations of DA transmission in the CD and observation of ensuing oculomotor behavior were done by Nakamura and Hikosaka (2006; Figure 1C). They reported that after a D1 antagonist was injected in the CD, the saccadic latencies increased in reward trials, but not in no-reward trials. In contrast, after D2 antagonist injections, the saccadic latencies increased in no-reward trials, but not in reward trials. In addition to the quantitative behavioral data, we have accumulated a rich set of data on the neuronal activity in many brain areas in the BG that relay visuo-oculomotor information including the CD, substantia nigra pars reticulata (SNr), STN, globus pallidus external segment (GPe), superior colliculus (SC), as well as frontal cortical areas (see Hikosaka et al., 2000Hikosaka et al., , 2006; Figure 1D). We also have an extensive set of data that indicates how DA neurons change their activity during the 1DR task (e.g., Kawagoe et al., 2004;Matsumoto and Hikosaka, 2007;Bromberg-Martin et al., 2010), which is a pre-requisite for making a computational model of reinforcement learning. In the following we propose a formal version of our theory of BG, where the BG "orients" the eyes to reward (Hikosaka, 2007). The present model accounts for reward-contingent oculomotor behavioral changes in normal monkeys, as well as, experimentally induced oculomotor behavioral changes. IMpLeMentatIon of the ModeL We examined the possibility that the plasticity mediated by the DA actions on direct pathway MSNs and indirect pathway MSNs are responsible for the observed saccadic latency changes in normal and Parkinsonian monkeys. The model circuit was implemented with cell membrane differential equations (see Appendix) in Visual C++ using a PC. Our model implements only half of the hemisphere of the brain. This is because, during the left-ward saccades, for example, the right part of the BG is assumed to be active in learning because of the prevalent frontal eye filed (FEF)-to-striatum activation in the right hemisphere. This permits the striatal learning on the right side of the brain while the left side is not being affected. Because the 1DR alternates the left-side-reward and right-side-reward blocks of trials, in a symmetrical way, implementing one side with alternating blocks could represent the learning processes happening in both sides of the brain. Below, we will describe the basic architecture of the model, including how it generates saccades and how it is modulated by DA-independent and DA-dependent synaptic plasticity. For full details of the model equations, see the Appendix. hypothesize that the background level of DA concentration in the CD stays above the threshold of D2 receptor activation and below the threshold of D1 receptor activation ( Figure 2B). Accordingly, during the no-task state, indirect pathway MSNs are under the influence of D2-mediated LTD in addition to DA-independent LTP, while direct pathway MSNs only experience DA-independent LTD (Figure 2A). Note in the figures, DA-dependent LTP and LTD are shown in red while DA-independent processes are shown in blue. 
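To make the plasticity rules above concrete, the following Python sketch (not the authors' Visual C++ implementation, whose membrane equations are given in their Appendix) updates one direct-pathway and one indirect-pathway cortico-striatal weight as a function of the dopamine level relative to the D1 and D2 activation thresholds, with DA-independent LTD/LTP as the constantly opposing "forgetting" terms. The threshold values, learning rates, and functional form are assumptions chosen only to reproduce the qualitative pattern described in the text; the background DA level is placed between the two thresholds, as hypothesized.

```python
from dataclasses import dataclass

@dataclass
class CorticostriatalWeights:
    w_direct: float = 0.5     # cortex -> direct-pathway (D1) MSN synapse
    w_indirect: float = 0.5   # cortex -> indirect-pathway (D2) MSN synapse

# Illustrative constants (assumed, not the paper's parameters).
D1_THRESHOLD = 0.7   # DA level needed to engage D1-mediated (direct-pathway) LTP
D2_THRESHOLD = 0.3   # DA level needed to engage D2-mediated (indirect-pathway) LTD
ETA_DA = 0.5         # rate of DA-dependent plasticity
ETA_IND = 0.02       # rate of DA-independent (opposing, "forgetting") plasticity

def update(w: CorticostriatalWeights, pre: float, post_d1: float,
           post_d2: float, dopamine: float) -> None:
    """One plasticity step; all terms require co-occurring pre/post activity."""
    # Direct pathway: DA-dependent LTP when DA exceeds the D1 threshold,
    # constantly opposed by DA-independent LTD.
    w.w_direct += ETA_DA * max(0.0, dopamine - D1_THRESHOLD) * pre * post_d1
    w.w_direct -= ETA_IND * pre * post_d1
    # Indirect pathway: DA-dependent LTD when DA exceeds the D2 threshold,
    # constantly opposed by DA-independent LTP.
    w.w_indirect -= ETA_DA * max(0.0, dopamine - D2_THRESHOLD) * pre * post_d2
    w.w_indirect += ETA_IND * pre * post_d2
    # Keep the weights bounded.
    w.w_direct = min(max(w.w_direct, 0.0), 1.0)
    w.w_indirect = min(max(w.w_indirect, 0.0), 1.0)
```

With these toy numbers, a phasic DA burst (e.g., 0.9) yields net LTP in the direct pathway and net LTD in the indirect pathway, whereas a DA pause (e.g., 0.1) leaves only the DA-independent terms, i.e., LTD in the direct pathway and LTP in the indirect pathway.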
Also, the model circuit includes the recently identified lateral habenula (LHb; Matsumoto and Hikosaka, 2007)and LHbprojecting neurons in the GPb (Hong and Hikosaka, 2008) that have DA neurons burst phasically, notably due to changes in reward expectation ( Figure 2C). In contrast, the DA-independent synaptic plasticity happens as an opposing process constantly antagonizing the "DA-dependent synaptic plasticity," and becoming prominent whenever "DA-dependent synaptic plasticity" loses its strength. For this reason, DA-independent synaptic plasticity acts as a " forgetting" mechanism. Figure 2A shows the BG circuit of the model in its no-task (null) state where the subject makes saccades without any DA modulation. It has been shown that DA affinity is higher for D2 receptors than for D1 receptors (Richfield et al., 1989;Jaber et al., 1996). Here, we Black and open circles indicate inhibitory and excitatory neurons, respectively. In (C) and (D), the GPb-LHb-SNc circuit is omitted for clarity. LTP/LTD, long-term potentiation/depression; GPb, border region of globus pallidus; LHb, lateral habenula. Frontiers in Behavioral Neuroscience www.frontiersin.org connection (shown by open "cell body" with an arrow ending, as in STN-SNr connection), but reverses after an inhibitory connection (filled "cell body" with a rectangular ending, as in CD-GPe connection). When the animal detects a signal of no-reward, the level of DA in CD will go below the threshold of D2 receptors ( Figure 2D). In the indirect pathway this leads to an attenuation of LTD leaving DA-independent LTP intact. In the direct pathway, this leads to only DA-independent LTD. As a result, the activity of SNr neurons increases and the saccadic eye movement toward the target is suppressed ( Figure 2D). swItchIng MechanIsM After experiencing 1DR task extensively the monkey became able to switch the saccade latency bias more rapidly after the position-reward contingency is reversed. This raises the possibility that, in addition to the BG-based learning processes been shown to participate in reinforcement learning by modulating DA neurons in the substantia nigra pars compacta (SNc) and the ventral tegmental area. When the animal detects a signal indicating an upcoming reward, DA neurons exhibit a short burst of spikes (Eq. 17), causing a phasic increase in the concentration of DA in the CD which temporarily exceeds the threshold of D1 receptors ( Figure 2C). This phasic elevation of DA concentration, together with co-occurrence of pre-and post-synaptic activations, leads to the emergence of LTP in the direct pathway and an enhancement of LTD in the indirect pathway. Following the DA-induced changes in either the direct or indirect pathway, SNr neurons are inhibited and therefore SC neurons are activated (through disinhibition), leading to the facilitation of the saccade toward the target ( Figure 2C). The changes in activity through the direct or indirect pathway are illustrated by the directions of arrows (upward: increase, downward: decrease). Note that the direction of arrows remains unchanged after an excitatory Figure 3 | experience-dependent emergence of a switching mechanism that allows rapid changes of saccade latency in response to the change in reward location: before (A-C) and after (D-F) sufficient experience of the 1Dr task. We hypothesize the presence of "reward-category neurons" (RWD), a key driver of the switching, that have excitatory connections to FEF neurons and direct pathway MSNs in the CD in the same hemisphere. 
They would become active before target onset selectively when a reward is expected on the contralateral side (see Figure 4), an assumption based on experimental observations of neuronal activity in the FEF, CD, SNr, and SC. Before sufficient experience of the 1DR task (A-C), the saccade latency changes gradually in both the small-to-big-reward transition [red in (B,C)] and the big-to-small-reward transition [blue in (B,C)] similarly by experimental observation (B) and computer simulation (C). The saccade latency data in (B) is from monkeys C, D, and T. After sufficient experience of the 1DR task (D-F), the saccade latency changes quickly as shown in experiments (e) and computer simulation (F). This is mainly due to the additional excitatory input from the reward-category neurons. Note, however, that the decrease in saccade latency in the small-to-big-reward transition [red in (e,F)] is quicker than the increase in saccade latency in the big-to-small-reward transition [blue in (e,F)]. This asymmetry is due to the asymmetric learning algorithm operated by two parallel circuits in the basal ganglia illustrated in Figure 2. Figure (e) from Matsumoto and Hikosaka (2007). In the following, we first simulate the eye movements in the 1DR showing the baseline performance of the model. Next, we simulate the influence of D1 and D2 antagonist injections in the CD showing how the DA-mediated learning leads to behavioral manifestation. The simulation results for PD are presented to show the potential application of our model to understanding neurological disorders. resuLts sIMuLatIon of saccade Latency In the 1dr task In one block of trials in the 1DR task a saccade to a given target is followed by a reward, and in the next block of trials the saccade to the same target is followed by no-reward ( Figure 1A). Hence, in each block of trials the monkey learns a new position-reward association, and the learning is evidenced as changes in the saccade reaction time (or latency): decrease in saccade latency for the rewarded target and increase in saccade latency for the unrewarded target ( Figures 3B,E). The changes in saccade latency became quicker as the monkey experienced 1DR task extensively (compare Figure 3B and Figure 3E; Takikawa et al., 2004). Our model simulates these changes in saccade latency reasonably well (Figures 3C,F). In the early stage of the monkey's experience with the 1DR task, the saccade latency decreased gradually after a small-to-big-reward transition and increased gradually after a big-to-small-reward transition ( Figure 3B). These slow changes in saccade latency are simulated by the model (Figure 3C) by assuming that there is noreward-category activity ( Figure 3A), which would act as a switching mechanism. In other words, these changes in saccade latency, at this stage, are controlled solely by the striatal plasticity mechanisms which are described in the Section "Learning in the Basal Ganglia." After sufficient experience with the 1DR task, the changes in saccade latency occur more quickly ( Figure 3E). This was simulated by assuming the emergence of reward-category neurons which, before the target comes on, exert an excitation on FEF neurons as well as on the direct pathway MSNs when a reward is expected on the contralateral side (see Switching Mechanism). The performance of our model in an advanced stage of learning ( Figure 3D) is illustrated in Figure 4. 
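The way a tonic reward-category bias shortens saccade latency can be illustrated with a toy rise-to-threshold sketch (again not the paper's implementation): a saccade-related signal integrates a fixed target drive plus any pre-target bias and triggers a saccade when it crosses threshold, so the biased (reward-expected) condition crosses earlier. All numbers are placeholders.

```python
# Toy rise-to-threshold illustration (assumed parameters; not the authors' model).
def saccade_latency(bias: float, gain: float = 1.0, threshold: float = 1.0,
                    dt: float = 1.0, t_max: float = 500.0) -> float:
    """Return latency (ms) at which the accumulated signal crosses threshold."""
    activity, t = 0.0, 0.0
    while t < t_max:
        drive = 0.004 * gain + 0.004 * bias   # target drive plus tonic reward-category bias
        activity += drive * dt
        t += dt
        if activity >= threshold:
            return t
    return t_max

print(saccade_latency(bias=0.0))   # no reward expected on this side -> longer latency
print(saccade_latency(bias=0.5))   # reward-category bias present -> shorter latency
```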
Our model combines two kinds of neuronal mechanisms: (1) learning in the BG (i.e., plasticity at cortico-striatal synapses), and (2) switching mechanism (i.e., reward-category activity). Here, the activity of individual neurons (or brain areas) is compared between two reward contexts: a contralateral saccade is followed by a reward (Figure 4A) and noreward ( Figure 4B). Only the contralateral saccade is considered because the neuronal network simulates one hemisphere and is assumed to control only contralateral saccades. According to our model, the learning in the BG controls, mainly, the phasic response component to target onset. The response of direct pathway MSNs (D1) to the post-target input from the FEF increases when the contralateral saccades were rewarded repeatedly ( Figure 4A); this is mainly due to the development of DA-dependent LTP at the corticostriatal synapses. In contrast, the response decreases when the contralateral saccades were unrewarded repeatedly ( Figure 4B); this is mainly due to the development of DA-independent LTD at the corticostriatal synapses. Such reward-facilitated visual responses in CD neurons have been reported repeatedly using 1DR task (Kawagoe et al., 1998, described above, a switching-like process emerges in the brain and contributes to the quick saccade latency changes. Indeed, it is reported that the reward-dependent change in saccade latency occurs by inference (Watanabe and Hikosaka, 2005). For example, suppose the task changed from the left-reward block to the rightreward block. On the first trial of a new block, the monkey made a saccade to the left target and did not receive a reward. This allowed the monkey to detect that the block had changed, and to infer that the reward had switched from the left side to the right side. Then the monkey immediately made a rapid (short latency) saccade to the right target, even though the monkey had not yet received a reward from that target. Furthermore, such inference-dependent activities have been observed in all the neurons tested for the circuit diagram shown in Figure 2: CD neurons (Watanabe and Hikosaka, 2005) as well as GPb, LHb, and DA neurons (Bromberg-Martin et al., 2010). We hypothesize that this rapid switching is enabled by a population of neurons on each side of the hemisphere which becomes active when a reward is available on the contralateral side but not on the ipsilateral side. Such neurons, which we hereafter call "reward-category neurons," are assumed to have excitatory connections to neurons in the FEF and to the direct pathway MSNs on the same side. This assumption is based on our previous findings: presumed projection neurons in the CD (Lauwereyns et al., 2002;Takikawa et al., 2002;Watanabe et al., 2003) as well as neurons in the FEF (Ding and Hikosaka, 2006) ramp-up their activity when a reward was expected on the contralateral side. Further, many SNr neurons decrease their activity selectively when a reward is expected on the contralateral side (Sato and Hikosaka, 2002), suggesting that the reward-category neurons excite direct pathway MSNs, but not indirect pathway MSNs. However, where the rewardcategory neurons are located is unknown, and it is possible that the reward-category activity emerges from interactions of neurons in the cerebral cortex and the BG. The model implements the reward-category neurons, tentatively, as a module in the cerebral cortex, as illustrated in Figure 3D. 
When a reward is expected on the left side, for example, the reward-category neurons in the right cortex (red circle in Figure 3D; Cg RWD in Eq. 4 in Appendix) will ramp-up their activity before the execution of a saccade. This will excite the right FEF neuron and direct pathway MSNs, therefore boosting the activity of these neurons. Note that there will be no boost of activity in the FEF and MSNs in the left (ipsilateral) hemisphere. Due to this construction, the striatum receives strong cortal inputs boosted by the excitatory reward-category neurons only during contralateral reward trials. The reward-category activity also affects the SC directly via the FEF-SC excitatory connection (Figure 2) making the SC react more rapidly during reward trials (Ikeda and Hikosaka, 2003;Isoda and Hikosaka, 2008). Our model hypothesizes that the combination of cortical switching and trialto-trial updates of learning in the BG explain the change of saccadic latencies during the 1DR task (Bromberg-Martin et al., 2010). Note that we use the word "switching" only to mean the inferential abrupt change in the cortical circuit, and the ensuing abrupt behavioral change in saccadic latency in the second trial of a block. The plasticity in the FEF-MSN synapse is assumed to contribute to the gradual changes in saccadic latency reaching an asymptote. In other words, the change in saccade latency after reversal of position-reward contingency is caused by the activity of "reward-category" and synaptic plasticity in the BG. Frontiers in Behavioral Neuroscience www.frontiersin.org Such reward-suppressed visual responses in CD neurons have been reported (Kawagoe et al., 1998;Watanabe et al., 2003), although it is unknown if they were indirect pathway MSNs. These changes in the post-target response of indirect pathway MSNs lead to a stronger inhibition of SC neurons via GPe and STN neurons on no-reward trials ( Figure 4B) than reward trials ( Figure 4A). The effects of the switching mechanism mainly lead to tonic changes in neuronal activity before target onset. When a reward is expected on the contralateral side, the reward-category neurons (RWD category in Figure 4) ramp-up their activity shortly after the presentation of a fixation point (Figure 4A). FEF neurons (FEF in Figure 4) receive excitatory input from the reward-category neurons although it is unknown if they were direct pathway MSNs. These changes in the post-target response of direct pathway MSNs lead to a stronger disinhibition of SC neurons via SNr neurons on reward trials ( Figure 4A) than no-reward trials ( Figure 4B). Roughly opposite effects occur through the indirect pathway. The response of indirect pathway MSNs (D2) to the post-target input from the FEF decreases when the contralateral saccades were rewarded repeatedly, mainly due to the development of DA-dependent LTD at the corticostriatal synapses ( Figure 4A). In contrast, the response increases when the contralateral saccades were unrewarded repeatedly, mainly due to the development of DA-independent LTP at the corticostriatal synapses ( Figure 4B). In reward trials (A) the rewardcategory unit (REW category) ramps up its activity shortly after the presentation of the fixation point. The activity shuts off in response to the burst activity of DA unit (DA) signaling the reward value of the target. The FEF unit combines the tonic reward-category activity and the phasic target signal. 
In the BG, both the direct pathway MSN unit (D1) and the indirect pathway MSN unit (D2) receive an input from the FEF. The direct pathway MSN unit (D1), in addition, receives an input directly from the reward-category unit and therefore shows larger ramping activity than the indirect pathway MSN unit (D2). The activity of the direct pathway MSN unit (D1) is further enhanced by DA-dependent LTP, which is triggered by the DA burst, and mediated by D1 receptors. This results in a stronger disinhibition of the SC by the SNr leading to a stronger activity in the SC. In contrast, the activity of indirect pathway MSN unit (D2) is further depressed by DA-dependent LTD, which is triggered by the DA burst, and mediated by D2 receptors. This results in the suppression of the excitatory input from the STN to the SNr, further enhancing the SC activity. The combined effects from the direct and indirect pathways lead to a shorter latency saccade (see the arrow head on top, indicating the time of saccade initiation). In no-reward trials (B) the activity of the reward-category unit is much weaker, thus lowering the activity of the FEF unit and the direct pathway MSN unit (D1). The activity of the direct pathway MSN unit (D1) is further depressed by DA-independent LTD. In contrast, the activity of D2 MSN increases because DA-dependent LTD is attenuated due to the "pause" of DA activity (DA) and thus is dominated by DA-independent LTP. The combined effects from the direct and indirect pathways lead to a weaker activation of the SC unit and hence a longer latency saccade. The scale of all the ordinate axes is from 0 to 1. Frontiers in Behavioral Neuroscience www.frontiersin.org After the D1 antagonist injection, latency increased for the saccades made toward the reward position without affecting the saccades toward the no-reward position (Figure 5C left). Simulation results correctly follow this trend (Figure 5C right). As explained above ( Figure 2B) the model assumes that, in a normal condition, the threshold for D1 receptor activation (hereafter called "D1 threshold") is above the default concentration of DA level in the CD, and the threshold for D2 receptor activation (hereafter called "D2 threshold") is below it. After the injection of the D1 antagonist, the D1 threshold increases significantly while the D2 threshold remains unchanged (compare Figure 5D with Figure 2C). This leads to a selective suppression of DA-dependent LTP in the direct pathway which would be triggered by a phasic increase of DA concentration in reward trials ( Figure 5D). In consequence, the activation of direct pathway MSNs by the reward-predicting visual input becomes weaker. In turn, this causes SNr neurons to be less inhibited, SC saccadic neurons to be less disinhibited, and saccades to occur at longer latencies. In the case of no-reward trials, the situation remains unchanged after D1 antagonist injection because the D1 threshold, while elevated by the D1 antagonist, remains higher than the DA concentration (compare Figure 5E with Figure 2D) and the D1 antagonist does not affect the D2 threshold. Consequently, the saccade latency remains unchanged in no-reward trials (Figure 5C right), similar to the experimental data (Figure 5C left). In the preceding section we showed that our model can simulate the time course of saccade latency changes during the 1DR task (Figure 3). As seen in Figure 5A the D1 antagonist injection in the CD alters the saccade latency over time and our model simulates this change (Figure 5B). 
InfLuence of d2 antagonIst on saccadIc Latency In contrast, after the D2 antagonist injection in the CD, the saccadic latency increased selectively in no-reward trials . Our model explains this change as a consequence of the increased threshold for the D2 receptor activation (Figure 6E). Note that in the normal condition, the threshold, of the D2 receptors, is assumed to be below the level of DA concentration in the striatum ( Figure 2D). After the injection of the D2 antagonist, the D2 threshold increases significantly while the D1 threshold remains unchanged, which leads to selective changes in the indirect pathway. This change will not grossly affect saccades in reward trials because DA concentration is assumed to exceed both D1 and D2 thresholds ( Figure 6D). In no-reward trials, however, the change in the D2 threshold affects processes in the indirect pathway (compare Figure 6E with Figure 2D). This is because the removal of DA-dependent LTD enhances the activity of indirect pathway MSNs. The increased output in the indirect pathway leads to an increase in the SNr-induced inhibition on SC saccadic neurons, leading to longer saccade latencies, as shown in the simulated results in Figure 6C right. These results are similar to the experimental data (Figure 6C, left). The simulation also replicates the trial-by-trial changes in saccade latencies and their alteration by D2 antagonist (compare Figure 6A with Figure 6B). dIsrupted pLastIcIty MechanIsMs In parkInsonIan subjects Our model predicts altered reward-related learning in PD subjects. We first modeled the changes in synaptic plasticity that occur during PD. In animal models of PD, the synaptic plasticity of the BG is in addition to a phasic excitatory input encoding the onset of the target (Ding and Hikosaka, 2006). In the BG, both the D1-mediated direct pathway (D1) and the D2-mediated indirect pathway (D2) receive the reward-category signal from the FEF. However, direct pathway MSNs (D1) also receive an excitatory input directly from the rewardcategory neurons and therefore show larger ramping activity than indirect pathway MSNs (D2; Lauwereyns et al., 2002). This results in a ramp-down of the activity of SNr neurons (SNr; Sato and Hikosaka, 2002) before target onset. In consequence, SC neurons receive the reward-category signal via two routes: (1) pre-target tonic decrease in SNr-induced inhibition (disinhibition), and (2) pre-target tonic increase in FEF-induced excitation. When a reward is not expected on the contralateral side, the reward-category neurons are less active ( Figure 4B) and therefore the pre-target facilitation is weak in SC neurons. Indeed, SC neurons exhibit such pre-target ramp-up activity which is stronger when the contralateral saccade was rewarded than unrewarded (Ikeda and Hikosaka, 2003;Isoda and Hikosaka, 2008). In summary, the learning mechanism and the switching mechanism, when working together, enable quick adaptation of oculomotor behavior depending on expected reward. It is important to note that the two mechanisms interact in a mutually facilitatory manner. First, the reward-category activity facilitates the development of DA-dependent LTP in direct pathway MSNs ( Figure 4A) because it increases the likelihood of the co-occurrence of the presynaptic activity (i.e., FEF activity) and the post-synaptic activity (i.e., MSN activity) which is thought (and here assumed) to be a pre-requisite of this type of LTP (Wickens, 2009). 
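Continuing the hypothetical sketch introduced earlier (same module), the antagonist injections can be mimicked simply by raising the corresponding receptor-activation threshold before the plasticity update; the dopamine levels and shifted thresholds below are illustrative, not fitted values.

```python
# Continuation of the earlier hypothetical sketch: antagonist injections are mimicked
# by raising the corresponding receptor-activation threshold. Illustrative values only.
DA_TONIC, DA_BURST, DA_PAUSE = 0.5, 0.9, 0.1

def run_trial(w: CorticostriatalWeights, dopamine: float,
              d1_thr: float, d2_thr: float) -> None:
    """Run one plasticity step with (possibly antagonist-shifted) thresholds."""
    global D1_THRESHOLD, D2_THRESHOLD
    D1_THRESHOLD, D2_THRESHOLD = d1_thr, d2_thr
    update(w, pre=1.0, post_d1=1.0, post_d2=1.0, dopamine=dopamine)

w = CorticostriatalWeights()
# D1 antagonist: the D1 threshold is pushed above the phasic burst, so reward trials
# lose their DA-dependent LTP in the direct pathway (longer latency on reward trials).
run_trial(w, dopamine=DA_BURST, d1_thr=0.95, d2_thr=0.3)
# D2 antagonist: the D2 threshold is pushed above the tonic level, so the background
# D2-mediated LTD on indirect-pathway synapses disappears and DA-independent LTP
# dominates (longer latency selectively on no-reward trials).
run_trial(w, dopamine=DA_TONIC, d1_thr=0.7, d2_thr=0.6)
```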
Second, the changes in activity of DA neurons could modulate the rewardcategory activity. For example, when a reward is expected after a contralateral saccade, DA neurons exhibit a burst of spikes which then would cause LTP in the cortico-striatal synapses in direct pathway MSNs carrying the reward-category activity, leading to an enhancement of the reward-category activity in the MSNs. In contrast, the reward-category activity in indirect pathway MSNs would be suppressed because the same DA activity would cause LTD. On the other hand, the reward-category activity in indirect pathway MSNs would be facilitated when no-reward is expected because the DA neurons pause and therefore the DA-dependent LTD becomes weaker and instead DA-independent LTP becomes dominant. Such changes in the reward-category activity are evident in Figure 4 by comparing the pre-target activity in direct pathway MSNs (D1) and indirect pathway MSNs (D2) between the two reward contexts (Figures 4A,B). In short, the learning mechanism and the switching mechanism cooperate to enhance and accelerate the reward-dependent bias in saccade latency. InfLuence of d1 antagonIst on saccadIc Latency Our computational model has simulated reward-dependent oculomotor behavior successfully. Central to our model is the DA-dependent plasticity at the cortico-striatal synapses. Therefore, experimental manipulations of DA transmission in the striatum could provide critical tests of our model. Such experiments were done by Nakamura and Hikosaka (2006). They showed that the saccadic latency in the 1DR task changed differently and selectively after injections of D1 antagonist and D2 antagonist in the CD. Below, we will simulate the behavioral effects of these experimental manipulations based on the model. DA-independent LTD. In reward trials, however, the slight increase in the DA level can trigger weak LTP because the D1 threshold is lowered due to hypersensitivity ( Figure 7B). As a consequence, the net LTD is bigger after no-reward trials than reward trials (orange curves in Figure 7C). An opposite reaction occurs in indirect pathway MSNs. They undergo LTP in either reward or no-reward trials (black curves in Figure 7C) because DA-dependent LTD, which is rendered minimal due to the low DA level, is dominated by DA-independent LTP. In reward trials, however, the slight increase in the DA level can trigger weak LTD because the D2 threshold is lowered due to hypersensitivity (Figure 7B). In consequence, the net LTP is is bigger after no-reward trials than reward trials (black curves in Figure 7C). Our model predicts that these changes in synaptic plasticity would cause several changes in the pattern of behavior during the 1DR task ( Figure 7D). The results indicate that in reward trials the saccadic latency in the PD subject (red curve in Figure 7D) is disrupted ( Figure 7A) such that LTP is induced in indirect pathway MSNs (green dots) and LTD is induced in direct pathway MSNs (purple dots) after stimulation protocols that normally induce LTD and LTP, respectively (Shen et al., 2008). Our model simulates the reversal of synaptic plasticity ( Figure 7C) using several assumptions illustrated in Figure 7B. 
We assume: (1) DA concentration in the striatum of the PD patient is reduced (indicated by the low levels of the purple curves in Figure 7B) by about 84%, as shown by Fearnley and Lees (1991), (2) D1 and D2 receptors become hypersensitive as indicated by the low levels of the red and blue dashed lines in Figure 7B (e.g., Gerfen, 2003), and (3) a small number of DA neurons remain functional so that DA concentration increases and decreases slightly in response to big-and small-reward cues (indicated by up/down deviations of the purple curves from the flat background level in Figure 7B). Given these assumptions, our model predicts that direct pathway MSNs undergo LTD during either reward or no-reward trials (orange curves in Figure 7C). This is because DA-dependent LTP, which is rendered minimal due to the low DA level, is dominated by Nakamura and Hikosaka (2006, p. 60). (B) Simulated trial-by-trial changes in saccade latency. (C) After D1 antagonist injection, average saccade latency increased in big-reward trials, but not in small-reward trials. The experimental data was replicated by computer simulation. (D) Hypothesized mechanism of the effect of D1 antagonist in big-reward trials. The D1 antagonist effectively elevates the D1 threshold and therefore induces a smaller-than-usual LTP in the direct pathway MSNs, whereas the indirect pathway MSNs are unaffected. The attenuated LTP leads to a weaker activation of the SC and therefore a longer latency saccade. (e) Hypothesized mechanism of the effect of D1 antagonist in big-reward trials. The DA level remains below the D1 threshold similarly to the control condition ( Figure 2D) and therefore the saccade latency is not changed. Frontiers in Behavioral Neuroscience www.frontiersin.org dIscussIon This study explored the possible neuronal mechanisms underlying adaptive changes in oculomotor behavior in response to the change of reward locations. We did so by constructing a computational model and simulating animal's normal and experimentally manipulated behaviors. Our model, which combines a learning mechanism and a switching mechanism in the cortico-striatal circuit, simulates experimental results obtained using a saccade task with positional reward bias (1DR task) reasonably well. In the following we discuss possible physiological mechanisms presumed to be the bases of these phenomena, as well as, the limitations of our model. neuraL correLates of reInforceMent LearnIng In bg Basal ganglia are well known for their involvement in motor and cognitive functions. It is also known that many neurons in the BG are sensitive to expectation of reward (see Hikosaka et al., 2006;Schultz, 2006, for review). The 1DR task provides quantitative data that is suitable for testing a computational model of reward-based learning. After sufficient experience in the 1DR task, monkeys are able to longer than in the normal subject (red curve in Figure 3E). The saccade latencies during no-reward trials are even more sluggish as shown by the blue curve in Figure 7D. Interestingly, while both latencies are longer than those of normal subjects, the latencies during reward trials are still shorter than those in no-reward trials in PD patients. This means that even with the reversed directions of plasticity, the subjects show correct direction of learning. Our model also predicts the impact of l-DOPA in the PD subject. 
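The PD assumptions listed above can likewise be plugged into the earlier hypothetical sketches; the approximately 84% depletion is the only number taken from the text, and all remaining values are illustrative.

```python
# Hypothetical PD parameterization of the earlier sketches (illustrative numbers only):
# ~84% dopamine depletion, thresholds lowered by receptor hypersensitivity, and a
# small residual phasic modulation by the surviving DA neurons.
PD_DA_TONIC = 0.5 * (1 - 0.84)       # ~0.08 after ~84% depletion
PD_DA_BURST = PD_DA_TONIC + 0.05     # weak residual burst on reward cues
PD_DA_PAUSE = PD_DA_TONIC - 0.05     # weak residual pause on no-reward cues
PD_D1_THRESHOLD = 0.12               # lowered (hypersensitive) D1 threshold
PD_D2_THRESHOLD = 0.10               # lowered (hypersensitive) D2 threshold

w_pd = CorticostriatalWeights()
for rewarded in (True, False):
    dopamine = PD_DA_BURST if rewarded else PD_DA_PAUSE
    run_trial(w_pd, dopamine=dopamine,
              d1_thr=PD_D1_THRESHOLD, d2_thr=PD_D2_THRESHOLD)
# With these numbers the direct pathway ends up with net LTD on both trial types
# (only a weak DA-dependent LTP survives on reward trials), while the indirect pathway
# ends up with net LTP (only a weak DA-dependent LTD on reward trials), i.e., the
# reversed plasticity pattern sketched for PD in Figure 7C.
```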
Figure 7B illustrates the hypothesized learning situation in PD with l-DOPA, showing the elevated DA level (green trace) that enables positive reinforcement learning with the assistance of increased sensitivity (e.g., Gerfen, 2003). Our model then predicts that the saccade latency decreases in rewarded trials ( Figure 7D, pink line), but not in no-reward trials (Figure 7D, green line), nearly reaching the level of normal experienced subjects (Figures 3B,C). One noticeable difference is that the latency in reward trials reaches its plateau more slowly than in normal subject. This is because of the still inefficient learning in the BG compared to that in normal subjects. Figure 6 | influence of D2 antagonist on saccadic latency. (A) Trial-by-trial changes in the latency of contralateral saccades, before (black) and after (blue) injection of a D2 antagonist into the CD. Data are from Nakamura and Hikosaka (2006). (B) Simulated trial-by-trial changes in saccade latency. (C) After D1 antagonist injection, average saccade latency increased in small-reward trials, but not in big-reward trials. The experimental data was replicated by computer simulation. (D) Hypothesized mechanism of the effect of D2 antagonist in big-reward trials. The phasic increase in the DA level exceeds both the D1 and D2 thresholds, although the D2 antagonist elevates the D2 threshold, and therefore the saccade latency remains largely unchanged. (e) Hypothesized mechanism of the effect of D2 antagonist in small-reward trials. The elevated D2 threshold eliminates the DA-dependent LTD in the indirect pathway MSNs and therefore potentiates the SNr-induced inhibition of the SC, leading to a longer latency saccade. Frontiers in Behavioral Neuroscience www.frontiersin.org when the DA level is low (i.e., in response to no-reward-predicting target). This bias of speeds in plasticity is suggested to result in faster acquisition and slower forgetting of the motivated behavior expressed as saccade latency change. pLastIcIty MechanIsMs In dIrect and IndIrect pathways We have constructed a model that implements a lumped LTP/LTD, which simplifies underlying complicated intracellular processes. Here we discuss some probable mechanisms, underlying these synaptic changes. The mechanisms of the synaptic plasticity in the BG have been studied extensively, yet there are conflicting experimental results (for reference, see Calabresi et al., 2007;Surmeier et al., 2007). It is shown that DA-mediated D1 receptor signaling promotes LTP (Reynolds et al., 2001;Calabresi et al., 2007) whereas D2 signaling induces LTD (Gerdeman et al., 2002;Kreitzer and Malenka, 2007). As adaptive learning theories require, however, the plasticity in these direct and indirect pathways seem to be bidirectional. For example, Shen et al. (2008) have shown that D1 and D2 receptor-bearing striatal MSNs had both LTP and LTD. In indirect pathway MSNs, D2 receptor activation is known to promote dephosphorylation processes in a variety of channels including AMPA and NMDA and Na + channels by suppressing adenylyl cyclase. It has also been reported that DA-independent LTP (or repotentiation) happens in indirect pathway MSNs when the afferents are stimulated with a following post-synaptic depolarization (Shen et al., 2008). This LTP process seems to be dependent reverse the positional bias in saccade latency fairly quickly. It may be suggested that such a quick reversal in behavior is achieved by a switching mechanism. 
Interestingly, the increase in saccade latency after the big-to-small-reward transition is slower than the decrease in saccade latency after the small-to-big-reward transition (Lauwereyns et al., 2002;Watanabe and Hikosaka, 2005;Nakamura and Hikosaka, 2006;Matsumoto and Hikosaka, 2007;Nakamura et al., 2008; Figure 3E). Such an asymmetry in the saccade latency change is best explained by a learning mechanism that distinguishes the two directions of saccade latency changes; it is unlikely to be explained solely by a switching mechanism. Indeed, our study using computational modeling and simulation indicates that a combination of a learning mechanism and a switching mechanism can explain the adaptive oculomotor behavior in experienced animals, while the learning mechanism alone can explain the adaptive oculomotor behavior in less experienced animals. More specifically, the asymmetric change in saccade latency can be explained by an asymmetric learning algorithm operated by two parallel circuits in the BG: D1-modulated direct pathway and D2-modulated indirect pathway. The direct pathway seems to express D1-mediated LTP (Reynolds et al., 2001;Calabresi et al., 2007), whereas the indirect pathway seems to express D2-mediated LTD (Gerdeman et al., 2002;Kreitzer and Malenka, 2007). Explained more mechanistically, D1-mediated LTP and D2-mediated LTD happen actively when the DA level is high (i.e., in response to the reward-predicting target), while the DA-independent LTD in the direct pathway and the DA-independent LTP in the indirect pathway happen rather passively (C) Simulated plasticity in the direct pathway MSNs (D1) and the indirect pathway MSNs (D2) in PD subjects performing the 1DR task. The simulation shows disrupted plasticity in PD, which is similar to that shown in the rat PD model (A). Note that the magnitude of the plasticity is larger for no-reward trials than for reward trials. (D) Simulated saccade latency in PD subjects with no treatment (PD) and PD subjects with l-DOPA. See text for further explanations. Frontiers in Behavioral Neuroscience www.frontiersin.org actually slowed down prosaccadic latency (Hood et al., 2007). More importantly, this longer latency in prosaccades was accompanied by an increased correct rate in anti-saccades (reduced fast reflexive prosaccades), where the impulsive tendency to make a saccade to the visual stimulus needs to be suppressed. It is likely that this l-DOPAinduced elongation of saccadic latency was due to an enhanced compensatory cortical mechanism in PD (e.g., Cunnington et al., 2001;Mallol et al., 2007) to suppress impulsive reflexive saccades as the task demanded. Our current model is focused on modulation of saccades by reward-oriented biases rather than by such taskdependent speed-accuracy tradeoffs, which might be implemented by different BG circuit mechanisms (Lo and Wang, 2006). It was reported that, compared to normal subjects, PD subjects on l-DOPA medication are better in positive learning and worse in negative learning, and that PD subjects off medication are better in negative learning and worse in positive learning (Frank et al., 2004). Assuming that DA agonist raises the DA level slightly over the optimal range (i.e., over the D1 and D2 thresholds), our model predicts that a slight elevation over the optimal range could drive the system to over learn, resulting in faster than normal saccadic latencies both in the reward and no-reward trials (result not shown). This conclusion directly parallels the conclusion by Frank et al. 
(2004). One interesting question arises in our model: Why are there two pathways (the direct and indirect pathways) in the BG even though their jobs could apparently be done by just one pathway? It is possible that the two pathways exist to flexibly control the output of the BG. In other words, while many situations require cooperative operations of the direct and indirect pathways, some other situations may call for separate operations of these two pathways. For example, if an animal meets a conflicting situation, such as, food is in sight while a predator is also nearby, an indirectpathway-specific "no go" command may save the animal from the recklessly daring situation. Another possible benefit of having two separate pathways comes from the connectional anatomy of the BG. In the rat, the indirect pathway of the BG receives a majority of its inputs from neurons in deep layers of the cerebral cortex which also project to the motoneurons in the spinal cord, whereas the direct pathway receives a majority of its inputs from neurons in the intermediate layers of the cerebral cortex, some of whose axons also contact contralateral BG (Lei et al., 2004). Assuming that this scheme holds true for primates, it is conceivable that an output from a cortical area is used for a motor command by its direct connection to the spinal cord. At the same time, its corollary connection to the indirect pathway may terminate the command once executed. This kind of mechanism may be beneficial especially when the animal needs to execute several sequential actions in a row. In conclusion, while many daily activities may make use of the synergistic learning involving both the direct and indirect pathways, some other occasions may call for learning in just one of the pathways. This dual pathway design of the BG may make motor behavior more flexible. acknowLedgMents We are grateful to M. Isoda, L. Ding for providing data (monkey T and D, respectively), C. R. Hansen, E. S. Bromberg-Martin for helpful comments. This work was supported by the intramural research program of the National Eye Institute. on adenosine A2a receptors, which couple to the same second messenger cascades as D1 receptors, and are robustly and selectively expressed by indirect pathway MSNs (Schwarzschild et al., 2006). It was demonstrated that antagonizing these receptors (not D1 receptors) disrupted the induction of LTP in indirect pathway MSNs. This type of LTP seems to be NMDA receptor dependent and post-synaptic (Shen et al., 2008). In direct pathway MSNs, D1 receptor activation by DA induces LTP by stimulating adenylyl cyclase therefore promoting phosphorylation processes of a variety of channels, such as AMPA and NMDA and Na+ channels. Note that D1 and D2 receptors target the same chemical agent, adenylyl cyclase, in opposite ways. (Picconi et al., 2003) showed that LTD (or synaptic depotentiation) seen in control animals was absent in the l-DOPA treated animals that had too much phospho[Thr34]-DARPP-32, an inhibitor of protein phosphatase. They reported that this DA-mediated phosphorylation pathway was responsible for the persistent LTP in the cortico-striatal synapses in their l-DOPA subjects, leading to dyskinesia. DA-independent LTD in direct pathway MSNs has also been demonstrated Shen et al., 2008). Notably, this type of LTD in direct pathway MSNs seems to be dependent upon post-synaptic signaling of endocannabinoid CB1 receptors Shen et al., 2008). 
These LTP and LTD processes in direct and indirect pathway MSNs seem to depend, directly or indirectly, on the level of DA in the BG. For example, when DA was depleted, the direction of plasticity changed dramatically: direct pathway MSNs showed only LTD and indirect pathway MSNs showed only LTP regardless of the protocol used (Shen et al., 2008). Picconi et al.'s (2003) report of the chronic high level l-DOPA-induced loss of LTD capability at the cortical-MSN synapses and ensuing behavioral symptoms is another piece of evidence pointing to the importance of DA in the synaptic learning of this region. In summary, experimental results suggest that direct and indirect pathway MSNs express both LTP and LTD, and their direction of plasticity is dependent on the level of DA. dopaMIne hypotheses of reInforceMent LearnIng and behavIor The simulation results of our model predict that while having significant learning deficit, PD patients still show some learning, consistent with the literature (e.g., Behrman et al., 2000;Muslimovic et al., 2007). To be more rigorous, the increments of saccadic latencies in Figure 7D (e.g., ∼125% of normal reward trials shown in Figure 7D red curve) are similar to the known increased saccadic latencies (between 120 and 160% of normal subjects, depending on severity) in human PD patients (White et al., 1983) and MPTP monkeys (Tereshchenko et al., 2002). The simulated saccadic latencies in no-reward trials are also slower than the counterparts of the normal subjects (∼110% of normal subjects' latency during no-reward trials, the blue curve in Figure 7D). Our model also predicts the impact of l-DOPA in the PD subject. As Figure 7D shows, the simulated PD subject with l-DOPA shortens the saccadic latency compared to the non-medicated counterpart, consistent with previous reports (Highstein et al., 1969;Gibson et al., 1987;Vermersch et al., 1994). In contrast, a recent study reports that in well medicated subjects l-DOPA appendIx ModeL equatIons Our model examined the possibility that the plasticity mediated by the dopamine (DA) actions on direct pathway medium spiny neurons (MSNs) and indirect pathway MSNs are responsible for the observed saccadic latency changes in normal and Parkinsonian monkeys. The model circuit was implemented with cell membrane differential equations in Visual C++ using a PC. Below, we describe the architecture of the model, including how it generates the saccade latency. Cortical process The cortico-striatal signal, FEF, is represented as follows: where a, b are constants of 0.7 and 0.6 respectively. For the initial stage of one direction reward (1DR) learning where the rewardcategory (Cg RWD ; Eq. 4) has not been formed, a = 1 and b = 0 are used. I is the visual input representing the target signal given as follows: delay end delay w wise.    (2) t start and t end above represent the beginning (1000 ms) and ending (1100 ms) of the target signal, respectively, coming from the visual area. t delay (50 ms) is the signal delay between the presentation of the visual target and the activation of the frontal eye field (Schall et al., 1995). The visual target stimulus itself was presented from 1000 ms till the end of the outcome (see Figure 4). The star as in ( ) * in Eq. 1 indicates a conduction function, f(x), as follows: The function above is an accelerating function of x, that ensures the output (FEF in Eq. 1) to have a linear response to the input (I in Eq. 2). 
To simulate the recognition of the reward trial, we used a "reward-category neuron" that has a ramping activity leading to a saccade (Lauwereyns et al., 2002;Takikawa et al., 2002) as follows: where, τ c (500 ms) is a time constant for the slow ramping activity of the category neuron. I FIX represents the fixation signal (I FIX = 1, for 800 ms at the beginning of a trial, 0 otherwise) that drives the neuron. The constant b of 100 was used to shut off the activity of the category neuron, once the substantia nigra compacta (SNc) activity (SNc; see Eq. 18), which generates DA, deviates from the current DA level (DA, see Eq. 9) indicating presence or absence of future reward. |x| indicates an absolute value function. The constant a of 1 and 0.4 was used for contralateral reward and no-reward blocks, respectively. This gave a larger ramping activity during contralateral reward trials compared to no-reward trials. Direct pathway The neural activity in the direct pathway of the caudate (CD dr ) is simulated as follows: where ( ) * indicates a conduction function as in Eq. 3. a and b are constants of 0.7 and 0.3 respectively. For the initial stage of 1DR learning where the reward-category (Cg RWD ; Eq. 4) has not been formed, a = 1 and b = 0 are used. w dr above is the synaptic weight between the cortex and the CD dr as follows: where E dr∼ denotes the eligibility trace of the direct pathway neuron (see below); A (1 when I > 0, 0 otherwise; also 1, for 100 ms beginning from the start of the outcome, when there has been a block change) is a cholinergic action in the caudate, deemed to facilitate plasticity mechanism (Shimo and Hikosaka, 2001;Morris et al., 2004); DA is the current level of DA (see below); θ D1 (normal: 0.55, DA depletion: 0.9, Parkinsonian: 0.23) is the threshold of the D1 receptor activation; τ (71 ms) is a time constant for the weight change; a, b are constants of 12 and 0.9 respectively; a, b of 0.06 and 0.06 were used to explain the inefficient learning during the initial learning stage in Figure 3C; w L (0.2) is the lower bound of the synaptic weight. The function [x, θ] p above describes a piecewise linear function which is zero except for values above θ as follows: E dr∼ denotes the eligibility trace of the direct pathway caudate neuron: where τ is a time constant of 33 ms. The eligibility trace acts as a time window where the plasticity is allowed to occur. The concentration of DA, was calculated using a simple integrating function: where SNc is the activity of the DA neurons in the substantia nigra pars compacta. Indirect pathway The neural activity of the caudate neuron in the indirect pathway (CD id ) is described as follows: where w id is the synaptic weight between the CX and CD id as follows: where E id∼ denotes the eligibility trace of the indirect pathway in the caudate neuron and has the same form and parameters as in Eq. 8. A is the cholinergic input as in Eq. 6. w L (0.1) is the lower bound of the weight. θ D2 (normal: 0.25, DA depletion: 0.75, Parkinsonian: 0.23) is the threshold of the D2 receptor activation; τ (71 ms) is a time constant for the weight change; a, b are constants of 0.9 and 12 respectively; a, b of 0.06 and 0.06 were used to explain the inefficient learning in Figure 3C. The thresholding mechanism for D1 and D2 receptors is similar to the one proposed by Brown et al. (2004). The conduction time delay between the cortex and the striatum is assumed to be 1 ms. 
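The appendix equations above are only partially recoverable from this extraction, but the stated constants (the 71-ms weight time constant, a = 12 and b = 0.9 for the direct pathway, the lower bound w_L = 0.2, the 33-ms eligibility trace, and the piecewise-linear threshold function [x, θ]_p) suggest the general shape of the update rule. The sketch below is a rough Python illustration of such a threshold-gated, eligibility-gated weight change (the original model was implemented in Visual C++); the way the terms are combined is an assumed reading, not the paper's exact Equation (6).

```python
def update_trace(E, presyn, tau_e=0.033, dt=0.001):
    # Eligibility trace: relaxes toward the presynaptic (cortical) drive with a
    # 33-ms time constant, providing the time window in which plasticity occurs.
    return E + (presyn - E) / tau_e * dt


def update_weight_direct(w, E, A, DA, theta_d1=0.55, tau=0.071, a=12.0, b=0.9,
                         w_low=0.2, dt=0.001):
    # Assumed reading of the direct-pathway rule: DA above the D1 threshold drives
    # LTP through the piecewise-linear term [DA, theta]_p, gated by the eligibility
    # trace E and the cholinergic signal A; otherwise the weight decays slowly
    # toward its lower bound (DA-independent LTD).
    ltp = a * max(DA - theta_d1, 0.0)
    dw = (A * E * ltp - b * (w - w_low)) / tau * dt
    return max(w + dw, w_low)
```

The indirect-pathway weight would follow the same pattern with the D2 threshold and the stated constants a = 0.9, b = 12 and w_L = 0.1 swapped in.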
Globus pallidus external segment
The simulated GPe neuron gets inhibition from the CD_id and has its own tonic component as follows: where T_GPe (of 10) represents a tonic component. 1/(CD_id* + 1) denotes a shunting form of suppression by the striatum.

Subthalamic nucleus
The activity of the subthalamic nucleus (STN) is simulated as follows: where T_STN (4.0) represents the lumped version of cortical activity that becomes high when there is more than one plan to execute (conflict), and a tonic STN component. The lumped version of cortical activity was used because it is assumed that there is no coactivation of plans at a given time (Frank et al., 2007) during the 1DR trials. The conduction time delay between the cortex and the STN is assumed to be 7.5 ms; between the globus pallidus external segment (GPe) and the STN, 2.5 ms.

Substantia nigra pars reticulata
The simulated substantia nigra pars reticulata (SNr) gets its excitatory input from the STN and inhibitory input from the CD_dr as follows: where a is a threshold of 0.1 and T_SNr (1.5) is a tonic component. The conduction times from the CD and the STN to the SNr are set to 9 ms (Hikosaka et al., 1993) and 2.5 ms (assumed), respectively.

Border region of the globus pallidus (GPb)
To simulate the known physiology of the lateral habenula (LHb)-projecting neurons in the border region of the globus pallidus (GPb), Equation (15) is used, which assigns different GPb activations to large-reward and small-reward trials during a response window following the target. t_start represents the onset time of the target stimulus; 115 and 100 are the known delay of GPb neurons and their firing duration in ms, respectively (Hong and Hikosaka, 2008). Also, GPb was set to 0.9 for a large-reward outcome and 0.1 for a no-reward outcome for 100 ms beginning from the start of the outcome, when there has been a block change. For all trials, the outcome started 150 ms after the saccade and lasted 200 ms (see Figure 4).

Lateral habenula
The lateral habenula is simulated to simply follow the input activity of the LHb-projecting GPb neurons.

Substantia nigra pars compacta
The substantia nigra pars compacta (SNc) is assumed to get inhibitory inputs from the LHb during trials as follows: where T_SNc (normal: 0.5, PD: 0.25) is a tonic component that defines the DA tone in the caudate. a is a constant that defines the upper limit of the SNc activity. It was set to be 1 and 0.27 in the normal subject and the Parkinsonian subject, respectively. The time constant was set to 3.3 ms.

Superior colliculus
The superior colliculus (SC) is assumed to integrate excitatory inputs from the cortex and inhibitory inputs from the SNr as follows: where FEF*/(SNr* + 1) represents a possible shunting nature of the SNr signal to the cortical input. The conduction time delays from the cortex and the SNr to the SC are set to be 1 ms (assumed) and 0.7 ms (Hikosaka and Wurtz, 1983), respectively.

Saccadic reaction time
Reaction time (in ms) of the saccade was calculated from the SC activity as follows: where t_SC is the time point when the SC activity has reached the threshold of saccade initiation (of 0.2); t_start, the beginning of the target signal; M is a scaling factor (173, to consider the different data samples (monkeys) used by Nakamura and Hikosaka (2006), 176 otherwise); and a (1.4, to consider the different data samples (monkeys C, D, and T) used for the initial stage of learning, 1.59 otherwise) is a constant that converts the SC signal to time; SC_peak is the peak activation value of the SC. Because the constant a is subtracted by SC_peak, the RT becomes smaller as the SC activity becomes bigger. Twenty milliseconds is the time delay between the SC saccade command and the initiation of the physical saccade (Robinson, 1972).
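The reaction-time equation itself is garbled in this extraction; only its ingredients survive (the 0.2 SC threshold-crossing time t_SC, the scaling factor M, the constant a from which SC_peak is subtracted, and the fixed 20-ms efferent delay). The Python line below is one assumed combination consistent with that description, not the paper's exact formula.

```python
def saccade_rt_ms(t_sc, t_start, sc_peak, M=176.0, a=1.59, efferent_delay=20.0):
    # Latency grows with the time needed for SC activity to reach threshold and
    # shrinks as the SC peak grows (the a - SC_peak term); 20 ms covers the delay
    # between the SC command and the physical saccade.
    return (t_sc - t_start) + M * (a - sc_peak) + efferent_delay
```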
v3-fos-license
2018-12-11T19:45:43.772Z
2014-03-06T00:00:00.000
54642587
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=43494", "pdf_hash": "fbdadc29f5c81ca1f2db7cd7055595f38fb85ad7", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:328", "s2fieldsofstudy": [ "Physics", "Engineering" ], "sha1": "fbdadc29f5c81ca1f2db7cd7055595f38fb85ad7", "year": 2014 }
pes2o/s2orc
Two-Dimensional Numerical Investigation on Applicability of 45 ̊ Heat Spreading Angle The 45 ̊ heat spreading angle is familiar among thermal designers. This angle has been used for thermal design of electronic devices, and provides a heat spreading area inside a board, e.g. printed circuit board, which is placed between a heat dissipating element and a relatively large heat sink. By using this angle, the heat transfer behavior can be estimated quickly without using high-performance computers. In addition, the rough design can be made easily by changing design parameters. This angle is effective in a practical situation; however, the discussion has not been made sufficiently on the applicability of the 45 ̊ heat spreading angle. In the present study, therefore, the extensive numerical investigation is conducted for the rational thermal design using the 45 ̊ heat spreading angle. The two-dimensional mathematical model of the board is considered; the center of the top is heated by a heat source while the bottom is entirely cooled by a heat sink. The temperature distribution is obtained by solving the heat conduction equation numerically with the boundary conditions. From the numerical results, the heat transfer behavior inside the board is shown and its relation with the design parameters is clarified. The heat transfer behavior inside the 45 ̊ heat spreading area is also evaluated. The applicability is moreover discussed on the thermal resistance of the board obtained by the 45 ̊ heat spreading angle. It is confirmed that the 45 ̊ heat spreading angle is applicable when the Biot number is large, and then the equations are proposed to calculate the Biot number index to use the 45 ̊ angle. Furthermore, the validity of the 45 ̊ heat spreading angle is also confirmed when the isothermal boundary condition is used at the cooled section of the board. Introduction A heat spreading angle is one of the concepts to simplify the heat transfer calculation and has been used for thermal design of electronic devices.This angle is applied for the board, e.g.printed circuit board, placed between a heat dissipating element and a relatively large heat sink, and then assumes a heat spreading area inside the board.While high-performance CFD simulators provide detailed numerical information, such a simplified calculation is also effective in a practical situation because the estimation can be made quickly without using high-performance computers.In addition, the rough design can be made easily by changing design parameters such as dimensions, physical properties and operating conditions, etc. According to Vermeersch and Mey [1], the early paper about this spreading angle goes back to the publication by Balents et al. in 1969.Balents et al. [2] used the constant value of 45˚ as the heat spreading angle and conducted an approximate heat transfer analysis concerning a heat dissipating element mounted directly on a ceramic substrate which was bonded to a metal baseplate.After a while, the 45˚ heat spreading model of Balents et al. was used by Cook et al. [3] in 1976, and thereafter many attempts have been made to modify the value or extend the theory of the heat spreading angle.Nevertheless, the above-mentioned 45˚ heat spreading angle is still familiar among thermal designers. 
The applicability of the 45˚ heat spreading angle was described by several researchers. Masana [4] showed the comparison between the 45˚ heat spreading approach and the Fourier solution for a substrate (thickness: w) on which a square (dimensions: l × l) heat dissipating element was mounted. It was described that the common assumption of 45˚ for the spreading angle got close to the Fourier solution for large w/l. Guenin [5] also compared the 45˚ heat spreading analysis to the finite element analysis, and then concluded that the 45˚ heat spreading approximation provided reasonable accuracy for boards in which the size of the heat source area was greater than the board thickness. Malhammer [6] described briefly that if a 20% error could be accepted, the 45 degree rule was always relevant for small heat sources. Lasance [7] reported that the 45˚ heat spreading angle was acceptable in a single-layer board for h/k < 1 and d > 2 mm, where h was the heat transfer coefficient at the cooled section, and k and d were the thermal conductivity and the thickness of the board, respectively. Furthermore, concerning the limitation, Lasance [7] and Ha and Graham [8] mentioned that the 45˚ heat spreading approach could not be used for multi-layer boards.

The above descriptions are guides in conducting the heat transfer calculation using the 45˚ heat spreading angle. However, they seem to be incoherent; accordingly, a comprehensive investigation is needed for the rational thermal design using the 45˚ heat spreading angle. In the present study, therefore, an extensive numerical investigation is conducted on the applicability of the 45˚ heat spreading angle. For simplicity, a two-dimensional mathematical model is considered and the calculation is conducted in a cylindrical coordinate system. Based on the numerical results, the validity of the 45˚ heat spreading angle and its relation with design parameters are discussed.

Mathematical Modeling

Figure 1 shows the analytical system. The heat transfer characteristics inside the board are analyzed in a cylindrical (r − z) coordinate system. This board is essentially disk-shaped, and has the thickness of z_b. The center (radius: r_h) of the top is heated by a heat source while the bottom (radius: r_c) is entirely cooled by a heat sink. The 45˚ heat spreading angle is shown in Figure 2.

The governing equation is given by

(1/r) ∂/∂r(r ∂T/∂r) + ∂²T/∂z² = 0,  (1)

and the following boundary conditions are applied for the heated section (0 ≤ r ≤ r_h, z = 0) and the cooled section (0 ≤ r ≤ r_c, z = z_b), respectively:

−λ ∂T/∂z = q,  (2)

−λ ∂T/∂z = α(T − T_f),  (3)

where λ is the thermal conductivity of the board, q the heat flux from the heat source, α the heat transfer coefficient at the cooled section and T_f the cooling fluid temperature. The following adiabatic boundary condition is applied at the surfaces except for the heated and the cooled section:

∂T/∂n = 0,  (4)

where n is the coordinate normal to the boundary surface.

The governing equation is discretized by the control volume method [9] and solved numerically with the boundary conditions. The analytical solution of Equations (1)-(4), having infinite series and Bessel functions, was also shown by Lee et al. [10]. However, because the analytical solution is complicated and time-consuming, the numerical procedure is adopted here to obtain the results quickly. The numerical calculation is conducted by changing the parameters as shown in Table 1 at z_b = 4.0 mm, q = 1.0 W/cm² and T_f = 20˚C.
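As a rough cross-check of the model just described, the following Python sketch solves Equations (1)-(4) on a coarse grid with simple finite differences and Gauss-Seidel iteration. The paper uses the control volume method; the grid size, tolerance, and the example values of r_h and r_c below are illustrative assumptions (Table 1 is not reproduced here), while z_b, q and T_f follow the stated values.

```python
import numpy as np

# Illustrative parameters: z_b, q, T_f as stated; lam, alpha from the ranges in the
# figures; r_h and r_c are assumed example values (Table 1 not reproduced here).
r_c, r_h, z_b = 0.05, 0.01, 0.004             # board radius, heated radius, thickness [m]
lam, alpha, q, T_f = 0.4, 100.0, 1.0e4, 20.0  # W/(m K), W/(m2 K), W/m2, deg C

nr, nz = 51, 9
dr, dz = r_c / (nr - 1), z_b / (nz - 1)
r = np.linspace(0.0, r_c, nr)
T = np.full((nr, nz), T_f)                    # T[i, j] at (r_i, z_j); z = 0 is the heated face
heated = r <= r_h

for sweep in range(20000):
    T_old = T.copy()
    for i in range(1, nr - 1):                # interior: (1/r) d/dr(r dT/dr) + d2T/dz2 = 0
        for j in range(1, nz - 1):
            ae = (r[i] + 0.5 * dr) / (r[i] * dr**2)
            aw = (r[i] - 0.5 * dr) / (r[i] * dr**2)
            az = 1.0 / dz**2
            T[i, j] = (ae * T[i + 1, j] + aw * T[i - 1, j]
                       + az * (T[i, j + 1] + T[i, j - 1])) / (ae + aw + 2.0 * az)
    T[0, :] = T[1, :]                         # axis of symmetry
    T[-1, :] = T[-2, :]                       # outer rim: adiabatic, Eq. (4)
    T[heated, 0] = T[heated, 1] + q * dz / lam            # heated section, Eq. (2)
    T[~heated, 0] = T[~heated, 1]                          # rest of the top: adiabatic
    T[:, -1] = (lam * T[:, -2] / dz + alpha * T_f) / (lam / dz + alpha)  # cooled face, Eq. (3)
    if np.max(np.abs(T - T_old)) < 1e-6:
        break

print("Peak board temperature: %.1f deg C" % T.max())
```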
Dimensionless Parameters

The dimensionless temperature, θ, and the dimensionless coordinates, R, Z, N, are defined by normalizing the temperature rise T − T_f and the coordinates r, z, n, with the board thickness z_b taken as the length scale. Accordingly, the dimensionless governing equation and the dimensionless boundary conditions for the heated section (0 ≤ R ≤ R_h, Z = 0), the cooled section (0 ≤ R ≤ R_c, Z = 1) and the other adiabatic section are expressed in terms of R_h, R_c and Bi, where

R_h = r_h/z_b,  R_c = r_c/z_b,  Bi = α z_b/λ

are the dimensionless radii and the Biot number. It should be noted that the heat transfer behavior inside the present model is characterized by R_h, R_c, and Bi.

Temperature Distribution

The representative numerical results are shown in Figures 3 and 4. The effect of λ is shown at α = 100 W/(m²·K) and α = 5000 W/(m²·K). Figure 3 is the temperature contour inside the board, and Figure 4 the temperature distribution at the cooled section (the bottom surface of the board). It should be noted that in Figure 4, the scales of the vertical axis for α = 100 W/(m²·K) and α = 5000 W/(m²·K) are different. In addition, if the conductive thermal resistance inside the board is neglected, the temperature of the board becomes uniform; the uniform temperature, T̄, is then obtained from the overall heat balance qπr_h² = απr_c²(T̄ − T_f), i.e., T̄ = T_f + q r_h²/(α r_c²). The value of T̄ is also shown in Figure 4.

When α = 100 W/(m²·K) and λ = 0.4 W/(m·K), it is observed that the contour lines inside the board are crowded near the heated section. Accordingly, the temperature at the cooled section changes conspicuously near the heated section and then approaches the cooling temperature of T_f = 20˚C. In this case, because the conductive thermal resistance inside the board is relatively large, the heat flow is not spread out but concentrated near the heated section. When α = 100 W/(m²·K) and λ = 400 W/(m·K), on the other hand, it is observed that except for the region near the heated section, the contour lines are almost perpendicular to the top/bottom surfaces and the temperature at the cooled section is very close to the uniform temperature of T̄ = 24.0˚C. In this case, the heat is spread out over the board. It is also confirmed that the temperature distribution becomes flatter and approaches T̄ with the increase in λ.

When α = 5000 W/(m²·K), it is observed that the temperature inside the board is entirely lower than that in the case of α = 100 W/(m²·K). Furthermore, the temperature at the cooled section is very close to the uniform temperature of T̄ = 20.1˚C even in the case of λ = 0.4 W/(m·K). This is attributed to the decrease in the convective thermal resistance at the cooled section, which enhances the heat transfer in the z direction.
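The Biot number and the lumped (uniform-temperature) estimate can be checked with a few lines of arithmetic. In the snippet below, Bi = α z_b/λ reproduces the Bi = 0.001 quoted for λ = 400 W/(m·K) and α = 100 W/(m²·K), and the heat-balance estimate reproduces the reported T̄ = 24.0˚C and 20.1˚C; the values r_h = 10 mm and r_c = 50 mm are assumed example dimensions consistent with those figures.

```python
z_b, q, T_f = 0.004, 1.0e4, 20.0          # m, W/m2, deg C (stated values)
r_h, r_c = 0.010, 0.050                   # m (assumed example dimensions)

for lam in (0.4, 400.0):                  # W/(m K)
    for alpha in (100.0, 5000.0):         # W/(m2 K)
        Bi = alpha * z_b / lam            # Biot number
        T_bar = T_f + q * r_h**2 / (alpha * r_c**2)   # uniform-board heat balance
        print(f"lam={lam:6.1f}  alpha={alpha:6.0f}  Bi={Bi:8.3g}  T_bar={T_bar:5.2f} C")
```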
The temperature distribution at the cooled section is moreover shown in Figure 5 in the dimensionless form. The numerical results are shown by changing all dimensionless parameters of R_h, R_c and Bi. It should be noted that the scales of the vertical axis for R_h = 1, R_h = 2 and R_h = 4 are different. In addition, the value of Bi is also indicated in Figure 3 for reference. Based on Bi, the heat transfer behavior inside the board is characterized as follows. When Bi is very small, namely Bi = 0.001, because the conductive thermal resistance inside the board is very small compared to the convective thermal resistance at the cooled section, the temperature is almost uniform irrespective of R_h and R_c. When Bi = 0.1, the temperature distribution is conspicuous due to the increase in the conductive thermal resistance inside the board. Furthermore, when Bi = 10, because of the decrease in the convective thermal resistance at the cooled section, the temperature is entirely lower than that in the case of Bi = 0.1.

Because the heated and the cooled area are prescribed by R_h and R_c respectively, it is expected that the temperature entirely increases with the increase in R_h or the decrease in R_c. However, due to the concentration of heat flow near the heated section, it is found that the temperature distribution is hardly affected by R_c, namely, the board width, when Bi is large (Bi = 10).

45˚ Heat Spreading Characteristics

In order to evaluate the 45˚ heat spreading characteristics, the index, η, is defined here by the ratio of the heat transfer rate inside the 45˚ heat spreading area to the total heat transfer rate through the board, and is calculated from the numerical results. Figure 6 shows the relation between η and λ, changing α and r_c as parameters. As a reference, the values of η = 0.80 and η = 0.90 are indicated by lines in this figure. As expected, it is observed that η increases with the decrease in λ because of the concentration of heat flow near the heated section; η increases with α because of the enhancement of heat transfer in the z direction. It is also confirmed that a high value of η such as η = 0.80 and η = 0.90 is obtained for small λ and large α, which corresponds to relatively large Bi.

The numerical calculation is moreover conducted by using an isothermal boundary condition, Equation (13), in which the temperature of the cooled section is fixed. This boundary condition is applied at the cooled section and then the numerical result of η is also shown in Figure 6. It is found that irrespective of the numerical conditions, a high value of η (η ≈ 0.91) is obtained under Equation (13), implying that the 45˚ heat spreading angle is applicable when the isothermal boundary condition is used at the cooled section.
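Since the paper's expression for η is lost in this extraction, the helper below shows one consistent way to evaluate it from a numerical solution: integrate the convective heat flux over the cooled face and take the fraction that leaves within the 45˚ footprint r ≤ r_h + z_b. Feeding it the radius array and the cooled-face temperatures from a solver like the sketch above reproduces the trend of Figure 6; the function name and interface are illustrative.

```python
import numpy as np

def spreading_fraction(r, T_cooled, alpha, T_f, r_h, z_b):
    # Local convective flux on the cooled face, integrated over rings 2*pi*r*dr.
    flux_ring = alpha * (T_cooled - T_f) * 2.0 * np.pi * r
    total = np.trapz(flux_ring, r)
    inside = r <= (r_h + z_b)                 # 45-degree footprint at the cooled face
    return np.trapz(flux_ring[inside], r[inside]) / total
```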
By changing all dimensionless parameters of R_h, R_c and Bi, η is rearranged as shown in Figure 7 in the dimensionless form. It should be noted that η = 1 is obtained irrespective of Bi when R_h = 4 and R_c = 5, because the cooled section is completely included within the 45˚ heat spreading area. This trivial case is not shown in this figure. The values of η = 0.80 and η = 0.90 are also indicated by lines in this figure. As shown, η is well characterized by R_h, R_c and Bi. It is observed that irrespective of R_h and R_c, η increases with Bi, and a relatively large increase in η is obtained between Bi = 0.01 and Bi = 10. Furthermore, it is also found that the value of η is not affected by R_c when Bi is large. This is due to the concentration of heat flow near the heated section. From Figure 7, the values of Bi when η = 0.80 and η = 0.90 are obtained, and these values, denoted by Bi|η=0.80 and Bi|η=0.90 respectively, are shown in Figure 8. Since η is not affected by R_c for large Bi as mentioned above, Bi|η=0.80 and Bi|η=0.90 depend only on R_h. Moreover, the relations between Bi|η=0.80, Bi|η=0.90 and R_h are expressed as Equations (14) and (15), which are also shown in Figure 8. It is confirmed that the 45˚ heat spreading angle is applicable when Bi is relatively large, as expected, and the specific numerical values of Bi are obtained by Equations (14) and (15) when 20% and 10% errors are accepted, respectively.

Thermal Resistance

When the 45˚ heat spreading angle is used, the thermal resistance of the board, R_45, is simply obtained as Equation (16); the derivation of Equation (16) is shown in the Appendix. From the numerical results, on the other hand, the thermal resistance of the board, R_b, is also obtained by subtracting the convective thermal resistance at the cooled section from the total thermal resistance, using the average temperature at the heated section, T_h,ave. By using the ratio of R_45 to R_b, the comparison between these thermal resistances is shown in Figure 9, changing all dimensionless parameters of R_h, R_c and Bi. Irrespective of the numerical conditions, it is found that the value of R_45/R_b is less than 1, implying that R_45 is smaller than R_b. This is owing to the fact that the effect of the conductive thermal resistance inside the board in the r direction is neglected in Equation (16). It is also observed that the change in R_45/R_b with Bi and R_c is similar to that in η, while in the range of R_45/R_b > 0.8, the effect of R_h on R_45/R_b is smaller than in the case of η. Therefore, Bi = 10 is the index to use Equation (16) when 20% error is accepted.

Conclusions

The numerical analysis is conducted on the applicability of the 45˚ heat spreading angle inside the board. In the present calculation range, the main findings can be summarized as follows:

1) The heat transfer behavior inside the board is well characterized by the three dimensionless parameters: the two dimensionless widths of the heated and the cooled section, and the Biot number.

2) However, when the Biot number is large, the heat transfer behavior is hardly affected by the dimensionless width of the cooled section due to the concentration of heat flow near the heated section.

3) The validity of the 45˚ heat spreading angle is confirmed when the isothermal boundary condition is used at the cooled section.

4) The 45˚ heat spreading angle is applicable when the Biot number is large. The equations are proposed to calculate the Biot number index to use the 45˚ heat spreading angle.

5) The thermal resistance of the board obtained by the 45˚ heat spreading angle can be used with 20% error when the Biot number is 10.

Figure 8. Biot number index to use the 45˚ heat spreading angle.

Figure 9. Thermal resistance of board: comparison between the 45˚ heat spreading approach and numerical results.
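For reference, the closing sketch below evaluates the 45˚ spreading resistance in the form commonly used for a circular heat source on a disk: the conducting area at depth z is taken as π(r_h + z)², and 1/(λA) is integrated across the thickness, giving R_45 = z_b/(πλ r_h (r_h + z_b)). This closed form is consistent with the 45˚ construction discussed above, but the paper's exact Equation (16) is not reproduced in this extraction, and the example dimensions are the same assumed values as before.

```python
import math

def r45_resistance(r_h, z_b, lam):
    # Integral of dz / (lam * pi * (r_h + z)**2) from z = 0 to z = z_b.
    return z_b / (math.pi * lam * r_h * (r_h + z_b))

print(r45_resistance(r_h=0.010, z_b=0.004, lam=0.4), "K/W")
```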
v3-fos-license
2019-05-04T13:07:05.954Z
1999-10-01T00:00:00.000
144127101
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://shareok.org/bitstream/11244/25157/1/10.1177.088572889902200206.pdf", "pdf_hash": "5d158b8b7867466cffbb10b2e7011da4b38c60c1", "pdf_src": "Sage", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:336", "s2fieldsofstudy": [ "Education" ], "sha1": "6397437fc1b05c17c0838042c9f5e950bf95696d", "year": 1999 }
pes2o/s2orc
Video-based Simulations: Considerations for Teaching Students with Developmental Disabilities The use of video-based multimedia simulations for teaching functional skills to persons with developmental disabilities remains an unexplored application of technology for this group. This article examines the historical literature in this area, and discusses future considerations, design issues, and implications of using multimedia simulations. Implementation issues are presented, and suggestions regarding design, development, and application of multimedia simulations are offered. Considerations address the importance of appropriate role modeling and the combination of video-based simulation and in vivo training to foster generalization and maintenance in the context of transition to the real world. maintenance in the context of transition to the real world. The results of research indicate that successful community-based living and employment for persons with developmental disabilities is linked to the functional skills and knowledge they possess relevant to their home, community, and employment situations (Cuvo 8c Klatt, 1992;Wolfe, 1994). When these skills are taught in the context of community-based instruction, learning is more efficient and long term (Langone, 1990). Community-based instruction is a powerful approach for teaching functional living and vocational skills and has been shown to enhance students' learning of skills needed to become independent in their adult lives (McDonnell, Hardman, Hightower, Keifer-O'Donnell, & Drew, 1993;Snell & Browder, 1986). One issue associated with the delivery of community-based instruction is that the logistics of getting students to training sites frequently enough for learning to occur can pose problems because of travel and scheduling con-flicts. Teachers faced with such logistical problems often resort to developing more traditional classroom-based activities. Traditional classroom-based activities (e.g., lecturing about appropriate shopping skills, modeling social skills necessary to get along with coworkers) often do not provide an accurate depiction of the outcomes that are desirable for the target learners and will not facilitate transfer of learned skills to natural environments (Stokes & Osnes, 1989). Classroom-based simulations have been used with some success. However, efforts to reproduce accurate depictions of community sites can be time and cost intensive (Morrow & Bates, 1987). What is needed is a set of instructional techniques that can provide learners with accurate depictions (e.g., compressed video, photographs) of the multiple exemplars located in community settings in an efficient manner. Computer-based instructional applications can provide learners with almost limitless examples to support important concepts and skills. These technology solutions can also provide students with the ability to interact with the electronic environment, as opposed to being passive learners. Technology solutions related to computer-based instruction have offered many teachers the ability to provide daily skill practice for students with mild or learning disabilities (e.g., Lewis, 1993;Lindsey, 1999;and Male, 1997). As research in computer-based instruction has advanced, results indicate that instruction which incorporates a video-based component to teach concepts or skills creates a more realistic learning environment than the regular classroom (Cognition and Technology Group at Vanderbilt [CTGV] , 1993a;. 
The video provides a visual anchor for students to relate their learning to real-life situations. The authenticity of the video environment appears to create a classroom learning situation that is representative of the typical environment in which the learner will ultimately be performing the skills. Considering the potential of video-based computer instruction for providing more realistic simulated environments within the classroom, there is a paucity of research that examines the effectiveness of this technology for supplementing functional skills that are also taught concurrently in community environments. The purposes of this article are to (a) present instructional techniques that can be applied to the design and development of video or computerbased activities, (b) review the use of computer-based and video-based instruction for students with developmental disabilities, and (c) discuss implications for the application of multimedia simulations. The major focus of this article is to review the research on the use of computer and video technology for students with developmental disabilities. This review also will provide implications for the possible effectiveness of instructional technology to teach community-based skills. We recognize that in an ideal teaching situation, students with develop-mental disabilities should be taught in community settings as much as possible. However, due to the constraints mentioned earlier (e.g., transportation and scheduling), many students make infrequent visits into the community. The need for alternative methods would be greatest for those skills that present the practical problems of cost, safety, travel, and time and for those skills that have a high priority for promoting student independence. For teachers who are unable to provide sufficient community-based instruction because of logistical constraints, technology-based instructional tools are needed that present their students with relevant instructional contexts with interactive practice related to real-life activities. Such tools designed to augment instruction in natural settings may present an efficient set of strategies that can ultimately decrease the time involved to master skills once the students do interact in community environments. BEHAVIORAL INSTRUCTION AND TECHNOLOGY The extant literature on general case programming demonstrates that the use of behavioral strategies can be highly effective in teaching functional community-based skills (e.g., McDonnell et al., 1993). In addition, a technology-based instructional approach, &dquo;anchored instruction,&dquo; has been successfully used to provide learners with rich video-based environments that serve to assist them in concept development, application of academic skills, and problemsolving behaviors (CTGV, 1993a). These approaches are highly compatible since the technology involved in providing students with video environments and examples of key exemplars can overcome the problem of spending large amounts of time in natural settings. Research in the areas of community-based instruction and instructional technology can provide us with a knowledge base for creating effective instructional tools. Instructional technology research contributes information on how to create visual models and interactive environments. This research base and its link to creating effective simulations are examined in the next section. 
Effective Simulation Techniques The term &dquo;simulation&dquo; can be used to describe a wide variety of instructional activities, methods, or materials. Computer-based simulations have been used to represent the functioning of another system or event, real or fantasy, providing an experiential awareness of a process or event. For students with developmental disabilities, simulations must be age-appropriate and functional, providing a method for the students to achieve performance in natural settings (Horner, McDonnell, & Bellamy, 1986). Therefore, the medium chosen for development of the simulation should approximate that natural environment and provide the teacher with a tool for increasing the experiences for interaction. In order to develop effective instruction for students with developmental disabilities, teachers have to decide on which skills to train. Considerations for a community skill include the importance of the skill in fostering independence and transition. A number of publications provide readers with detailed information regarding the use of ecological inventories for identifying important skills for instruction (e.g., Snell & Browder, 1986). Teachers may need to develop alternative activities or simulations for those skills that would be the most difficult to train in the community. For example, training the skills for grocery shopping, requiring trips to different stores, or street crossing in high-traffic areas, would be more difficult than training laundry skills if the facilities are available at the school. Recognizing that simulation training should be used as an adjunct to, not a replacement for, community training, McDonnell 8c Ferguson (1988) compared the effectiveness and efficiency of training in the natural environment to instruction in both the natural environment and the classroom. Instruction in the natural environment was more efficient (as measured as the number of trials to criterion and the cost of the instruction) than the instruction in both environments, but the strategy using both environments promoted greater generalized responding. Factors concerning generalization need to be considered when evaluating or designing the use of a simulation. Generalization should be assessed (a) in the community setting and not just the simulated setting and (b) in several non-trained natural settings. Without these generalization components, the effectiveness of the use of a simulation to train the performance of behaviors across the full range of situations cannot be measured (McDonnell & Horner, 1985). The results of research indicate that in order for students to perform skills and generalize those skills in a variety of settings, they had to be taught in several natural settings (McDonnell, Horner, & Williams, 1984). Given the problems of training in the community and the additional problems of providing multiple experiences in the natural setting, researchers explored the use of simulations as an alternative for providing a range of experiences. Neef, Lensbower, Hockersmith, DePalma, and Gray (1990) compared the learning of students who were trained in laundry skills with either single case (using one example) or general case (using multiple examples) simulation and natural settings. Students who were taught with a range of examples using general case programming in either the simulated or natural settings were able to generalize to untrained machines better than the students training with a single case were. 
Instruction, therefore, should be designed to provide multiple experiences or examples of the natural environment, whether in the actual setting or with a simulation of that setting. To illustrate, components from general case programming as suggested by Horner, Albin, and Ralph (1986) and Nietupski, Hamre-Nietupski, Clancy, and Veerhusen (1986) still provide accurate guidance for today's teachers. Their recommended steps include (a) defining the instructional universe, (b) determining the range of stimulus and response variations facing the students, (c) systematically varying simulations to provide a sufficient range of training exemplars, (d) sequencing and teaching the examples, (e) using community performance data to modify the simulations, (f) using simulations to provide intensified practice in problem areas, (g) testing with non-trained probe examples, and (h) conducting training in one natural training session combined with in-class simulation training. Applying these techniques, teachers may choose a range of examples to be taught in both the simulated and the natural setting. Simulations can be used to provide extended intense practice; additionally, training is conducted in at least one natural setting.

In summary, the research on simulation training for community skills such as street crossing (Page, Iwata, & Neef, 1976), bus riding (Neef, Iwata, & Page, 1978), vending machine operation (Browder, Snell, & Wildonger, 1988), restaurant use (McDonnell & Ferguson, 1988; Nietupski, Clancy, Wehrmacher, & Parmer, 1985; van Den Pol, Iwata, Ivancic, Page, Neef, & Whitley, 1981), and laundromat use (Morrow & Bates, 1987) points to several recommendations for instruction and the development of any simulation training program. First, training conducted in several community locations can produce generalized responding. Second, simulation training can also produce generalized responding by incorporating a minimal amount of training in the natural environment. Third, testing for the generalization of the skills must occur in several additional novel locations. Finally, effective simulations must closely resemble the natural environment in appearance and in the behavioral response. Incorporating these recommendations into anchored instruction can assist teachers in creating simulations that more adequately depict real-life activities.

Anchored Instruction

Situated cognition has received considerable attention in the literature over the past decade. Brown, Collins, and Duguid (1989) and Lave (1988) believe that learning should be cognitively situated to take place in realistic settings under the guidance of "experts" who assist learners with the knowledge they need to solve problems and provide the cultural indoctrination necessary to be successful in certain community environments. A technology-based application of situated cognition has been termed "anchored instruction" (CTGV, 1990, 1993b). This application represents the use of visually rich video-based instructional contexts that emphasize learning through the application of knowledge and concepts to problems encountered in realistic environments. This approach contrasts with other methods of learning where knowledge and meaning are acquired through abstract activities to be stored in memory for later retrieval, such as memorizing community-relevant words out of context (Young, 1993).
When students with developmental disabilities are faced with socially related problems in work settings, they often have problems choosing the most appropriate response. Using anchored instruction, these students could be presented with short computer-based scenarios that provide video models of how students with similar characteristics handle such problems. Because of its link to computer-based systems (as opposed to videotaped examples), anchored instruction has several advantages. Initially, computer-based anchored instruction can provide learners with large numbers of video examples linked to specific problems and solutions. The learner can access these examples almost instantaneously, thus allowing students a close link between the instruction and the video anchor. Researchers and instructional developers continue to demonstrate that technology can provide one type of situated learning by using video that is "anchored" to academic (e.g., math, science) concepts, providing learners with examples of how experts use knowledge as tools to solve problems (CTGV, 1990, 1993a). Students, and the teachers who facilitate their learning, explore video environments rich in information and examples that are used to solve increasingly complex problems (CTGV, 1993a; Young, 1993). The research supporting the effectiveness of situated learning in general (Griffin, 1995), and anchored instruction specifically (CTGV, 1993a, 1994), is promising and provides the opportunity to apply this technology-based situated learning approach to other instructional problem areas (Hedberg & Alexander, 1994). For students with developmental disabilities, the development of video-based multimedia simulations (i.e., anchored instruction) may enhance the possibilities for instruction in a way that provides a better bridge between classroom-based and community-based instruction, allowing for repeated practice in the classroom.

Computer and Video-based Instruction

There is no singular seminal article that addresses "computer-based" or "video-based instruction" related to independent living and vocational skills for students with developmental disabilities. A majority of the studies researching the effectiveness of technology in special education have been done with students with mild intellectual disabilities and seem to have focused on the effects of computer-assisted academically related instruction (see Okolo, Bahr, & Rieth, 1993; and Woodward & Rieth, 1997 for reviews in this area). In contrast, the use of technology with students with more moderate developmental disabilities has generally focused on the adaptation of devices to compensate for a disability condition (i.e., adaptive positioning equipment), use of assistive technology to assist the student to perform specific tasks (i.e., single switches, braille readers, or voice synthesis) that provide increased opportunities for independence (Church & Glennen, 1992; Garner & Campbell, 1987), or training of students to understand cause-and-effect relationships.

Computer-based instruction.
The literature provides evidence that computer-based instruction has been effective with persons who have more significant developmental disabilities in five areas related to functional and independent living skills: (a) establishing cognitive concepts such as cause-andeffect behaviors (Iacono & Miller, 1989;Robinson, 1986), (b) teaching functional academic skills such as sight words and arithmetic facts (Conners & Detterman, 1987;Lally, 1981;McGregor & Axelrod, 1988;Powers & Ball, 1983;Thorkildsen, Allard, 8e Reid, 1983), (c) developing motivational leisure skills (Duffy & Nietupski, 1985; McGregor 8c Axelrod, 1988; Powers & Ball, 1983;Sedlack, Doyle, & Schloss, 1982), (d) providing communication aids using assistive technology (Iacono & Miller, 1989), and (e) preparing computer-related job skills (Esposito & Campbell, 1987) . Literature that demonstrates the effectiveness of computer-based instruction for teaching basic skills or for fostering leisure skills provides evidence that with additional technological advances (i.e., video anchors) the power of this instruction can increase. Video-based instruction. In the historic literature, Stephens and Ludy (1975) pointed out the value of using video, in the form of motion picture or videotaped instruction, for teaching action concepts (e.g., running, walking, drinking, standing) to students with mild or moderate disabilities. In a comparison of motion picture training with slides or live demonstration, the group receiving motion picture training performed better than either of the other groups. Using motion picture or videotape for instruction was better than slide training because teachers showed the skills involving motion in a systematic manner and maintained the students attention. Haring, Kennedy, Adams, and Pitts-Conway (1987) indicated that students with autism were able to acquire and generalize social skills for grocery shopping after watching videotapes showing the correct process. Using the videotapes, the teachers had the flexibility to stop the tape and discuss the skills with the students. The students were first taught shopping skills within the natural environment, and the videotape was used to model and reinforce their social behaviors, enhancing the teacher's ability to promote generalization of the responses. Although the students improved in their shopping skills with training in the natural environment, their use of appro-priate social responses for shopping only increased after watching the videotapes. Morgan and Salzburg (1992) conducted two studies designed to isolate the effects of video-assisted training from those of modeling and practice. Three adults with severe disabilities were able to communicate with words or short phrases and verbally identified problem situations in work settings. Only one adult was able to generalize to the video probe and the work setting. Results indicated that only after the other two participants viewed the video and then actively rehearsed the situations were they able to generalize to the video probe and thereafter, the work setting. Although these traditional uses of motion video proved to be effective with the students in the Haring et al. (1987) and Stephens and Ludy (1975) studies, the role of their students was essentially passive; they sat and watched. Morgan and Salzburg (1992) also indicated that the addition of active participation increased the participants' ability to generalize to the work setting. The dimension of direct participation was missing. 
Interactive video or multimedia is an instructional technology which combines computer-assisted instruction and some form of educational video so that the learner can participate directly with the activity. Interactive-Video Instruction Payne and Antonow (1982) developed two types of interactive video programs because they felt that observation of the physical responses of the students would assist in determining if, and at what rate, skill acquisition was taking place. The first program gave the students the same feedback regardless of the response, and the second program either advanced to the next segment or repeated the segment contingent upon the response. Preliminary use of this program indicated that the students attended to the video and made an effort to complete the exercises. By modifying these preliminary programs-but using video equipment similar to that described by Payne and Antonow (1982), Crusco et al. (1986) extended the work to determine the skills necessary for students with moderate to severe disabilities to use interactive video independently. The students were observed for sitting in seat, maintaining eye contact, responding to verbal prompts, pushing the correct button, and avoiding incorrect, inappropriate, or excessive button pushing. Only two of the twenty students were able to learn all the steps, indicating that the medium did not appear to be appropriate as an independent learning station for students with moderate or severe intellectual disabilities. These results indicate support for the combination of computer-based anchored instruction and community-based activities. Additional research that demonstrates the link between the two approaches is needed. In another example of the use of interactive videotape, taught social skills to high school students with mild or moderate intellectual disabilities. The social skills taught were generic in nature (e.g., ways of solving problems in a variety of community and job situations). A field study conducted using these materials (Browning, White, Nave, & Zembrosky-Barkin, 1986) with students with mild and moderate disabilities indicated gains in the students' knowledge and application of the curriculum skills. All students including those with moderate disabilities improved from pre-to post-test, indicating that the program was appropriate for high school students with this type of disability. The Interactive Laserdisc for Special Education Technology program (Thorkildsen et al., 1983) developed and tested seven instructional interactive laserdisc programs for students with developmental disabilities. One program, &dquo;Matching and Prepositions,&dquo; designed as individual learning stations for students with moderate to severe disabilities, could not be used by the students independently. Findings provide additional evidence that technologybased instruction is most effective when combined with other instruction. Wissick, Lloyd, and Kinzie (1992) studied the effects of an interactive laserdisc-based grocery store simulation with three students with mild to moderate intellectual disabilities. Using a multiple-baseline-across-subjects design, students were assessed on the number of actions to locate and purchase an item for snack. All three students decreased the number of actions or steps needed to locate items in both the simulated and natural situations. 
Similarly, Langone, Shade, Clees, and Day (in press) used a multipleprobe-across-subjects design to evaluate the effectiveness of a multimedia computer-based instructional program using photographs of target stimuli (i.e., cereal boxes as they appear on grocery store shelves) in an attempt to increase the likelihood that selection of specified cereal boxes would generalize to the grocery stores in the community. Students with moderate to severe disabilities were evaluated in both the simulated and the natural settings to measure the generalization of the discriminations. Results indicated that the mean and median duration to find target cereals decreased, coupled with an increase in the number of cereals found. The literature indicates that there is promise for the use of technology; however, more research will be required to determine the exact relationship between these technology solutions and the actual generalization of the target skills in community environments. The use of the technology must replicate the community situation to enhance the learning environment for students and teachers. In this way, realistic exemplars can provide the teachers with a flexible and adaptable tool, and afford students increased opportunities for direct participation. Moreover, there is growing evidence that instructional applications of technology that provide video-based or multimedia simulations to teach functional living skills also provide repeated practice for students with developmental disabilities (Langone et. al., in press;Wissick et al., 1992). IMPLICATIONS FOR DESIGNING MULTIMEDIA SIMULATIONS The effectiveness of multimedia instruction for teaching functional skills has been demonstrated in a number of studies (Alcantara, 1994;Haring et al., 1987;Haring, Breen, Weiner, Kennedy, & Bednersh, 1995;McDonnell et al., 1984;Wissick et al., 1992). Much of the literature suggests that simulation training is most effective when paired with in vivo training (McDonnell & Horner, 1985;McDonnell, Horner, & Williams, 1984). Some research indicates that computer-based simulations alone may produce generalization and therefore be at least as effective as in vivo training in establishing skills in community-based settings, and potentially more cost effective (Langone et al., in press). This places &dquo;the onus on teachers...to provide environmental arrangements that establish discriminative stimuli associated with normative behaviors and reinforcers...&dquo; (Clees, 1995, p. 125). Design of Video-based Simulations The primary goal in creating a multimedia simulation is to enhance the learning environment. Such a simulation combines the capabilities of the computer and the video to present realistic representations of the environment, to provide appropriate feedback depending on the response, and to enable the teacher to review difficult material or steps in responding. Evaluation of the simulation's effectiveness is measured by the student's skills in the community or vocational situation. Current technology provides the teacher with tools to create situationappropriate simulations. As with the development of any instructional program for students with disabilities, a teacher first targets the functional adult living skills that the student needs to develop. These skills have been categorized into four domains: vocational, leisure/recreational, domestic living, and community living (Snell & Grigg, 1987). 
In addition, an evaluation of the student's school, home, and community environments assists the teacher in developing appropriate functional goals. To design video simulations for the identified skills, teachers can employ the steps from general case programming. Table 1 provides an example of general case programming applied to multimedia simulation development. Using the general case model, teachers can create a checklist of skills and develop an instructional design for the multimedia simulation video anchors. After creating a checklist of skills for the design, the instructional delivery (including the technology tools to develop the simulation and the Table 1 Designing Video-based Simulations Using the General Case Model type of instructional setting for the simulation) needs to be considered. Instructional Delivery Past studies used videotape and videodisc technologies to present instructional scenarios and demonstrated that these scenarios did positively effect learned behaviors and potentially assistedJearners in generalizing skills (e.g., Haring et al., 1987;Wissick et al., 1992). Unfortunately, videotape technology is linear in nature and requires considerable difficulty in locating and repeating important sequences on the tape that students may need to view for additional practice. Videodisc technology overcomes this weakness but requires considerable expense to create each videodisc, and the medium is limited by a relatively short number of video sequences that can be stored on one disc (i.e., 30 minutes of full-motion video). Teachers need to know the both the limitations of the media and the potential of current technologies available to design simulations. If a video camcorder or a camera is used to take realistic photos in community or vocational settings, conversion to a digital format using a video capture board in a computer or a scanner is necessary. Fortunately, new digital cameras save photographs on a diskette and camcorders record in digital video format, providing easy portability to any multimedia program. Alternatively, teachers could have regular photographic film developed and indexed on a Kodak TM CD-Rom. They could then incorporate the photos using multimedia programs such as HyperStudio or PowerPoint (Microsoft, 1998). Once developed, simulations that employ photographs and compressed video stored on a CD-ROM may allow for considerably more flexibility than previous technological solutions. This technology provides easy access to fullmotion or compressed video to (a) support instructional procedures designed to teach concepts, (b) highlight (increase the salience of) relevant dimensions of stimuli, and (c) allow for virtually unlimited practice. Alternatively, web-development tools are available to create interactivity using the World Wide Web. The advantage of publishing programs on the web is that other teachers in the area could access the simulations for their students. Future research efforts should address the effectiveness of current and emerging technologies (e.g., compressed video, DVD, World Wide Web) on improving generalization of skills to in vivo settings more efficiently. Within the classroom, the teacher may choose to work with students individually on the simulation or have peer tutors work with students. The use of multimedia simulations can provide an environment for interaction with peers in an age-appropriate setting. 
Instead of replacing human contact, technology applications should provide age-appropriate instruction and increased interaction with peers and community members. Garner and Campbell (1987) listed three guidelines for effective use of technology with students with severe intellectual disabilities: (a) document changes in the behavior of the learner, (b) verify the functional relationship between the intervention and behavior of the learner, and (c) associate the outcomes with social and empirical validity; the latter is clearly an important factor when using simulations. Therefore, the degree to which teachers actively consider how technology (simulations in this case) can help students achieve personal goals, is also a highly relevant an issue. Considerations for Using Video-based Simulation The use of multimedia simulations remains an under-explored application of the technology that may effectively address numerous instructional and social domains for teaching students with developmental disabilities. Simulations can provide replications of lifelike situations and can be designed as an individual or group instructional tool. A variety of different simulations can be designed for students with developmental disabilities to assist the teacher in replicating aspects of community or vocational skills that require extra practice or are difficult to teach in the community. In addition, students would be using age-appropriate programs in conjunction with their peers. The technology would not be used to isolate the students but to create a learning environment shared with peers and teachers. When developing or selecting video-based simulations, teachers must also consider how social validity, design, and instructional delivery factors will interact to influence learning. Future research must also seek to identify the most effective combination of these factors that promotes efficient generalization. A number of unanswered questions regarding the efficacy of interactive video still need to be examined. For example, an evaluation of the effects of video versus practice in the natural setting is needed to further our knowledge base. Also, future research questions should address the effectiveness of motion video versus still photographs. Designers should have specific understanding of how effective models operate. Video-based simulations must include appropriate models (e.g., peers without disabilities) that actively participate in the task, providing the learner with guidance, support, and mediation in the learning situation. Models should pose additional/novel questions and give relevant feedback to provide added social validity and mediation to make the simulation even more authentic or conducive to learning. In many respects, the function attributed to the role model is directly in line with a philosophy inherent to the recently revised AAMR's definition of mental retardation (American Association on Mental Retardation, 1992) -that we must conceptualize the abilities of persons with developmental disabilities to include the degree and frequency to which they receive support from friends who are part of their daily lives. Teachers should use simulations with a specific understanding that both generalization and maintenance of community living skills are important. Simulations should be used to augment instruction that will take place in the real word. There is no reason to believe that teaching community living skills via simulation will automatically transfer to the natural environment. 
We also should consider the merit of using simulations to help maintain important skills that may decline if not practiced. Even if students have been able to demonstrate acquisition of a skill in the natural environment but do not have frequent opportunities to perform the skill, their skill level might deteriorate. Simulations may also help instruction to be more efficient by decreasing the number of community trials necessary for mastery. We propose using the authenticity and anchored features of multimedia simulations to assist the learner in becoming as independent as possible within the simulation/classroom environment. In turn, the teacher works with the student to apply his or her learned skills in ways that promote smoother and more reliable transition to the real world.
v3-fos-license
2018-04-10T20:40:03.000Z
2018-04-10T00:00:00.000
51926311
{ "extfieldsofstudy": [ "Physics", "Mathematics", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-018-30136-y.pdf", "pdf_hash": "13d67c2c7af19429b3e98df73d8347b0a989b3ff", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:340", "s2fieldsofstudy": [ "Physics" ], "sha1": "12a42b46a0c724b2141ae4abc98b79d4f8e7a686", "year": 2018 }
pes2o/s2orc
Finite-time scaling in local bifurcations Finite-size scaling is a key tool in statistical physics, used to infer critical behavior in finite systems. Here we have made use of the analogous concept of finite-time scaling to describe the bifurcation diagram at finite times in discrete (deterministic) dynamical systems. We analytically derive finite-time scaling laws for two ubiquitous transitions given by the transcritical and the saddle-node bifurcation, obtaining exact expressions for the critical exponents and scaling functions. One of the scaling laws, corresponding to the distance of the dynamical variable to the attractor, turns out to be universal, in the sense that it holds for both bifurcations, yielding the same exponents and scaling function. Remarkably, the resulting scaling behavior in the transcritical bifurcation is precisely the same as the one in the (stochastic) Galton-Watson process. Our work establishes a new connection between thermodynamic phase transitions and bifurcations in low-dimensional dynamical systems, and opens new avenues to identify the nature of dynamical shifts in systems for which only short time series are available. Bifurcations separate qualitatively different dynamics in dynamical systems as one or more parameters are changed. Bifurcations have been mathematically characterized in elastic-plastic materials 1 , electronic circuits 2 , or in open quantum systems 3 . Also, bifurcations have been theoretically described in population dynamics [4][5][6] , in socioecological systems 7,8 , as well as in fixation of alleles in population genetics and computer virus propagation, to name a few examples 9,10 . More importantly, bifurcations have been identified experimentally in physical [11][12][13][14] , chemical 15,16 , and biological systems 17,18 . The simplest cases of local bifurcations, such as the transcritical and the saddle-node bifurcations, only involve changes in the stability and existence of fixed points. Although, strictly speaking, attractors (such as stable fixed points) are only reached in the infinite-time limit, some studies near local bifurcations have focused on the dependence of the characteristic time needed to approach the attractor as a function of the distance of the bifurcation parameter to the bifurcation point. For example, for the transcritical bifurcation it is known that the transient time, τ, diverges as a power law 19 , as τ ~ |μ − μ c | −1 , with μ and μ c being the bifurcation parameter and the bifurcation point, respectively, while for the saddle-node bifurcation 20 this time goes as τ ~ |μ − μ c | −1/2 (see 12 for an experimental evidence of this power law in an electronic circuit). Thermodynamic phase transitions 21,22 , where an order parameter suddenly changes its behavior as a response to small changes in one or several control parameters, can be considered as bifurcations 23 . Three important peculiarities of thermodynamic phase transitions within this picture are that the order parameter has to be equal to zero in one of the phases or regimes, that the bifurcation does not arise (in principle) from a simple low-dimensional dynamical system but from the cooperative effects of many-body interactions, and that at thermodynamic equilibrium there is no (macroscopic) dynamics at all. Non-equilibrium phase transitions 24,25 are also bifurcations and share these characteristics, except the last one. 
Particular interest has been paid to second-order phase transitions, where the sudden change of the order parameter is nevertheless continuous and associated to the existence of a critical point. A key ingredient of second-order phase transitions is finite-size scaling 26,27 , which describes how the sharpness of the transition emerges in the thermodynamic (infinite-system) limit. For instance, if m is magnetization (order parameter), T temperature (control parameter), and  system's size, then for zero applied field and close to the critical point, the equation of state can be approximated as a finite-size scaling law, c / 1/ with T c the critical temperature, β and ν two critical exponents, and g[y] a scaling function fulfilling g[y] ∝ (−y) β for y → −∞ and g[y] → 0 for y → ∞. It has been recently shown that the Galton-Watson branching process (a fundamental stochastic model for the growth and extinction of populations, nuclear reactions, and avalanche phenomena) can be understood as displaying a second-order phase transition 28 with finite-size scaling 29,30 . In a similar spirit, in this article we show how bifurcations in one-dimensional discrete dynamical systems display "finite-time scaling", analogous to finite-size scaling with time playing the role of system size. We analyze the transcritical and the saddle-node bifurcations for iterated maps and find analytically well-defined scaling functions that generalize the bifurcation diagrams for finite times. The sharpness of each bifurcation is naturally recovered in the infinite-time limit. The finite-size behavior of the Galton-Watson process becomes just one instance of our general finding for the transcritical bifurcation. And as a by-product, we derive the power-law divergence of the characteristic time τ when μ is kept constant, off criticality 19,20 . Universality of Convergence to Attractive Fixed Points In this paper, we consider a one-dimensional discrete dynamical system, or iterated map, x n+1 = f(x n ), where x is a real variable, f(x) is a univariate function (which will depend on some non-explicit parameters) and n is discrete time. It is assumed that the map has an attractive (i.e., stable) fixed point at x = q, for which f(q) = q, with |f ′(q)| < 1, where the prime denotes the derivative 20 . Moreover, the initial condition, x 0 , is assumed to belong to the basin of attraction of the fixed point. Additional conditions on x 0 will be discussed below. We are interested in the behavior of x n = f n (x 0 ) for large but finite n, where f n (x 0 ) denotes the iterated application of the map n times. Naturally, for sufficiently large n, f n (x 0 ) will be close to the attractive fixed point q and we will be able to expand f(f n (x 0 )) around q, resulting in By rearranging and introducing the variable c n+1 , the inverse of the "distance" to the fixed point at iteration n + 1, we arrive at (we may talk about a distance because we calculate the difference in such a way that it is always positive). Iterating this transformation  times leads to where only the lowest-order terms have been considered 29 . When the variable z, defined as = − , is kept finite (with  → ∞ and M → 1) a non-trivial limit of the previous expression exists if α = 1. It is found that the right-hand side of the expression is dominated by the second term, which grows linearly with . Therefore, for , and taking the inverse, we obtain This is exactly the same result as the one derived in ref. 
29 for the Galton-Watson model, leading to the realization that this model is governed by a transcritical bifurcation (but restricted to a fixed initial condition x 0 = 0). The scaling law (4) means that any attractor of a one-dimensional map is approached in the same universal way, as long as a Taylor expansion as the one in Eq. (2) holds, in particular if f ″(q) ≠ 0. In this sense one may talk about a "universality class", as displayed in Fig. 1 1) (the rescaled difference concerning the point M = 1) remains constant. Note that, in order to have a finite z, as  is large, M = f ′(q) will be close to 1, implying that the system will be close to its bifurcation point, corresponding to M = 1 (where the attractive fixed point will lose its stability). Therefore, in the scaling law, C can be replaced by its value at the bifurcation point C * , so, we write C C = * in Eq. (4). In principle, the value of the initial condition x 0 is not of fundamental importance. The same results can be obtained, for example, by taking x 1 = f(x 0 ) as the initial condition and then replacing  by  − 1 because, for very large , 1 − . Therefore, as  grows, memory of the initial condition is erased, as  can be made as large as desired. However, x 0 has to fulfill , in the same way that all the iterations x n must also satisfy these inequalities (i.e., all the iterations have to be on the same "side" of the point q, see the caption of For example, for the logistic (lo) map 20 , a transcritical bifurcation takes place at μ = 1 and the attractor is at q = 0 for μ ≤ 1 and at q = 1 − 1/μ for μ ≥ 1, which leads to Thus, in order to verify the collapse of the curves onto the function G, the quantity lo 0 must be displayed as a function of − 1  μ | − |; if the resulting plot does not change with the value of  the scaling law can be considered to hold. Alternatively, the two regimes μ  1, can be observed by writing  lo 0 as a function of  μ = − y ( 1). In the latter case the scaling function turns out to be G(−|y|). Figure 1(b) shows precisely this; the nearly perfect data collapse for large  is the indication of the fulfillment of the finite-time scaling law. For comparison, Fig. 1(a) shows the same data with no rescaling (i.e., just the distance to the attractor as a function of the bifurcation parameter μ). In the case of the normal form of the transcritical (tc) bifurcation (in the discrete case), f tc (x) = (1 + μ)x − x 2 , the bifurcation takes place at μ = 0 (with q = 0 for μ ≤ 0 and q = μ for μ ≥ 0). This leads to exactly the same behavior for z  μ = − | | (or for μ = y  in order to separate the two regimes, as shown overimposed in Fig. 1(b), again with very good agreement). For the saddle-node (sn) bifurcation (also called fold or tangent bifurcation 31 ), in its normal form (discrete system), f sn (x) = μ + x − x 2 , the attractor is at q μ = (only for μ > 0), so the bifurcation is at μ = 0, which leads The scaling law can be written as To see the data collapse onto the function G one must represent sn 0   as a function of  z 2 μ = − (or as a function of y = −z for clarity sake, as shown also in Fig. 1(b)). In order to create a horizontal axis that is linear in μ, we first define = − z u , in which case f x F ( )  μ = − = for the horizontal axis of the rescaled plot. Although the key idea of the finite-time scaling law, Eq. 
(4), is to compare the solution of the system at "corresponding" values of  and μ (such that z is constant, in a sort of law of corresponding states 21 ), the law can also be used at fixed μ. At the bifurcation point (μ = μ c , so z = 0), we find that the distance to the attractor decays hyperbolically, i.e.,  , as it is well known, see for instance ref. 19 . Out of the bifurcation point, for non-vanishing μ − μ c we have z → −∞ (as  → ∞) and then G(z) → e −z , which leads to The same data under rescaling (decreasing the density of points, for clarity sake), together with data from the transcritical bifurcation in normal form (tc) and the saddlenode bifurcation (sn). The collapse of the curves into a single one validates the scaling law, Eq. (4), and its universal character. The scaling function is in agreement with G(−|y|). Note that the initial condition x 0 is taken uniformly randomly between 0.25 and 0.75, which is inside the range necessary for all the iterations to be above the fixed point. This range is, below the bifurcation point, 0 < x 0 < 1 (lo), 0 < x 0 < 1 + μ (tc), and, above, the transcritical bifurcation (both in normal form and in the logistic form) and as 1/(2 ) c τ μ μ = − for the saddle-node bifurcation (with μ c = 0 in the normal form) 12 . These laws, mentioned in the introduction, have been reported in the literature as scaling laws 20 , but in order to avoid confusion we propose calling them power-law divergence laws, and keep the term scaling law for behaviors such as those in Eqs (1), (4) and (5). Note that this sort of law arises because G(z) is asymptotically exponential for z → −∞; in contrast, the equivalent of G(z) in the equation of state of a magnetic system in the thermodynamic limit is a power law, which leads to the Curie-Weiss law 32 . Scaling Law for the Distance to the Fixed Point at Bifurcation in the Transcritical Bifurcation In some cases, the distance between  f x ( ) 0 and some constant value of reference will be of more interest than the distance to the attractive fixed point q, as the value of q may change with the bifurcation parameter. For the transcritical bifurcation we have two fixed points, q 0 and q 1 , and they collide and interchange their character (attractive to repulsive, and vice versa) at the bifurcation point. It will be assumed that q 0 is constant independent of the bifurcation parameter (naturally, q 1 will not be constant), and that "below" the bifurcation point q 0 is attractive and q 1 is repulsive, and vice versa "above" the bifurcation. We will be interested in the distance between q 0 and 0 , which, below the bifurcation point corresponds to the quantity calculated previously in Eq. (4), but not above. The reason is that, there, q was an attractor, but now q 0 can be attractive or repulsive. Note that, without loss of generality, we can refer −  q f x ( ) 0 0 as the distance of  f x ( ) 0 to the "origin". Following ref. 29 , we seek a relationship between both fixed points when the system is close to the bifurcation point. As, in that case, q q 0 0 0 1 to the lowest order in (q 1 − q 0 ). Naturally, M 0 = f ′(q 0 ) and C 0 = f ″(q 0 )/2. We also seek a relationship between to the first order in (q 1 − q 0 ). We now write Using Eq. (7) it can be shown that where we have also used that C C C 1 0 = = * , to the lowest order, with * C being the value at the bifurcation point. Therefore, we obtain the same scaling law as in the previous section: 0 0 − * with the same scaling function G(y) as in Eq. 
(3), although the rescaled variable y is different here (y ≠ z, in general). This is possible thanks to the property y + G(−y) = G(y) that the scaling function satisfies. Note that the scaling law (1) has the same form as the finite-time scaling (9) with y given by Eq. (8), and therefore we can identify β = ν = 1. Note also that we can identify M 0 = f ′(q 0 ) with a bifurcation parameter, as it is M 0 < 1 "below" the bifurcation point (M 0 = 1) and M 0 > 1 "above". In fact, M 0 can be considered as a natural bifurcation parameter, as the scaling law (4) expressed in terms of M 0 becomes universal. M defined in the previous section cannot be a bifurcation parameter as it is never above one because it is defined with respect to the attractive fixed point. For the transcritical bifurcation of the logistic map we identify q 0 = 0 and M 0 = μ, so y ( 1)  μ = − . For the normal form of the transcritical bifurcation, q 0 = 0 but M 0 = μ + 1, so  y μ = . Consequently, Fig. 2(a) shows  f x ( ) 0 (the distance to q 0 = 0) as a function of μ, for the logistic map and different , whereas Fig. 2(b) shows the same results under the corresponding rescaling, together with analogous results for the normal form of the transcritical bifurcation. The data collapse supports the validity of the scaling law (9) with scaling function given by Eq. (3). Scaling Law for the Iterated Value x n in the Saddle-Node Bifurcation In the case of a saddle-node bifurcation, the -th iterate can be isolated from Eq. (5) to obtain and H(y) = y(e y + 1)(e y − 1) −1 /2. Therefore, the representation of f x ( ) 0   as a function of μ 2 unveils the shape of the scaling function H. In terms of and, therefore, plotting   f x ( ) 0 as a function of  μ 4 2 must lead to the collapse of the data onto the scaling function I(u), as shown in Fig. 3. Comparison with the finite-size scaling law (1) allows one to establish β = ν = 1/2 for this bifurcation (and bifurcation parameter μ, not μ ). Conclusions By means of scaling laws, we have made a clear analogy between bifurcations and phase transitions 23 , with a direct correspondence between, on the one hand, the bifurcation parameter, the bifurcation point, and the finite-time solution  f x ( ) 0 , and, on the other hand, the control parameter, the critical point, and the finite-size order parameter. However, in phase transitions, the sharp change of the order parameter at the critical point arises in the limit of infinite system size; in contrast, in bifurcations, the sharpness at the bifurcation point shows up in the infinite-time limit,  → ∞. So, finite-size scaling in one case corresponds to finite-time scaling in the other. Specifically, we conclude that the finite-size scaling behavior derived in ref. 29 can be directly understood from the transcritical bifurcation underlying the Galton-Watson branching process. It is remarkable that the critical behavior of such a stochastic process is governed by a bifurcation of a deterministic dynamical system. Moreover, by using numerical simulations we have tested that the finite-time scaling laws also hold for dynamical systems continuous in time, as well as for the pitchfork bifurcation in discrete time, although with different exponents and scaling function in this case (this is due to the fact that the condition f ″(q) ≠ 0 does not hold). The use of the finite-time scaling concept by other authors does not correspond with ours. For instance, although ref. 
33 presents a scaling law for finite times, the corresponding exponent ν there turns out to be negative, which is not in agreement with the genuine finite-size scaling around a critical point. In addition, we have also been able to derive the power-law divergence of the transient time to reach the attractor out of criticality 12,19,20 . Our results could be useful for interpreting different types of fixed points found in renormalization group theory 23 . Also, they might allow to idenfity the type of bifurcations in systems for which information is limited to short transients, such as in ecological systems. In this way, the scaling relations established in this article could be used as warning signals 34 to anticipate the nature of collapses or changes in ecosystems 5,6,34-36 (due to, e.g., transcritical or saddle-node bifurcations) and in other dynamical systems suffering shifts.
v3-fos-license
2017-04-04T16:40:10.681Z
2014-10-30T00:00:00.000
2090051
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://jech.bmj.com/content/69/1/77.full.pdf", "pdf_hash": "81ad2181dc6208c00ea5fbdd9d07a4e4a6c0c64f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:342", "s2fieldsofstudy": [ "Medicine" ], "sha1": "1f7e586f8560e74cd2103ff99b47c15a1b8c699c", "year": 2014 }
pes2o/s2orc
Environmental influences on children's physical activity Background This paper aims to assess whether 7-year-olds’ physical activity is associated with family and area-level measures of the physical and socioeconomic environments. Methods We analysed the association of environments with physical activity in 6497 singleton children from the UK Millennium Cohort Study with reliable accelerometer data (≥2 days and ≥10 h/day). Activity levels were assessed as counts per minute; minutes of moderate to vigorous activity (MVPA); and whether meeting recommended guidelines (≥60 min/day MVPA). Results Higher levels of children's physical activity were associated with households without use of a car and with having a television in a child's bedroom (for counts per minute only). Aspects of the home socioeconomic environment that were associated with more children's physical activity were lone motherhood, lower maternal socioeconomic position and education, family income below 60% national median, and not owning the home. Children's activity levels were higher when parents perceived their neighbourhood as poor for bringing up children and also when families were living in the most deprived areas. Relationships were independent of characteristics such as child's body mass index and ethnic group. When adjusted for physical and socioeconomic correlates, the factors remaining significant in all outcomes were: household car usage and maternal education. Conclusions Although physical and socioeconomic environments are associated with children’s physical activity, much of the variation appears to be determined by the child's home socioeconomic circumstances rather than the wider environment where they live. INTRODUCTION Physical activity is health enhancing, and is associated with both reduced risk of adiposity, diabetes, hypertension, musculoskeletal problems and promotion of psychological well-being. 1 Rates of activity are low among UK children, particularly girls. Using accelerometer data, the 2008 Health Survey for England (HSE) found that 51% of boys and 34% of girls aged 4-10 years met the current minimum physical activity recommendations, 2 although it is important to recognise that current recommendations are based on evidence on selfreported physical activity. Equivalent figures for 7-year-olds in the Millennium Cohort Study (MCS) were 63% and 38%. 3 Understanding what influences children's physical activity may help to identify interventions to promote active lifestyles. Observational studies relating physical activity between environmental factors give inconsistent findings. Some find an association, 4-12 others do not. 13 14 These differences may be due to limitations in study design: with one exception, 15 most studies have a small sample size (less than 150 children 4 5 10 ), focus on population subgroups 5 13 or, as noted in a recent review, rely on reports of physical activity, sometimes by parents. 16 Recent studies have also emphasised adolescents rather than children and most are based in North America or Australia. 16 Differences between studies may also be due to the environmental measures used. Most research focuses on either perceived or objective measures of the physical environment and only two studies have examined both. 6 10 Furthermore, children experience their environments at different levels, such as the immediate home environment as well as its neighbourhood. Environments may be characterised in different ways, for example, in socioeconomic and physical terms. 
Most studies to date have focused on the built environment of the neighbourhood; they show that children are more likely to be physically active if their neighbourhood has facilities such as walking/cycling paths and parks [8][9][10] ; playgrounds and recreational community centres 4 9 17 and features sidewalks, lighting, street connectivity or land-use diversity. 5 6 8 10 17 The few studies to examine the association between children's physical activity and rurality have shown mixed results. 18 Questions also remain on the mechanisms through which the built and socioeconomic environments exert influences on physical activity as few studies have controlled for individual socioeconomic factors. 7 9-12 The present study addresses several of these research gaps. Using data on 7-year-old children from a large, nationally representative UK cohort, it explores the influence of characteristics of the home and neighbourhood environments on accelerometer-measured physical activity, taking account of family socioeconomic circumstances and using measures that reflect physical and social characteristics of the neighbourhood, objective and subjective. METHODS Participants The MCS is a UK-wide prospective study of children born between September 2000 and January 2002. The original cohort comprised of 18 818 children (72% response rate) whose parents were first interviewed when their child was aged 9 months. 19 Three more home interviews were carried out at ages 3, 5 and 7 years with further follow-up conducted at 11 years (data not available at time of analysis) and beyond. Detailed information regarding demographic, social, and health factors relating to the children, and the children's siblings and parents was obtained through interviews of the main respondents and their partners in the home. 20 This study uses data from the age seven survey, which received ethical approval from the Northern and Yorkshire Research Ethics Committee (07/MRE03/32). The present analysis did not require additional ethics approval. Physical activity data At age 7, 14 043 children (13 681 singletons) were interviewed and invited to participate in the accelerometry study. Those who consented were asked to wear the Actigraph GT1M uniaxial accelerometers (Actigraph, Pensacola, Florida). Previous studies have demonstrated this device to be a technically reliable instrument, able to detect differing levels of physical activity intensity. 21 Accelerometers programmed to use a 15 s sampling epoch and to record activity as counts and steps were sent to those who consented to participate (n=12 768 singletons). Children were instructed to start wearing their accelerometer the morning after receiving it and to do so for seven consecutive days during waking hours, except during bathing/aquatic activities. Data were collected between May 2008 and August 2009. Accelerometers were returned from 9980 children (9721 singletons). Data from the activity monitors were downloaded using the Actigraph software V.3.8.3 (Actigraph, Pensacola, Florida, USA) and processed in house, 22 according to predetermined criteria. 23 Non-wear time was defined as any time period of consecutive zero-counts ≥20 min and these periods were removed from the summation of activity. A threshold for extreme values was set to ≥11 715 counts and time spent at intensity above this threshold was excluded. 24 Participants with recording periods of ≥10 h on ≥2 days were included in analyses, 23 resulting in a sample of 6497 singleton children. 
Small differences were found in the demographic characteristics of the sample of children with reliable accelerometer data (n=6497) relative to the whole cohort sample (n=13 681 singletons) interviewed at age 7 years. 3 To allow for possible bias in the selection of children participating in the accelerometry sample, an inverse probability weight was applied. 25 This was in addition to the standard weighting applied to all cohort children to allow for the original sampling design and attrition. The following outcome variables were derived: total physical activity (mean daily counts per minute (cpm) of wearing time, mean daily minutes of moderate to vigorous activity (MVPA) and adherence to current recommended guidelines (at least 60 min MVPA per day). The cut-off classifying physical activity as moderate-to-vigorous (>2241 cpm) was defined according to a calibration study in children of similar age, testing a range of activities from sedentary (eg, sitting) to vigorous (eg, basketball and jogging). 26 These measures were standardised by introducing the notion of a standard day with equal duration (735 min, equal to mean wear time across all reliable days), minimising in this way the potential association between physical activity and wearing time. 25 Explanatory variables We examined the influence of a number of environmental factors: all collected at the age 7 survey (fourth sweep of the MCS survey). Gender, season (based on the astronomical definition: spring (21 March-20 June), summer (21 June-20 September), autumn (21 September, 20th December), winter (21 December-20 March), ethnic group 27 and body mass index (BMI) of the children (based on measured height and weight information and categorised according to the International Obesity Task Force (IOTF) cut-offs for children) were included. 28 To minimise loss of information questionnaire items missing at age 7 years were retrieved from the previous sweep (age 5 years) if available. Home environment (reported measures) ▸ The physical environment was represented by the type of accommodation (house/bungalow; flat, studio or maisonette and bedsit or other), number of household cars/vans in regular use, whether participants had access to a garden, and whether the child had a television (TV) in their bedroom. ▸ The socioeconomic environment was represented by lone motherhood (being a lone mother or not); housing tenure (own/mortgage or other); family size (only child or not). We also included socioeconomic circumstances of the mother on the National Statistics Socioeconomic Classification, grouped into four categories: managerial and professional; intermediate occupations; routine and manual occupations and never worked or long-term unemployed. 29 Maternal education was divided into two groups: at or above O-level (or equivalent)/ below O-level. Poverty was defined by whether family income was <60% of the national median, before housing costs but after benefits and using a modified Organisation for Economic Co-operation and Development (OECD) equivalence scale. 30 Neighbourhood (reported and objective measures) ▸ The physical environment was represented by the following reported measures: accessibility to play areas and whether the area in which they lived (defined as one mile or 20 min walk from their house) was perceived to be good and safe to raise children. In addition, the objectively measured 2005 Rural/Urban Area Classification (RUAC) at the Lower Super Output Area level (LSOA; an average of 1500 people) was included in the analysis. 
31 The RUAC categories were: urban (>10 000); rural, which included village, hamlet and isolated dwellings. ▸ The socioeconomic environment was represented by the objectively measured 2004-2005 Index of Multiple Deprivation (IMD) at the LSOA level. 31 The IMD measures relative levels of deprivation in small areas based on a number of indicators. The indicators are income, employment, health deprivation and disability, education, skills and training, barriers to housing and services, crime and living environment. As there is no unified definition for these measures across the UK, these are held as country-specific variables. While the IMD definitions are not directly equivalent, they could be broadly compared by introducing in addition to the main UK country and IMD variables, an interaction term between UK country and IMD. For the purposes of this study we used the IMD country-specific quintiles. This parameterisation allows us to compare the higher quintiles (more deprived) of each country with the country-specific IMD reference category. Statistical analysis Analyses were performed using STATA/SE V.12.0 (Stata Corporation, Texas, USA). Sampling weights were used to account for the stratified clustered design of the MCS. Weights were adjusted for attrition between contacts at successive MCS sweeps and for missing accelerometer data. Details on the adjustment for non-response and non-compliance are given elsewhere. 25 Total activity and MVPA were log-transformed to account for their positively skewed distributions. For each regression coefficient b, we calculated the quantity 100×(e b −1); similarly, the lower and upper bounds of b's 95% CI were subject to the same back-transformation. These values can be interpreted as the percentage change between geometric means of total activity or time spent in MVPA associated with varying levels of the covariates of interest. The p values were calculated using the command nlcom in Stata, which is based on the delta method to approximate nonlinear combinations of parameter estimates. 32 Regression models examined the association between characteristics of the home and neighbourhood environments and the three outcomes describing children's physical activity. Considering the stratified cluster sampling design of MCS study, weights to adjust for attrition between contacts at successive MCS sweeps and for missing accelerometer data were taken into account during the estimation using the Stata command svyset. Linear regression models were fitted to analyse total activity (cpm) and MVPA, while logistic regression analysis was used for activity adherence. Analyses were repeated separately for each outcome using two different models: model 1 was adjusted for gender and season; model 2 was further adjusted for children's BMI and ethnic group. Two-level linear and logistic regression models (models 1 and 2) were also fitted to examine the relationships between physical activity and objective measures of the neighbourhood environment. The two levels of analysis were family and the electoral wards (or superwards). Families were considered as the first level of analysis to account for contacts at successive MCS sweeps and for missing accelerometer data. The wards were defined as the second level of analysis in our study to account for the MCS sampling design. Aforementioned, with main effects for UK country and IMD we included an interaction term between each UK country and country-specific IMD in the multilevel models. 
Two-level linear and logistic regression models (model 3) were, in addition to gender, season, children's BMI and ethnic group, adjusted for environmental characteristics and objective measures of the neighbourhood environment that were statistically significant in models 1 and 2. Multicollinearity was assessed using the variance inflation factor for each estimator (for individual and area levels of analysis). Random intercept-only multilevel models were fitted using gllamm, a Stata programme for mixed-effects modelling. 33 The intracluster correlation coefficients (ICC) from the multilevel models were used to quantify the amount of variation in measures of physical activity resulting from differences between areas. As a sensitivity analysis, we repeated analyses separately for boys and girls; results were not different to those presented here. Characteristics of the children and their families and reported measures of home-environments and neighbourhoodenvironments showed no difference between non-movers and those who moved between contacts at 5 and 7 years (563 children; results not shown). Excluding children who moved between contacts at 5 and 7 years from the analysis did not affect the associations (data not shown). Table 1 shows the characteristics of the children in the sample, and the physical and socioeconomic characteristics of their home and neighbourhood environments. Although most children appeared to be in relatively advantaged circumstances (eg, their family had the use of a car or owned their own home), the sample was diverse: for example, 22.0% were lone mothers. RESULTS Just over half of the children had a TV in their bedroom and around a fifth were overweight or obese. Most families were living in a neighbourhood with good access to play areas (90.4%), and were generally satisfied with the neighbourhood (71.8%). Descriptive statistics for all physical activity variables and sedentary time have been previously published elsewhere. 3 Home environmental measures More cars in use in the household were significantly associated with less children's physical activity in unadjusted and adjusted analyses. In unadjusted analyses, children who had a TV in their bedroom were more physically active and more likely to meet activity guidelines. This association was attenuated after adjustment for all significant correlates of the home and neighbourhood but remained statistically significant for counts per minute. Type of accommodation and access to gardens were not consistently associated with physical activity (table 2). Measures of the home environment indicating socioeconomic disadvantage (lone motherhood, non-ownership, lower levels of maternal occupation, education and income) were associated with higher levels of physical activity, which persisted after adjusting for child's ethnic group and BMI. However, these associations were attenuated to non-significance in fully adjusted models, except for the association with maternal education (table 2). Neighbourhood environmental measures Perceiving the neighbourhood to be a poor or very poor place to raise children was associated with more physical activity in children (total and MVPA only) but this was not significant in the fully adjusted models. No other variables describing the physical environment of the neighbourhood (access to play areas, perceptions of safety and whether urban or rural) were associated with physical activity. 
Children's physical activity increased with increasing level of deprivation as indicated by the country-specific IMD quintile for England only. This association was not significant in the fully adjusted models for all outcomes. The IMD for Wales, Scotland and Northern Ireland was not significantly associated with children's physical activity. At the individual-level of analysis, the overall performance of the models in terms of the percentage of the variation of the dependent variables explained by the variation of the predictor variables was approximately 12% in the unadjusted models and 14% in the final adjusted model for total activity. Equivalent figures for MVPA were 13% and 16%. In all models, the ICCs indicated that statistically significant proportions of the variation in physical activity were explained by variation at the area level for all models. For example, in total activity, 3.31% was explained by IMD when adjusting for gender and season and environmental characteristics that were significant in models 1 and 2. Statement of principal findings In this large population-based study, accelerometer-measured physical activity in 7-year-old children was significantly associated in unadjusted analyses with characteristics of the physical and socioeconomic environments, for the home and neighbourhood. For the home environment, we found that children living in a family with no cars, those living in relatively disadvantaged circumstances had higher levels of physical activity. At the area level, more physical activity was associated with higher deprivation (IMD) for England only and with parental perceptions of it being a poor area for children. There was no association with rurality. In general, relationships were independent of child's BMI and ethnic group and were more likely for total activity and MVPA than for adherence to guidelines. However, when indicators of the environment were considered together, the only factors that remained significant were no cars in the household, lower levels of maternal education and a TV in the child's bedroom (for counts/min only), all associated with increased physical activity. This suggests that the dominant effect of the environment on physical activity is through socioeconomic characteristics related to personal assets (having the use of a car or higher maternal education). That having a TV in the child's bedroom should be associated with higher level of physical activity is counter-intuitive. However, in this sample having a TV in the child's bedroom was more common in less advantaged families, and so it may be acting as a marker of disadvantage rather than being on a causal pathway through sedentary behaviour. We also tested the hypothesis of whether having a TV in the child's bedroom would reduce wear time (eg, evening) in a way that could raise the average activity per observed minute and found no effect on wear time and therefore physical activity levels. For all associations, effect sizes associated with activity appeared to be modest but these are difficult to compare with other studies due to differences in methods of analysis, measures and sample characteristics. 16 Strengths and limitations Strengths of this study include its large national and representative sample and the use of accelerometers to provide more objective measures of physical activity. 
To the best our knowledge this is the first study to investigate the influence of the physical and socioeconomic environments measured at the home and area level on children's physical activity. For some variables, such as access to a garden, there was little variability to analyse. Furthermore, although the range of measures was broad, many may be limited in their capacity to describe the environment and may, at the same time, be characterising personal traits of those who live in environments, rather than or as well as, the environment itself. This challenge is inherent to this field of study and we attempted to address it through our analytic strategy. Acknowledged limitations include inability of accelerometers to measure certain types of activities, including aquatic activities, cycling and activities mainly requiring upper-body movement as well as to capture contexts of physical activity (eg, walking while carrying a load or walking uphill). In addition, selfreported measures of the physical and socioeconomic environments may result in bias, although patterns of associations with physical activity were reasonably consistent, at least in direction of effect, across different measures of the environment. Comparison with other studies Our study supports earlier evidence reporting environmental influences on children's physical activity. However, while few other studies have examined reported and objective measures of the physical and socioeconomic environments 6 10 11 only, Roemmich et al 10 and McCormack et al 11 adjusted for socioeconomic aspects of the environment. Current research has also explored the association between physical activity and the socioeconomic environment among adolescents; 7 11 however, the relationship among children is less clear. 34 To the best of our knowledge, this is the first study of school-age children that examines individual and neighbourhood characteristics of the physical and socioeconomic environments. For most health behaviours, socioeconomic advantage is associated with health enhancing behaviours. 35 For physical activity among children we found the reverse. Other evidence on this is mixed. 36 There has been a recent focus on the independent contribution of sedentary behaviour on children's health 37 and TV viewing has been used as a proxy for sedentary behaviour. 10 12 However, we found that children with a TV in their bedroom were more physically active. Only a few studies have investigated the association between number of TV sets at home and sedentary behaviour 10 38 39 and from those, only Roemmich et al 10 found a positive association. However, no study has reported the association between TVs in the home and physical activity. Our findings may indicate that sedentary and physically active lifestyles coexist (the 'Active Couch Potato' 40 ). Alternatively, having a TV in the child's bedroom could be a proxy of socioeconomic disadvantage 41 or some other pathway, which is particularly associated with physical activity. Children's health behaviours develop first within the family environment and factors such as access to media may be important influences on children's sedentary and active behaviours. 41 Access to a car could be another indicator of affluence as well as a disincentive to active travel. 
Our finding, that children who lived in households that used one or more cars were less active compared with those in households with no cars, agrees with current literature indicating that not having a car is an indicator of lower socioeconomic status and walking as a mode of transport. 42 Maternal perceptions of neighbourhood safety were not associated with children's activity. This is consistent with some previous studies. 43 Other studies have reported significant associations between reported or objective measures of the neighbourhood and physical activity, 4 5 11 mainly focusing on the physical environment, of which we had few measures. We were able to examine neighbourhood social deprivation and found that it predicted higher levels of physical activity although not after accounting for individual characteristics. Neighbourhood deprivation is likely to be associated with families having lower levels of assets. Only a recent North American study has examined the association between neighbourhood deprivation and children's physical activity and findings are mixed. 44 They found a strong association between higher neighbourhood deprivation and lower physical activity among African-Americans, but less consistent associations in white adolescents. Implications for research, policy and practice Better measures are needed of the environment, for the home and the neighbourhood and to describe aspects related to physical and socioeconomic influences. Such measures need to discriminate between predictors of physical activity that relate to places (homes and neighbourhoods) and those that relate to people who live in those places. Analysis should also consider how people and the places where they live interact to affect health-enhancing behaviours. This study may provide a starting What this study adds? ▸ This is the first study to explore the influence of characteristics of the home and neighbourhood environments on accelerometer-measured physical activity in children, taking account of family socioeconomic circumstances and using measures that reflect physical and social characteristics of the neighbourhood. ▸ Higher levels of children's physical activity were associated with measures indicating disadvantage, at family and neighbourhood level. When adjusted for physical and socioeconomic correlates, the factors remaining significant were: household car usage and maternal education. ▸ The results of our study suggest that the dominant effect of the environment on physical activity is through home socioeconomic characteristics rather than the wider environment. What is already known on this subject? ▸ There is conflicting evidence on the association between the physical and socioeconomic home and neighbourhood environments and physical activity in children. ▸ These conflicts may be due to limitations in study design or the environmental measures used as most studies are small, have focused on the association between physical activity and the physical environment and have used self-reported measures of physical activity. point, but methodological development is needed to determine causal pathways and potential interventions. Increasing activity levels in children is a public health priority. 45 The results of our study show that, although both physical and socioeconomic environments are associated with children's physical activity, much of the variation appears to be determined close to home rather than in the wider environment.
v3-fos-license
2019-01-13T07:46:02.450Z
2015-04-30T00:00:00.000
146801469
{ "extfieldsofstudy": [ "Economics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://jefjournal.org.za/index.php/jef/article/download/82/78", "pdf_hash": "356d0182e5558ca7e9b4927e59ec0ba135956af6", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:343", "s2fieldsofstudy": [ "Economics" ], "sha1": "356d0182e5558ca7e9b4927e59ec0ba135956af6", "year": 2015 }
pes2o/s2orc
MUNICIPAL ASSESSMENTS VERSUS ACTUAL SALES PRICES IN HEDONIC PRICE STUDIES

In most hedonic price model studies, the actual sales price of a property is employed as the dependent variable in the parametric regression analysis. Although the use of this price is pervasive, alternatives to it do exist. One such alternative is the assessed property value, which is more readily available than the actual property price. The aim of this study is to compare implicit price estimates of property characteristics (both structural and locational) based on actual sales price data and assessed property values. To this end, a seemingly unrelated regression with two hedonic price equations is used, one of which employs actual market prices as the dependent variable and the other of which employs assessed values. The results show that the hypothesis that the influence of structural and locational housing characteristics on residential property prices is the same for assessed values and actual market prices cannot be accepted. This finding should act as a caution for hedonic practitioners not to base their conclusions and recommendations solely on the use of assessed values in hedonic price models.

INTRODUCTION

The hedonic price model is generally used to estimate the effects of non-market amenities and disamenities on adjacent property values (Rosen, 1974). The source of data on housing prices is an important theoretical consideration to take into account when deciding on the appropriate dependent variable to use for a hedonic price analysis, with data on actual market transactions being preferable (Kiel & Zabel, 1999; Cotteleer & Van Kooten, 2012; Ma & Swinton, 2012). Although actual property prices are preferred (as they reflect how people allocate their money and, therefore, how they value property characteristics), assessed value estimates are often more readily available and have frequently been used as a substitute for actual property prices (Darling, 1973; Doss & Taff, 1996; Lee, Taylor & Hong, 2008). However, hedonic practitioners need to be cautioned that a dependent variable other than the actual sales price (i.e. an assessed property value) may be imperfectly correlated with actual market prices (Freeman, 2003). If this is the case, the errors in measuring the assessed value of the structure will conceal the underlying relationships between the property values and house characteristics (i.e. biased coefficient estimates will be present in the model) (Freeman, 2003).

Although preferred, the use of actual sales prices is also not immune to disadvantages, with the main disadvantage being a potential lack of data if sales do not occur frequently (Cotteleer & Van Kooten, 2012). In addition to this, the threat of errors in sales price data is also present, along with the risk of distorted markets (Doss & Taff, 1996). Distorted markets occur as a result of asymmetric information and tend to be common in markets where real estate agents and lenders are dominant in the market (Doss & Taff, 1996). Real estate agents may influence the market by affecting the list price of properties and by affecting the bidding strategies of buyers. Lenders may affect the market with their lending practices (for example, by requesting large deposits from first-time buyers) (Doss & Taff, 1996). Other transactions, such as trading between relatives, may not reflect true market values either. For this reason, it is recommended that all transactions that are not of arm's-length should be excluded from the hedonic analysis (Cotteleer & Van Kooten, 2012).
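The hedonic framework sketched above is estimated as a regression of (a transformation of) the property price on the bundle of structural and locational characteristics. The paper does not write the equation out at this point, so a generic semi-log form is shown below purely for orientation; the linear and double-log specifications estimated later differ only in how the price and the regressors are transformed:

```latex
\ln P_i = \beta_0 + \sum_{k=1}^{K} \beta_k x_{ik} + \varepsilon_i
```

Here P_i is the sale price (or assessed value) of property i, x_ik its k-th structural or locational characteristic and β_k the implicit price of that characteristic.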
Previous studies have attempted to compare coefficient estimates for property characteristics using regression equations with different dependent variables (Nicholls & Crompton, 2007; Bowman, Thompson & Colletti, 2009). These studies, however, did not develop test statistics in order to compare these estimates. Only one study could be found in the international literature that developed test statistics for this comparison, namely the study by Cotteleer and Van Kooten (2012).

In line with the methodology followed by Cotteleer and Van Kooten (2012), the contribution of this paper is to provide a complete analysis and statistically test for differences in parameter estimates obtained from assessed values versus actual sales prices. To achieve this, a seemingly unrelated regression (SUR) hedonic price model consisting of two equations was estimated: one with municipal assessed values as the dependent variable and the other with actual sales prices as the dependent variable. The resulting parameter estimates were then compared.

The remainder of the paper is organised as follows: section 2 presents the methodology of the SUR. Section 3 presents the data and variables used in the study. Section 4 presents the empirical results of the SUR and investigates the statistical differences present in the two equations. Section 5 concludes the paper.

METHODOLOGY

In order to test for statistical differences between assessed values and sales prices, the correlation coefficient between the assessed values and the actual sales prices was calculated. Histograms for the distribution of actual sales prices and assessed values were also constructed and a paired t-test was performed. In addition to this, a SUR model was estimated to test for significant differences between the two equations.

The correlation coefficient, r, is a measure of the strength of the linear relationship between two variables (Studenmund, 2006). This is defined as:

\[ r = \frac{S_{XY}}{S_X S_Y} \]

where:
S_XY = the sample covariance between X and Y
S_X = the sample standard deviation of variable X
S_Y = the sample standard deviation of variable Y

The correlation coefficient will always lie between -1 and 1 (Keller, 2011). A value of -1 indicates a perfect negative relationship, a value of 0 indicates no relationship and a value of 1 indicates a perfect positive relationship (Keller, 2011). A paired t-test is used to determine whether or not there is a significant difference in two population means. In order to conduct a paired t-test, it is necessary to pair the observations in one of the samples with the observations in the other (Shier, 2004). The relevant t-statistic is calculated as follows:

\[ t = \frac{\bar{d}}{SE} \]

where:
\bar{d} = the mean difference
SE = the standard error of the mean difference

The t-statistic is then used to test the null hypothesis that no significant difference in the mean values is present (Shier, 2004).

A SUR model involves pairing the actual sales prices and assessed values and specifying a regression model for each of the properties for which both values are available (Cotteleer & Van Kooten, 2012). By analysing both equations in one model, the relevant test statistic can be derived (i.e. the Wald statistic).
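The two preliminary comparisons described above, the correlation coefficient and the paired t-test, are simple to compute. The following Python sketch is illustrative only: the study's own calculations were run in Stata on the Walmer data, and the arrays below are hypothetical placeholder values, not figures from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations: actual sales prices and municipal assessed
# values (constant 2009 rands) for the same properties. In the study these come
# from SAPTG transaction records and the 2007/2008 municipal valuation roll.
sales = np.array([1_250_000, 1_600_000, 2_100_000, 950_000, 1_800_000], dtype=float)
assessed = np.array([1_400_000, 1_550_000, 1_900_000, 1_100_000, 1_750_000], dtype=float)

# Correlation coefficient r = S_XY / (S_X * S_Y)
r = np.corrcoef(sales, assessed)[0, 1]

# Paired t-test: t = mean(d) / SE(mean(d)), where d are the within-pair differences
t_stat, p_value = stats.ttest_rel(sales, assessed)

print(f"r = {r:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```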
Generally, the SUR model specifies the m-th of M equations for the i-th of N individuals as follows:

\[ P_{im} = x_{im}' \beta_m + \varepsilon_{im} \]

where:
x_{im} = regressors that are assumed to be exogenous
\beta_m = a K_m × 1 parameter vector
\varepsilon_{im} = an error term

P_{im} could, for example, represent the i-th individual's expenditure on house m and x_{im} could, for example, represent a matrix of house characteristics. In order to estimate the SUR model, observations over both equations and individuals are combined (Cameron & Trivedi, 2005). If independence over i is assumed, all equations for a given individual are first stacked (Cameron & Trivedi, 2005). The process of stacking the M equations for the i-th individual produces the following:

\[ \begin{pmatrix} P_{i1} \\ \vdots \\ P_{iM} \end{pmatrix} = \begin{pmatrix} x_{i1}' & & \\ & \ddots & \\ & & x_{iM}' \end{pmatrix} \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_M \end{pmatrix} + \begin{pmatrix} \varepsilon_{i1} \\ \vdots \\ \varepsilon_{iM} \end{pmatrix} \]

which represents the following form:

\[ P_i = X_i \beta + \varepsilon_i \]

where:
P_i and \varepsilon_i = M × 1 vectors with m-th entries P_{im} and \varepsilon_{im}

Given the definitions of X_i and P_i it can be shown that \hat{\beta}_{SOLS} is:

\[ \hat{\beta}_{SOLS} = \left( \sum_{i=1}^{N} X_i' X_i \right)^{-1} \sum_{i=1}^{N} X_i' P_i \]

which implies that system OLS (SOLS) is identical to separate equation-by-equation OLS (Cameron & Trivedi, 2005). In cases where all M equations have the same regressors, the efficient estimator is single-equation OLS (Greene, 2011). Alternative estimators include the feasible generalised least squares (GLS) estimator and the maximum likelihood estimator (Cameron & Trivedi, 2005). In many cases the feasible GLS estimator is more efficient than system OLS, but it collapses to OLS if precisely the same regressors are present in each equation.

Once the SUR model has been estimated, the Wald statistic can be calculated to test whether the restriction β1 = β2 holds (i.e. the Wald statistic tests the hypothesis that the coefficients in the equation with actual market prices as the dependent variable are equal to the coefficients in the equation with assessed values as the dependent variable (Cotteleer & Van Kooten, 2012)). More formally,

H0: β1 = β2
Ha: β1 ≠ β2

where:
β1 = the estimated coefficients for the actual sales prices equation
β2 = the estimated coefficients for the assessed values equation

DATA AND VARIABLES

The locus of this study is the neighbourhood of Walmer, situated in Port Elizabeth, Nelson Mandela Bay, Eastern Cape. The upmarket Walmer neighbourhood is situated approximately 10 minutes by vehicle from Port Elizabeth's main beaches. The suburb is home to longstanding Port Elizabeth families and its history dates back to the early 1800s. Various amenities are located in close proximity to it. The area is well catered for in terms of residential property. Free-standing homes, townhouse complexes, security complexes and guesthouses can be found in the area. FIGURE 1 shows a map of the Walmer neighbourhood.

FIGURE 1: The geographical location of the Walmer neighbourhood and the Walmer Township

The Walmer neighbourhood has a total of 2 625 residential properties, and a total of 1 326 transactions took place from 1995 to 2009 (excluding repeat sales) (South African Property Transfer Guide, 2011). The population in this study was, thus, limited to the 1 326 transactions that took place over the study period. Of these transactions, a simple random sample of 170 was drawn. The sample size was determined by employing the following equation:

\[ n = \frac{N}{1 + N e^2} \]

where:
n = sample size
N = population size
e = level of precision

Using Equation 7, the sample size was determined with a level of precision of 7.2%, which ensured a representative sample from the population, because the generally accepted level of precision for representative samples is 10% or less (Fink, 2003).
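As a quick arithmetic check, the sample-size expression shown above (reconstructed here as the standard n = N/(1 + Ne²) formula) does reproduce the figures quoted in the text:

```python
N = 1326           # transactions in Walmer, 1995-2009, excluding repeat sales
e = 0.072          # level of precision (7.2%)
n = N / (1 + N * e**2)
print(round(n))    # about 168, consistent with the simple random sample of 170
```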
Data on the 2007/2008 assessed values of properties (the specific valuation roll considered in this study) was purchased from the Nelson Mandela Bay Municipality, with the valuations applicable in this study becoming effective (for property rates purposes) on 1 July 2008.For the purposes of the 2007/2008 valuation the Nelson Mandela Bay Municipality outsourced the valuation task, which was awarded to a private-owned contractor, namely eValuations (Weyers, 2011).Historical sales price data for residential property stands in the neighbourhood of Walmer, Nelson Mandela Bay, that were traded at least once during the period 1995 to 2009 was also obtained.This data was purchased from the South African Property Transfers Guide (SAPTG).All transactions that were not arm's-length ones were excluded from the analysis.Data from the ABSA house price index (Port Elizabeth and Uitenhage) was then used to adjust assessed values and house prices to 2009 constant rands to control for real estate market fluctuations.Adjusting actual sales prices to control for house price inflation is a relatively common approach when the data originates from different years (Cummings & Landis, 1993;Carroll & Clauretie, 1999;Leggett & Bockstael, 2000;Cho, Bowker & Park, 2006;Cotteleer & Van Kooten, 2012). The selection of appropriate structural and neighbourhood characteristics was guided by a study by Sirmans, MacPherson and Zietz (2005).A total of 11 independent variables were thought to influence house prices in the Walmer neighbourhood, namely the number of bedrooms, the presence of a garage, the presence of air-conditioning, the number of bathrooms, the age of the house, the size of the erf, the number of storeys, the presence of an electric fence, the presence of a swimming pool, the distance to the Walmer Township and the distance to the nearest school. A lack of house characteristic data on the municipal database necessitated the physical collection of data.Information on the structural characteristics of houses in the Walmer neighbourhood was collected via personal interviews during January 2010.TABLE 1 presents the descriptive statistics of the dependent and independent covariates employed in the study.Source: Authors' calculations The average house in the sample has 3.6 bedrooms, 2.67 bathrooms, is 55.45 years old, has an erf size of 1776 square metres, has 1.18 storeys, and is located 1469 metres from the nearest school and 1799 metres from the Walmer Township.The majority of houses in the sample have a garage and a swimming pool, although less than half of the houses have air-conditioning or electric fencing.The average sales price is R1 626 395 and the average assessed value is R1 784 135.As can be seen from TABLE 1, assessed values appear to overvalue prices at the low end of the price spectrum, and undervalue prices at the top end.Unfortunately, a more thorough critical assessment of the municipal assessment model is not possible, since the assessment is outsourced to a privately owned contractor who is under no obligation to release the details of its operation. EMPIRICAL RESULTS It was first considered whether or not there were any significant differences between actual sales prices and assessed values.The correlation coefficient for the 170 observations was 0.79, indicating an imperfect overlap.Actual sales prices are generally lower than assessed values, although actual sales prices have a larger standard deviation (see TABLE 1).This is also apparent from Figures 2 and 3. 
As can be seen from Figures 2 and 3, the distribution of the assessed values has fewer observations in the tails of the distribution compared to actual sales prices. A paired t-test was also conducted in order to determine whether or not a significant difference between the mean values was present. The t-statistic of 2.1 was greater than the critical value of 1.96. This led to the rejection of the null hypothesis of no significant difference between the two mean values.

Although there is evidence of divergence between actual sales prices and assessed values, hedonic price models based on assessed values and actual sales prices can still result in similar coefficient estimates of location-specific amenities (Cotteleer & Van Kooten, 2012). Therefore, in order to compare the coefficient estimates derived using actual sales prices as the dependent variable with those using assessed values as the dependent variable, a SUR model was estimated; TABLE 2 presents the results. In the estimation of the SUR model, m = 1 represented the equation with sales prices as the dependent variable and m = 2 represented the equation with assessed values as the dependent variable (see Equation 3). Three model specifications were estimated, namely the linear, semi-log and double-log. This allows for goodness-of-fit comparisons based on R-squared values. All models were estimated using Stata Version 11.0 (Damore & Nicholson, 2014). However, when the two hedonic price equations were estimated independently using robust standard errors the results were unaffected.

A visual inspection of the coefficient estimates in the SUR model indicates that all the coefficients had similar signs in the actual sales price and assessed value equations, except for distance to the nearest school in all three of the models, age, number of bedrooms and presence of a garage in the semi-log model and age in the double-log model. The coefficients of the number of bedrooms and the age variables are counterintuitive in the linear model. However, the international literature reveals that it is possible for hedonic price regressions to reveal counterintuitive signs on the coefficients of structural house characteristics (Sirmans et al., 2005). More specifically, a review of 78 hedonic price studies by Sirmans et al. (2005) revealed that the coefficient on the age variable was positive seven times. The same review found that the number of bedrooms coefficient was negative in nine out of 40 cases (Sirmans et al., 2005). Overall, the models explained variation in actual sales price and assessed value fairly well, with R-squared values ranging from 0.41 to 0.54. Based on the results of the SUR model, the hypothesis that all 11 coefficients included in the linear model (excluding the constant) are equal was rejected with near certainty. More specifically, the Wald statistic was 119. This was greater than the chi-squared critical value of 19.68 (with 11 degrees of freedom). The hypothesis that all 11 coefficients are equal in the semi-log model was also rejected with near certainty (the Wald statistic was 118.5). Finally, the Wald statistic of 88 generated by the double-log model indicates that the coefficients in this model are also significantly different. These results are similar to the results obtained in the Cotteleer and Van Kooten (2012) study, in which the coefficient estimates of 29 house characteristics were too dissimilar to assume that they were equal in both equations.
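Because both equations contain exactly the same regressors, the SUR point estimates coincide with equation-by-equation OLS, and only the cross-equation error covariance is needed to construct the Wald test of β1 = β2 reported above. The Python sketch below illustrates that computation and the chi-squared decision rule with 11 degrees of freedom. It uses simulated placeholder data and assumes homoskedastic errors; it is not the study's Stata code and does not reproduce the reported Wald values.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Simulated placeholder data: 170 properties, 11 characteristics plus a constant.
n, k = 170, 11
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y_sales = X @ rng.normal(size=k + 1) + rng.normal(scale=0.5, size=n)
y_assessed = X @ rng.normal(size=k + 1) + rng.normal(scale=0.5, size=n)

# With identical regressors, SUR/feasible GLS point estimates collapse to OLS.
XtX_inv = np.linalg.inv(X.T @ X)
b1 = XtX_inv @ X.T @ y_sales       # actual sales price equation (m = 1)
b2 = XtX_inv @ X.T @ y_assessed    # assessed value equation (m = 2)

# Cross-equation error covariance (2 x 2), needed for the Wald test.
E = np.column_stack([y_sales - X @ b1, y_assessed - X @ b2])
Sigma = E.T @ E / n

# Wald test of H0: beta1 = beta2, excluding the constant, under homoskedasticity:
# Var(b1 - b2) = (s11 + s22 - 2*s12) * (X'X)^-1 restricted to the slope coefficients.
d = (b1 - b2)[1:]
V = (Sigma[0, 0] + Sigma[1, 1] - 2 * Sigma[0, 1]) * XtX_inv[1:, 1:]
wald = float(d @ np.linalg.solve(V, d))

crit = chi2.ppf(0.95, df=k)        # about 19.68 for 11 degrees of freedom
print(f"Wald = {wald:.1f}, 5% critical value = {crit:.2f}, reject H0: {wald > crit}")
```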
The majority of the property characteristics included in the estimated SUR model display the correct signs, with the exception of the age (age of the house) variable (positive sign) and the number of bedrooms (negative sign). A possible explanation for the positive sign on the age coefficient in both equations is the fact that the Walmer neighbourhood is one of Nelson Mandela Bay's oldest and most affluent suburbs. Buyers perhaps prefer the older, more traditional homes in the suburb. The other perplexing finding is the negative sign on the bed (number of bedrooms) coefficient. One would expect house prices and assessed values to be positively related to the number of bedrooms. The results of this study show the opposite. Perhaps more value is placed on the size of the bedrooms as opposed to the number.

The influence of the other variables on house prices and assessed values is fairly predictable. Properties situated on bigger erfs are valued more highly than properties situated on smaller erfs. The number of storeys, the number of bathrooms, the presence of a swimming pool, the presence of an air-conditioner, the presence of a garage and the presence of an electric fence all have positive influences on both house prices and assessed values.

CONCLUSION

This study employed data from the suburb of Walmer, Nelson Mandela Bay, to investigate the effectiveness of using assessed values as proxies for actual sales prices in a hedonic price framework. In order to do this, a SUR model was estimated and used to build test statistics with which to compare the estimated parameters in the actual sales price equation and the assessed value equation.

The results of this study rejected the hypothesis that the coefficient estimates for house characteristics generated by a hedonic price model using assessed values as the dependent variable are similar to those estimated using the actual sales price as the dependent variable. In addition to this, clear differences exist between the distributions of actual sales prices and assessed values, with actual sales prices being, on average, lower than assessed values. This does not necessarily imply that they are different in an economically relevant manner (which would be of interest to policy makers). However, although there are clear data advantages to using assessed values as the dependent variable, actual sales prices reflect true market conditions more accurately than assessed values. Economic intuition, thus, suggests that actual sales prices are preferred to assessed values. This should act as a caution to hedonic practitioners that the use of assessed values as the dependent variable should be avoided if actual transaction data is available.

FIGURE 2: Actual sales price distribution
FIGURE 3: Assessed value distribution
v3-fos-license
2021-08-08T06:16:23.782Z
2021-07-30T00:00:00.000
236947140
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1422-0067/22/15/8221/pdf", "pdf_hash": "65052887c83440f302910e69fa7822fbb09a6ddc", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:344", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "sha1": "a1f33c10e07525853d2ee828f65a72764f714322", "year": 2021 }
pes2o/s2orc
CYP2E1 in Alcoholic and Non-Alcoholic Liver Injury. Roles of ROS, Reactive Intermediates and Lipid Overload CYP2E1 is one of the fifty-seven cytochrome P450 genes in the human genome and is highly conserved. CYP2E1 is a unique P450 enzyme because its heme iron is constitutively in the high spin state, allowing direct reduction of, e.g., dioxygen, causing the formation of a variety of reactive oxygen species and reduction of xenobiotics to toxic products. The CYP2E1 enzyme has been the focus of scientific interest due to (i) its important endogenous function in liver homeostasis, (ii) its ability to activate procarcinogens and to convert certain drugs, e.g., paracetamol and anesthetics, to cytotoxic end products, (iii) its unique ability to effectively reduce dioxygen to radical species causing liver injury, (iv) its capability to reduce compounds, often generating radical intermediates of direct toxic or indirect immunotoxic properties and (v) its contribution to the development of alcoholic liver disease, steatosis and NASH. In this overview, we present the discovery of the enzyme and studies in humans, 3D liver systems and genetically modified mice to disclose its function and clinical relevance. Induction of the CYP2E1 enzyme either by alcohol or high-fat diet leads to increased severity of liver pathology and likelihood to develop ALD and NASH, with subsequent influence on the occurrence of hepatocellular cancer. Thus, fat-dependent induction of the enzyme might provide a link between steatosis and fibrosis in the liver. We conclude that CYP2E1 has many important physiological functions and is a key enzyme for hepatic carcinogenesis, drug toxicity and liver disease. The Discovery of CYP2E1 Early studies of the metabolism of ethanol, independent of alcohol dehydrogenase (ADH), were performed using a mutant strain of deermouse (Peromyscus maniculatus) that is genetically deficient in low-Km ADH. Nonetheless, these mice were found to eliminate ethanol by a previously unidentified enzyme system. One of the enzyme candidates proposed was a cytochrome P450 active in ethanol oxidation in liver microsomes [1], which was also metabolized other short chain alcohols [2] and appeared to be inducible by ethanol treatment in animal models [3]. The exact mechanisms of P450-dependent ethanol oxidation were initially unclear but were associated with the production of reactive oxidative species (ROS), including hydroxyl radicals, which can indirectly oxidize ethanol [4,5]. The finding of an ethanol-dependent increase in P450-mediated oxidation [3,6] was important for the isolation of a specific form of ethanol inducible P450 and to understand the direct P450-mediated oxidation of ethanol. One such enzyme was purified from ethanol-and benzene-induced rabbits and from ethanol-treated rats [7,8]. This enzyme had the highest ethanol oxidation capacity of several different P450 forms isolated [7]. The corresponding cDNA was cloned in 1987 [9] and the enzyme was named cytochrome P450 2E1 (CYP2E1) in 1991 [10]. Expression, Functions and Cellular Fate of CYP2E1 CYP2E1 is well conserved across mammalian species, indicating an important physiological function. In humans, the CYP2E1 gene has nine exons located on chromosome 10 and spans 11,413 base pairs [11]. Substantial inter-ethnic polymorphisms exist in CYP2E1. 
However, only very rare variants causing amino acid shifts have been described [12], and despite several epidemiological studies there is no clear evidence that any polymorphic variants have any functional relevance [13,14]. One recent study suggested a link between CYP2E1-333A > T and NASH, the authors showed increased inflammation and NASH in patient biopsies with the TA allele, largely mediated by a small increase in interferoninducible protein 10 [15]. However, because only a relatively low number of patients were studied, much credence cannot be given to these findings unless they are reproduced independently. In liver, CYP2E1 is expressed mainly in the endoplasmic reticulum (ER) of the hepatocytes, but is also found in the hepatic Kupffer cells [16][17][18]. Hepatic CYP2E1 is also present in the mitochondria, by translocation, following expression in the nuclear genome [19][20][21] and the plasma membrane [19,[22][23][24]. However, the magnitude of induction of CYP2E1 in mitochondria is smaller than that of ER-resident CYP2E1. CYP2E1 metabolizes a variety of small, hydrophobic substrates and drugs (reviewed in [13,14,21,25]). It is responsible for the metabolism of many toxic or carcinogenic chemicals, including chloroform and benzene, and drugs, such as paracetamol, salicylic acid and several inhalational anesthetics (e.g., isoflurane, sevoflurane and halothane). It therefore follows that conditions which elevate the expression of CYP2E1 can increase the damage caused by conversion of drugs to toxic intermediates [26]. The bioactivation of several pre-carcinogens by CYP2E1 has been discussed in relation to the development of cancers, particularly hepatocellular carcinoma (HCC) [27,28]. In addition, CYP2E1 metabolizes endogenous substances including acetone, acetol, steroids and polyunsaturated fatty acids, such as linoleic acid and arachidonic acid to generate ω-hydroxylated fatty acids [29][30][31][32]. CYP2E1 also metabolizes ethanol and other short chain alcohols, however its contribution to the overall ethanol clearance is low and P450-dependent alcohol metabolism does not influence overall ethanol clearance in rats [33]. Despite its importance as a metabolic enzyme the crystal structure and binding sites were only determined relatively recently, in 2008 [34,35]. CYP2E1 is regulated by multiple, distinct mechanisms at the transcriptional, posttranscriptional, translational, and post-translational levels ( Figure 1) [36][37][38][39][40]. CYP2E1 expression is elevated in response to a variety of physiological and pathophysiological conditions, such as starvation and uncontrolled diabetes, and also by ethanol, acetone and several other low molecular weight substrates [7,29,[41][42][43][44][45]. However, it is primarily regulated at the post-transcriptional and post-translational levels [46]. Substrate-induced enzyme stabilization is the most important regulatory mechanism for CYP2E1 [47,48]; substrates and other chemicals binding to the substrate binding region stabilize the enzyme and prevent degradation by the proteasome-ER complex [38,48,49], involving the UBC7/gp78 and UbcH5a/CHIP E2-E3 ubiquitin ligases [50]. Thus, ER-mediated degradation is mainly active on CYP2E1 in the absence of substrates, whereas ligand-stabilized CYP2E1 is degraded more slowly by the autophagic-lysosomal pathway [51]. 
Due to its important physiological functions, the expression of CYP2E1 is under tight homeostatic control and multiple endogenous factors regulate CYP2E1 mRNA stability and protein expression including hormones (such as insulin, glucagon, growth hormone, adiponectin and leptin), growth factors (such as epidermal growth factor and hepatocyte growth factor) and various cytokines (reviewed in [40]). As previously mentioned, the CYP2E1 gene is highly conserved and no functionally important genetic variants have been described. This indicates an important endogenous role, which is supported by our findings, wherein the knockdown of human CYP2E1 expression in the in vivo relevant three-dimensional (3D) liver spheroids [52] causes dramatic cell death in hepatocytes (unpublished observations from our lab). It is clear that CYP2E1 has important effects during catabolic conditions as the gene is transcriptionally induced during starvation [41]. The hepatic production of glucose is essential under starvation conditions with the primary supply of brain glucose, and about 10% of plasma glucose originating from acetone [53]. CYP2E1 readily oxidizes acetone to acetol [36], which is subsequently converted to pyruvate and then to glucose during gluconeogenesis. In addition, during conditions of starvation energy supply from fatty acids is essential and CYP2E1 is efficient in the ω-oxidation of fatty acids. Mechanisms of Action of CYP2E1; Radical Mediated Toxicity CYP2E1 is unique in that its heme iron is constitutively in the high spin state. In the cytochrome P450 redox cycle, substrate binding is necessary in order to transfer the low spin form into a high spin form. The conversion of low spin (three electrons in the outer Fe 3+ shell with opposite spin) to a high spin (parallel spin of the three outer shell electrons) form of P450 can be determined by absorption spectra analyses where the high spin form has a peak at 390 nm. The constitutive high spin form of CYP2E1 facilitates electron transfer to dioxygen in the absence of the substrate and is the major reason for CYP2E1 being a 'leaky' enzyme which generates ROS such as superoxide and hydrogen peroxide. In the presence of iron, hydrogen peroxide is split yielding hydroxyl radicals formed in an iron-catalyzed Haber-Weiss reaction. Such reactions occur spontaneously in an environment enriched in hydrogen peroxide and non-heme iron. The hydroxyl radicals generated via the action of CYP2E1 can react with the hydrogen on the α-carbon of ethanol yielding a cytotoxic ethanol radical [54], this radical is oxidized spontaneously to acetaldehyde. The extent of contribution of non-heme Fe 2+ in the reaction cycle of CYP2E1 is unknown. The enzyme can oxidize several different substrates based on a conventional cytochrome P450 redox cycle and it is presumed that both reaction mechanisms work in parallel with regard to ethanol and other short chain aliphatic alcohols. Thus, it is difficult to distinguish ROS production by CYP2E1 from ROS production from the uncoupling of the nicotinamide adenine dinucleotide phosphate acid (NADPH)-dependent electron transport chain and cytochrome P450 reductase. Accordingly, many experiments have been conducted in the presence of EDTA-chelated iron, which causes uncoupling and nonspecific generation of ROS, including the hydroxyl radical by extracting a proton from ethanol, yielding acetaldehyde ( Figure 2) [4,55]. 
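For readers less familiar with the iron chemistry invoked above, the reactions usually meant by iron-catalysed Haber-Weiss chemistry are the Fenton reaction together with the reduction of ferric iron by superoxide. These are standard textbook equations, added here only for clarity and not reproduced from the cited studies:

```latex
\mathrm{Fe^{2+} + H_2O_2 \rightarrow Fe^{3+} + OH^{-} + {}^{\bullet}OH} \quad \text{(Fenton reaction)}
\mathrm{Fe^{3+} + O_2^{\bullet -} \rightarrow Fe^{2+} + O_2}
\mathrm{O_2^{\bullet -} + H_2O_2 \rightarrow O_2 + OH^{-} + {}^{\bullet}OH} \quad \text{(net Haber-Weiss reaction)}
```

The hydroxyl radical produced in this way is the species proposed to abstract the hydrogen on the α-carbon of ethanol, yielding an ethanol radical that is spontaneously oxidized to acetaldehyde, as described in the text.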
As such, determination of the true contribution of CYP2E1-mediated ROS formation, and of its effects on the subsequent development of alcoholic liver disease (ALD), has been based on the use of CYP2E1-specific antibodies, knockdowns and CYP2E1 transgenic animals.

Figure 2 (caption): The generated superoxide is dismutated to hydrogen peroxide (dashed arrow), which is cleaved in the presence of non-heme iron (Fe2+), yielding hydroxyl radicals. These can extract the hydrogen on the α-carbon of ethanol, creating an ethanol radical which is spontaneously converted to acetaldehyde. Non-heme iron chelated to EDTA enhances the cleavage of hydrogen peroxide, yielding the reactive hydroxyl radical. Basic mechanism as described in [54]. POR, P450 reductase.

The high spin nature of the CYP2E1 heme iron also makes the enzyme unique in that it can reduce compounds. The most well-known reaction is the reduction of carbon tetrachloride into the corresponding radicals that efficiently induce lipid peroxidation [56]. This reaction is believed to be the major cause of the hepatotoxicity of carbon tetrachloride. In addition, halothane is reduced in a similar manner by CYP2E1 to a reactive radical, and binding of this radical metabolite to CYP2E1 converts the enzyme into a cell surface autoantigen believed to contribute to halothane hepatitis (Figure 3) [29]. In addition to halothane, other anesthetics, such as enflurane, isoflurane and desflurane, are metabolized by CYP2E1 to trifluoroacetylated components, some of which may be immunogenic [57,58]. Similar autoimmune consequences of CYP2E1 action are also found in CYP2E1-mediated formation of hydroxyethyl radicals, which bind to cellular proteins, causing the production of autoantibodies in alcoholics [59].

Figure 3 (caption): Halothane [60,61] (A) and carbon tetrachloride [62] (B). Aerobically, halothane undergoes cytochrome P450 catalyzed oxidation to trifluoroacetic acid (TFA), bromide and a reactive intermediate that can acetylate liver proteins and produce neo-antigens which stimulate an immune reaction that mediates severe hepatic necrosis. Anaerobically, halothane is reduced to a radical that can generate the metabolites chlorotrifluoroethane (CTE) and chlorodifluoroethylene (CDE), covalently bind proteins generating autoimmune reactions and also induce lipid peroxidation.

CYP2E1 in ALD

ALD is the most frequent liver disease in Europe, causing approximately 500,000 deaths per year; HCC resulting from pathological changes in the liver contributes substantially to this death rate. There are strong genetic determinants in the development of ALD (e.g., PNPLA3 and TM6SF2), and only approximately 10-20% of alcoholics develop cirrhosis of the liver [63]. ALD and NAFLD have several common mechanisms; generally, fibrosis occurs in response to enhanced ROS levels, lipid mediators and pro-inflammatory cytokines. Thus, ALD and NAFLD are mechanistically similar and share histopathological features, particularly in terms of CYP2E1 induction and oxidative stress (reviewed in [64]). In addition, they share common genetic risk factors. CYP2E1 is the most relevant CYP in ALD [65]: it is highly inducible, has high catalytic activity for ethanol [25] and is prone to futile cycling in the absence of substrate, producing ROS [66]. CYP2E1 is regarded as a 'leaky' enzyme due to loose coupling of the CYP redox cycle, permitted by the constitutively high-spin state of the heme iron, and therefore it has a great capacity to produce oxyradicals and initiate lipid peroxidation [67][68][69][70][71].
Thus, CYP2E1 may be important in mediating the effects of ethanol on ALD via increased lipid peroxidation [67]. In order to study the link between CYP2E1 and ALD, Morgan et al., generated CYP2E1 transgenic mice which exhibited increased serum ALT levels, higher histological scoring and ballooning hepatocytes with alcohol diet [72]. Using a transgenic mouse model expressing extra copies of human CYP2E1, Butura et al., showed increased liver injury and expression of stress related genes with alcohol diet [73]. Microarray analyses revealed that enhanced expression of structural genes, particularly cytokeratin 8 and 18, may be related to the observed pathology and they were suggested as biomarkers for ALD [73]. JunD, part of the transcription factor complex AP-1, was induced by CYP2E1 and alcohol, and its expression correlated with the degree of liver injury. This transcription factor complex is also linked to increased macrophage activation. Furthermore, JunD also has a role in hepatic stellate cell activation and regulates the cytokine interleukin 6 [73]. A second approach is to study CYP2E1 knockout mice in comparison to wild-type mice. Abdelmegeed et al., showed that that aged wild-type mice had increased hepatocyte vacuolation, ballooning, degeneration, and inflammatory cell infiltration compared with CYP2E1-null mice [74]. They also found that the aged wild-type mice had increased hepatocyte apoptosis, hepatic fibrosis, levels of hepatic hydrogen peroxide, lipid peroxidation, protein carbonylation, nitration and oxidative DNA damage, indicating an endogenous role for CYP2E1 for these events. Another approach to study the influence of CYP2E1 on ALD involves the use of CYP2E1 inhibitors. Chlormethiazole is a specific CYP2E1 inhibitor [75] and has a pronounced inhibitory effect on ALD in the intra-gastric alcohol rat model [76]. Similar experiments were also conducted using diallyl sulfide and phenylethyl isothiocyanate as CYP2E1 inhibitors demonstrating a protective effect against hydroxyradical formation from ethanol and lipid peroxidation and inhibition of some pathological scores [77,78]. Together, these studies indicate a significant contribution of alcohol-dependent induction of CYP2E1 for the development of ALD [79]. The major cause is its ability to increase ROS formation after ethanol treatment, but other factors controlling the redox properties of the liver also contribute to the observed pathology. Intestinal CYP2E1 in ALD In addition to causing gut dysbiosis, alcohol increases CYP2E1 levels and nitroxidative stress in the intestinal epithelium similarly as in the liver [80][81][82][83][84]. This causes intestinal leakiness followed by increased circulating endotoxin levels [82,[85][86][87]. Endotoxin can initiate a hepatic necroinflammatory cascade starting from increased levels of NF-κB and release of inflammatory cytokines, such as TNF-α by Kupffer cells [85,88,89]. These results suggest a role of intestinal protein nitration in mediating alcohol-induced gut leakiness and subsequent hepatic injury in a CYP2E1-dependent manner. Even though fatty acids induce CYP2E1 in liver, as discussed later in this review, it is unclear whether this holds for intestinal CYP2E1 and increases gut leakiness playing a role in the development of NAFLD. NASH patients exhibit increased gut leakiness [90] and dysbiosis (reviewed in [91]) and gut inflammation and dysbiosis augments hepatic inflammation and fibrogenesis in mouse NASH models. 
However, the role of intestinal CYP2E1 and oxidative stress for gut leakiness was not tested [90,92,93]. It has been hypothesized that ethanol produced by microbiota fermenting dietary sugars could cause dysbiosis, increased CYP2E1 levels and nitroxidative stress in NAFLD patients (reviewed in [91]) and maybe endotoxin levels are thus lower in patients with alcoholic cirrhosis compared with non-alcoholic cirrhosis [87]. CYP2E1 in NAFLD NAFLD has been known for over 40 years; despite this, the underlying mechanisms remain poorly understood [94]. NAFLD refers to a continuum of liver diseases beginning with non-alcoholic fatty liver (NAFL), an accumulation of hepatic lipids not explained by alcohol consumption. In some cases, this can progress towards non-alcoholic steatohepatitis (NASH), fibrosis and cirrhosis and eventually HCC. Most NAFLD patients are obese and exhibit mild systemic inflammation, which induces insulin resistance and plays a role in the mechanism of liver damage [95][96][97][98]. NASH encompasses varying degrees of liver injury and is now recognized as the hepatic component of metabolic syndrome [99][100][101][102]. Liver cirrhosis is part of alcoholic steatohepatitis (ASH) and NASH, for which the only curative solution is often liver transplantation, and can progress to HCC in 3-4% of cases. The major causes of liver fibrosis are: hepatitis C infection (33%), alcohol (30%) and NASH (23%) [103]. The individual risks for liver cirrhosis caused by NASH and alcohol are largely determined by genetic risk factors, with polymorphisms in PNPLA3 and TM6SF2 and also MBOAT7, HSD17B13 and PCKS7 being the major common genetic determinants [104]. The multiple parallel hit model aims to explain the initiation and progression of NAFLD and has, in recent years, superseded the more simplistic two-hit model [101,105]. It suggests that multiple concurrent environmental and genetic insults, such as insulin resistance, oxidative stress-induced mitochondrial dysfunction, ER stress, endotoxin-induced TLR4-dependent release of inflammatory cytokines, and free fatty acid (FFA) accumulation combine to produce cell death and damage [106]. Oxidative stress mediated by ROS likely plays a primary role as the initiator of hepatic and extrahepatic damage and can cause damage in myriad ways by peroxidation of cellular macromolecules [107]. Oxidative stress can lead to lipid accumulation both directly and indirectly, most simplistically, ROS can peroxidate cellular lipids. Presence of these peroxidated lipids increases post-translational degradation of ApoB, preventing hepatocellular lipid export and leading to lipid accumulation. Alternatively, ROS can directly peroxidate proteins such as ApoB, directly preventing their function and producing a similar effect [108]. The connection between CYP2E1 and NASH was first suggested by Weltman et al., in 1996, where elevated levels of CYP2E1 were observed in steatosis and NASH patients, particularly in the centrilobular region [109] corresponding to the site of maximal hepatocellular injury in NASH [41,110]. This connection is based on the high propensity of CYP2E1 to generate ROS, even in the absence of substrates (Figure 4). In addition, CYP2E1 levels are elevated in obesity, steatosis and NASH in both humans and rodents [111][112][113]. CYP2E1 Links Insulin Resistance and NAFLD The increased circulating levels of ketone bodies and fatty acids observed in obesity, steatosis and NASH may induce CYP2E1 and insulin resistance [66,106,114,115]. 
In addition, Cyp2e1 knockout mice are protected against high-fat diet-induced obesity and insulin resistance, and the production of proinflammatory cytokines in adipose tissue was prevented [116]. CYP2E1 was recently shown to be linked to insulin resistance via the anti-apoptotic protein Bax inhibitor-1, which plays an important role in the regulation of CYP2E1 [117,118]. Furthermore, the repressive effects of insulin on CYP2E1 levels are lost in insulin resistance, commonly associated with NAFLD and NASH [106,115].

The Role of CYP2E1 in Hepatic Lipid Accumulation

Experiments preventing the degradation of CYP2E1 in mice by ablation of E3 ubiquitin ligase have demonstrated that the elevation of CYP2E1 alone in this model is insufficient to induce NASH. Concurrent elevation of CYP2E1 and induction of hepatic lipid accumulation from enhanced liver fat production or ingestion of a high-fat/high-carbohydrate diet is required to induce NASH-like symptoms [32]. Conversely, CYP2E1 stabilization with CHIP knockout in mice was associated with increased functional activity together with microvesicular fat accumulation and increased lipid peroxidation by activation of the hepatic JNK cascade [119]. This occurred even though the mice were fed a non-fat/carbohydrate-enriched diet, suggesting that overexpression of hepatic CYP2E1 and the consequent oxidative stress was sufficient for NASH development [119]. Further research is required to elucidate the complexities of the role of CYP2E1 in hepatic lipid accumulation.

Complementary Roles of CYP4A and CYP2E1 in Lipid Oxidation through PPARα

Animal studies have suggested complementary metabolic roles of CYP2E1 and CYP4A in lipid oxidation and the production of oxidative stress [66,120]. Besides CYP2E1, only CYP4A11 is responsible for the ω-hydroxylation of medium-length chain carboxylic acids in humans [121,122]. Silencing of CYP2E1 in mice fed a methionine-choline deficient diet increased CYP4A levels and the animals displayed elevated lipid peroxidation and NASH-like symptoms [120]. In addition, antibodies against CYP2E1 inhibited lipid peroxidation in microsomes from wild-type mice, but antibodies against CYP4A had little effect [120]. The inverse was found with microsomes from Cyp2e1 knockout mice, where antibodies against CYP4A blocked lipid peroxidation, whilst antibodies against CYP2E1 had no effect. Thus, it could be concluded that CYP4A serves as an initiator or catalyst of oxidative stress as a complementary pathway to CYP2E1. In humans, CYP4A11 is elevated in NAFLD patients [123]. For this reason, it is more difficult to study the role of CYP2E1 alone in oxidative stress. Structure-activity relationship studies on CYP4A11 and its orthologs and CYP2E1 have confirmed similarity in their potential to be inhibited [122]. This was further tested using a strong inhibitor of CYP4A as a new class of drug candidate for targeting CYP2E1. These drugs, 12-imidazolyl-1-dodecanol and 1-imidazolyldodecane, are competitive inhibitors of the active site of CYP2E1 and restored intracellular redox balance via reduction of ROS and lipid peroxidation in vitro as well as in rats fed a high-fat diet or alcohol. Such ω-imidazolyl-alkyl derivatives inhibiting CYP2E1 may serve as a possible new therapeutic approach to NAFLD and especially to NASH [124,125].
However, it is unclear whether dual inhibition of CYP4A and CYP2E1 or only inhibition of CYP2E1 is required for these effects. De novo lipogenesis and fatty acid oxidation are implicated in NAFLD pathogenesis, although lipid uptake, storage and export also play a role [126,127]. Fatty acid oxidation is a cyclic process in which fatty acids are shortened, releasing acetyl-CoA after each cycle [128,129]. One of the main regulators of this process is the peroxisome proliferatoractivated receptor-α (PPARα) [130], However, other factors are also involved in fine-tuning the process. Both CYP4A and CYP2E1 enzymes appear to interact with the PPARα pathway. Thus, CYP4A genes are partly controlled by PPARα; Kroetz et al., showed that the induction of CYP4A in diabetes and under starvation conditions in rats was dependent on PPARα expression [131]. Centrilobular fat accumulation and upregulation of PPARα and PPARαmediated pathways in Cyp2e1-null mice fed with ethanol indicates an interplay between CYP2E1 and PPARα-mediated fatty acid homeostasis [132]. Increased mitochondrial fat oxidation was speculated to be due to the PPARα mediated induction of carnitine palmitoyl transferase I. These data suggest that CYP2E1 and ethanol can regulate PPARα-mediated fatty acid homeostasis, but PPARα only becomes important when the CYP2E1 level is low in high-fat conditions. The physiological relevance of this scenario is dubious since high-fat conditions induce CYP2E1 expression. Abdelmegeed et al., showed a significant fat-induced increase in mitochondrial CYP2E1 in both wild type and PPARα-null mice [133]. PPARα-deficient mice showed greater mitochondrial dysfunction, regardless of diet, evidenced by reduced expression of mitochondrial 3-ketoacyl-CoA thiolase. However, the resultant increase in oxidative stress was more prominent in PPARα-null mice, which exhibited higher levels of CYP2E1 than the corresponding wild-type mice. These data suggest that mitochondrial CYP2E1 might play a role, at least partially, in mediating high-fat induced NASH development in PPARα-null mice. Taken together, even though these studies show interplay between CYP2E1, CYP4A and PPARα, all of which play a role in NASH development and progression, the clinical relevance has not yet been demonstrated. Possible Role of Mitochondrial CYP2E1 in NAFLD Different CYP2E1 isoforms exist in several cell compartments including the ER and mitochondria as previously mentioned [82,134]. In recent years, the role of non-ER-based CYP2E1 in the development and progression of NAFLD has gathered interest. Currently, the differences, if any, between isoforms in terms of induction or substrates is not well understood. Mitochondrial CYP2E1 is expressed at approximately 30% of the level of the microsomal CYP2E1 under basal conditions in rats and is associated with elevated mitochondrial oxidative stress [135]. It is present in two forms, one highly phosphorylated form mediated via cAMP-dependent protein kinase A, and one shortened amino terminaltruncated form [19,20,135]. The regulation of these modifications is unclear, however, they are hypothesized to cause conformational changes and altered interactions with molecular chaperones and signal recognition particles, directing the CYP2E1 to the mitochondria. Differences in mitochondrial CYP2E1 from its microsomal counterpart in both alcoholic fatty liver disease (AFLD) and NAFLD have been reported. 
A few studies have suggested that mitochondrial CYP2E1 is a major source of alcohol and drug-induced oxidative stress [112,136,137]. Mitochondrial CYP2E1 may be more responsible for the damage to mitochondrial function and membrane and may contribute to the biochemical and toxicological effects which were previously ascribed to CYP2E1 in the ER. Whether CYP2E1 in both ER and mitochondria work simultaneously or sequentially, and whether mitochondrial CYP2E1 exerts more pronounced effects on mitochondrial dysfunction in AFLD and NAFLD, is unclear due to lack of specific inhibitors [82]. Mitochondrial CYP2E1 may have a longer half-life than CYP2E1 in the ER, possibly due to unavailable ER degradation by the ubiquitin-proteasome pathway [138]. Kupffer Cells and CYP2E1 in NAFLD The progression of NAFLD to NASH is characterized by increased inflammation and fibrosis. Therefore, liver-resident immune cells, such as Kupffer cells, are implicated. Their activation is driven by several factors, including cytokines and cell death [139]. Macrophages may be activated when hepatocytes die as a result of ROS and lipid-induced stress, their contents are released and detected by macrophages as damage-associated molecular patterns such as HMGB1 and caspase-cleaved keratin 18 [140]. CYP2E1 is also inducible in Kupffer cells [16][17][18] and the lipid peroxidation product 4-hydroxynonenal upregulates transforming growth factor-β expression in macrophages, causing further inflammation [141]. Macrophages with stably increased CYP2E1 expression (murine RAW 264.7 macrophages transfected with CYP2E1) displayed increased levels of CD14/Tolllike receptor 4, NADPH oxidase and H 2 O 2 , accompanied by activation of ERK1/2, p38, and NF-κB in [142]. Apart from mitochondria-derived ROS, NADPH oxidase 2 (NOX2) activation in liver-infiltrating macrophages has also been reported to contribute to oxidative stress-induced liver damage in NAFLD [143]. On the other hand, the amount of CYP2E1 detected in rat Kupffer cells was 10-fold lower than in hepatocytes [142] and, taking into consideration the comparative number of different liver cell types, the hepatocyte localization of CYP2E1 will be of utmost importance for causing oxidative stress. CYP2E1-Mediated ROS Production and Lipid Peroxidation As previously mentioned, ROS can be produced from many cell systems including the mitochondrial respiratory chain [144], the cytochromes P450 [145], and oxidative enzymes [146]. When compared with other cytochromes P450, CYP2E1 possesses a remarkably high NADPH oxidase activity, resulting in significant production of ROS, such as hydrogen peroxide, superoxide anion radicals and hydroxyl radicals in the presence of iron catalysts [49,65,67,82]. ROS production from CYP2E1 causes a free radical chain reaction with unsaturated fatty acids generating toxic lipid intermediates, a reaction magnified by the presence of free iron. Such lipid peroxidation products, e.g., the biologically active aldehydes, hydroxynonenal, 4-hydoxyhydroperoxy-2-nonenal and malondialdehyde, can also modify the integrity of cellular membranes and damage proteins and DNA [141,147]. ROS-mediated activation of the JNK pathway can interfere with insulin sensitivity through phosphorylation of IRS-1 and IRS-2 and impair glycogenesis via action on GSK3, leading to increased gluconeogenesis [32,148]. 
In addition, ROS increase the expression of several cytokines, including transforming growth factor-β, interleukin 8, tumor necrosis factor-α and Fas ligand [149][150][151][152]. Both cytokines and lipid peroxidation products may act together to trigger the diverse lesions of NASH [153]. By-products of ROS-induced damage, such as 4-hydroxynonenal and 3-nitrotyrosine, are significantly increased in the plasma and liver, respectively in NAFLD and NASH patients [134,154]. CYP2E1 is crucial in this regard as the production of ROS when it is induced, e.g., in response to FFAs, underlies much of this process. In vivo data on the role of CYP2E1 in lipid peroxidation comes from rodent studies [74,78] and from the observation of CYP2E1 induction in human liver in NASH [109]. Using CYP2E1 inhibitors, Morimoto et al., found a relationship between CYP2E1 and lipid peroxidation in rats with ALD [78]. Besides increased lipid peroxidation, CYP2E1 was found to increase hepatic nitroxidative stress [77]. Abdelmegeed et al., used young and aged female wild-type and Cyp2e1-null mice to show that aged wild type mice had increased hepatic fibrosis, levels of hepatic hydrogen peroxide, lipid peroxidation compared with Cyp2e1-null mice [74]. These data are consistent with a role of CYP2E1 in development of liver fibrosis. There may also be a relationship between CYP2E1 and mitochondrial dysfunction due to mitochondrial CYP2E1-mediated ROS production [137]. In addition, lower ROS detoxification and lipid peroxidation-derived reactive aldehydes play a role in mitochondrial stress associated with NAFLD development and progression [155,156]. This can trigger a vicious cycle, further increasing ROS generation through abnormal electron leakage. Together, these studies largely agree that CYP2E1-mediated increased nitroxidative radicals, lipid peroxidation, and post-translational protein modifications are the main mechanisms by which CYP2E1 likely plays a prominent role in NAFLD development and progression [82]. Expressed simply, CYP2E1 is a major source of hepatic ROS. The action of ROS from CYP2E1 and other sources can induce lipid peroxidation and cause other non-specific damage to cellular macromolecules. Therefore, conditions that induce the expression of CYP2E1 will also increase the production of ROS and ROS-related damage. The link between oxidative stress and the development and progression of ALD and NAFLD has led to the study of the potential of antioxidants for prevention or treatment for these diseases using either Mediterranean diet or drugs. Clinical trials have shown that vitamin E can ameliorate NAFLD by attenuating oxidative stress and inflammation (reviewed in [157,158]); however, the safety of prolonged vitamin E use is uncertain [159][160][161]. In addition to vitamin E, flavonoids have anti-inflammatory and antioxidant properties and reduce the expression of CYP2E1 (reviewed in [162]). For instance, quercetin and several saponins, alkaloids, terpenoids and polyphenols have been tested in vivo and in vitro and many have shown promise for NASH treatment (reviewed in [163]). Unfortunately, these effects have not persisted under clinical trial conditions thus far. In Vitro Models for Examining the Role of CYP2E1 and FFA on Liver Fibrosis and Damage The liver is composed of hepatocytes (~60%) and non-parenchymal cells (NPCs) (~40%). The main types of NPCs are stellate cells, Kupffer cells and sinusoidal endothelial cells. 
NAFLD and NASH are not formed solely by the action of hepatocytes but are rather the results of complex interactions between the NPCs, hepatocytes and external factors. For this reason, models with only hepatocytes or hepatocyte-like cells cannot accurately recapitulate NAFLD development and progression compared with heterocellular models. However, by comparing the monocellular and heterocellular NASH models the role of NPCs in the development and progression of NAFLD can be studied [164]. In vitro models can be divided into groups based on the cell types and culture methods employed. Many recent advances in in vitro models for NAFLD and liver toxicity in general use 3D cultures, often due to the increased physiological relevance these models can offer. There are multiple 3D culture techniques ranging from matrix-based systems to suspension cultures, both with and without liquid flow. In addition, tissue-on-a-chip models have been developed and show great promise. Some advances in modelling NAFLD were recently reviewed by Soret et al. [165]. Human primary hepatocytes (PHH) dedifferentiate in two-dimensional culture systems precluding their useful application to long-term experiments, such as the development of NAFLD. Compared with traditional monolayer cell culture models, 3D Spheroid models have been shown to more accurately mimic the in vivo environment [52]. PHH 3D spheroids maintain tissue-like architecture, cell-cell interactions and hepatic phenotype and can be used to model the onset of NAFLD ( Figure 5) [164,[166][167][168]. Using such a 3D PHH spheroid model with NPCs, we found that NPCs and FFA induce CYP2E1 on the mRNA and protein level [164]. In some donors, heterocellular spheroids showed elevated CYP2E1 protein expression in the presence of NPCs only, without added FFA. This might be indicative of the fact that increased lipid levels during steatosis might stimulate induction of liver fibrosis by the subsequent induction of CYP2E1 causing an increase ROS. The contribution of CYP2E1 in NASH may indeed be of importance, but more studies are needed in this topic. Conclusions It is evident that CYP2E1 has important functions for both lipid and glucose homeostasis as well as being an important enzyme in toxicology. The enzyme is highly conserved with essentially no functionally different genetic variants, emphasizing its important endogenous functions, many of which may still be unknown. The toxicologically relevant functions are to a great extent related to the high-spin nature of the iron in the enzyme, allowing effective reduction of dioxygen and other compounds in the absence of bound substrate, as well as both reductive and oxidative radical formation by the enzyme. The enzyme action is important for generation of ALD and NASH and it is likely that the link between steatosis and NASH could to some extent be explained by the fact that excess lipids highly induce the hepatic levels of CYP2E1, thus resulting in ROS stress and increased lipid peroxidation, key events for development of NASH. Although much has been learned, there are still many factors to consider for the future. CYP2E1 resides mainly in the ER of hepatocytes where it takes part in the metabolism of fatty acids, acetone and other endogenous compounds. The function of mitochondrial CYP2E1, if any, is unknown. The contribution of CYP2E1 to elimination of ethanol is still controversial and conclusions differ greatly between authors. 
In addition, the relationship between specific CYP2E1-mediated oxidation of ethanol and nonspecific, oxyradical-mediated oxidation is unclear. CYP2E1 expression is elevated following ethanol treatment, but its rate of ethanol oxidation is very low compared with that of ADH. Furthermore, ADH is strongly induced by high ethanol levels and might represent the major component of adaptive ethanol oxidation. The role of CYP2E1 in the development of NASH is also still unclear. Further experimentation is needed to show directly whether CYP2E1 enhances the production of mediators that activate stellate cells to produce profibrotic cytokines; indeed, the effect of CYP2E1 on the activation of liver endothelial cells has not been described. The elevation of CYP2E1 following lipid treatment may have anti-steatotic effects because of higher rates of lipid degradation. Inhibitors of CYP2E1 have been shown to be effective against the development of ALD, but they must also be examined for their ability to prevent NASH.
Interstage flow matrices: Population statistic derived from matrix population models Many population statistics describe the characteristics of populations within and among species. These are useful for describing population dynamics, understanding how environmental factors alter demographic patterns, testing hypotheses related to the evolution of life history characteristics and informing the effective management of populations. In this study, we propose a population statistic: the interstage flow. The interstage flow is defined as the product of the element in the ith row, the jth column of the population projection matrix and the jth element of the normalized stable stage distribution. The sum of the interstage flow matrix elements is equal to the population growth rate (PGR), which is the dominant eigenvalue of the population projection matrix. The interstage flow matrix elements allow decomposition of PGR into component contributions made by transitions between developmental stages. We demonstrate the utility of interstage flow matrices using matrix population models from the COMPADRE plant matrix database. We compared interstage flows among four life history/functional groups (FGs) (semelparous herbs, iteroparous herbs, shrubs and trees) and described how PGR reflected individual transitions related to stasis, fecundity and growth. We found that the individual flows are different among FGs. Synthesis. The proposed population statistic, the interstage flow matrix, describes the contribution of individual developmental stage transitions to the PGR. The flow of individuals between developmental stages differs in distinctive ways among different life histories and FGs. The interstage flow matrix is a valuable statistic for describing these differences. | INTRODUC TI ON Detailed demographic data have been compiled by plant ecologists for populations that span a range of taxonomic, life history and environmental conditions.Researchers have developed a number of statistics to describe these data, and the development of convenient population statistics has been an important aspect of theoretical population ecology.Matrix population models (MPMs) can be used to derive many useful population statistics, such as population growth rate (PGR), generation time, age at maturation, sensitivity and elasticity.These statistics are helpful for describing population dynamics, understanding how environmental factors alter demographic patterns and testing hypotheses related to the evolution of life history characteristics (Salguero-Gómez, 2017;Salguero-Gomez et al., 2016;Silvertown et al., 1993Silvertown et al., , 1996)). The growing availability of plant demographic data has allowed more robust hypothesis tests and spurred innovative new directions in comparative plant demography.For example, Silvertown et al. (1996) calculated MPM elasticities of about 90 plant species and showed that the relative influence of different developmental stage transitions (e.g.recruitment from seed, stasis, transitions to larger size classes) on population growth varied systematically across plant functional groups (FGs).These differences likely reflect morphological, physiological and environmental constraints that influence the evolution of plant life history.In 2015, the COMPADRE database (Salguero-Gómez et al., 2015;www. compa dredb. org) began compiling published plant MPMs in a standardized and accessible format.Salguero-Gomez et al. 
(2016) used COMPADRE to quantitatively describe the life history strategies of 418 plant species explicitly in terms of their demographic characteristics.They found that species assorted along two relatively independent strategies: a slowly growing, long-lived strategy and a reproduction-focused strategy. In this study, we introduce a statistic called the interstage flow, which is expressed in matrix form and describes the flow of individuals between developmental stages or ages.The approach provides information that is not provided by standard population statistics based on the population projection matrix such as elasticity. Elasticity describes the proportional change in PGR with a proportional change in a population matrix element, while interstage flow describes the contribution of demographic transitions to PGR itself. Comparing both interstage flow and elasticity can help to better understand the ecological characteristics of species.Furthermore, the elements of a population projection matrix, that is, survival, growth and fecundity, do not explicitly describe the demographic contribution of the individuals transitioning between classes.For example, a particular developmental stage might have high seed production, but if the proportional abundance of individuals in that stage is low, its contribution to PGR may be minimal.In contrast to the population | Definition of interstage flow The dynamics of populations over a discrete time interval can be described as follows: where A is the population projection matrix, and n t indicates a vector of population sizes of each stage at time t.We propose a metric (statistic): the interstage flow matrix F IS .It is defined as follows: where a ij is a matrix element of A. w j is the jth element of the stable stage distribution (w) of matrix A. The stable stage distribution is normalized such that the sum of all elements is equal to 1, that is, where d denotes the matrix dimension (MD) of A. We present how to obtain the interstage flow matrix in Figure 1 using a population projection matrix of a Japanese perennial herb, Trillium apetalon (Ohara et al., 2001), as an example.The population projection matrix A is and the normalized stable stage distribution can be obtained from the matrix as: (1) 0.451 0.643 0 0 0 0.021 0.8 0 0 0 0.08 0.981 Therefore, the (2, 1) element of the interstage flow matrix is 0.451 × 0.402 (=0.181) and the (1, 4) element of the interstage flow matrix is 5.13 × 0.08 (=0.410).These are not the probabilities of transitions between stages, but rather the normalized number of individuals transitioning between stages.Then, the interstage flow matrix is The unique property of f IS ij is that the relationship between the PGR (λ) and the elements of the flow matrix is as follows: Equation (4) implies that the elements of the interstage flow matrix quantify the extent to which each transition between developmental stages contributes to the PGR. 
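The definition in Equation (2) and the summation property in Equation (4), whose displayed forms were lost in extraction above, can be checked numerically. The sketch below uses Python/NumPy rather than the R used for the paper's analyses, and it reconstructs the Trillium apetalon projection matrix from the numbers scattered through the text; the placement of the zero entries and of the fecundity term 5.13 is our inference, not a verbatim copy of the published matrix. If the reconstruction is faithful, the printed flows should land close to the 0.181 and 0.410 quoted above, and all flows should sum to the dominant eigenvalue.

```python
import numpy as np

# Projection matrix for Trillium apetalon, reconstructed from the numbers quoted
# in the text (stage order: seedling, one-leaf, three-leaf, flowering plant).
# The placement of the zeros and of the fecundity term 5.13 is our inference and
# may not exactly match the published matrix.
A = np.array([
    [0.0,   0.0,   0.0,  5.13],    # seed production by flowering plants
    [0.451, 0.643, 0.0,  0.0],
    [0.0,   0.021, 0.8,  0.0],
    [0.0,   0.0,   0.08, 0.981],
])

# Dominant eigenvalue (PGR) and its right eigenvector (stable stage distribution)
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
lam = eigvals[k].real
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                      # normalize so that the elements sum to 1

# Interstage flow matrix, Equation (2): f_ij = a_ij * w_j
F_IS = A * w                         # broadcasting multiplies column j by w_j

print("lambda =", round(lam, 3))
print("w =", np.round(w, 3))                    # w[0] and w[3] should be near 0.402 and 0.08
print("sum of flows =", round(F_IS.sum(), 3))   # equals lambda, Equation (4)
print("flow (2,1) =", round(F_IS[1, 0], 3))     # compare with 0.181 quoted in the text
print("flow (1,4) =", round(F_IS[0, 3], 3))     # compare with 0.410 quoted in the text
```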
If matrix A is primitive, it is confirmed by the strong ergodic theorem that the stable stage distribution (the vector of population size at time t) is: which is proportional to the normalized stable stage distribution (w): that is where N t = ∑ d j=1 n j,t .During a single timestep, N t w j individuals, the number of individuals at stage j, flow to several stages, including stage j.These flows reflect the demographic processes of survival and reproduction.During the survival process, N t w j individuals flow to stage i with probability a ij , and the number decreases because a ij is less than 1.The amount of interstage flow from stage j to stage i (the number of survivors) is N t a ij w j .In the reproduction process, N t w j individuals reproduce their offspring with per-capita fecundity a ij .The amount of interstage flow in the reproduction process is similar to that in the survival process, N t a ij w j .The sum of all interstage flows is j=1 a ij w j , which must be equal to N t+1 because the sum of the surviving individuals and new offspring at the next time step is N t+1 .This can be proven by the following mathematical calculations: The number of n j,t increases or decreases as a single time step proceeds.This was calculated using Equation (1) as: Equation ( 6) can be written based on Equation (5) as follows: The sum of the elements of the left-hand side of the equation is N t+1 , and the sum of the elements of the right-hand side is It is supposed that the population dynamics described by Equation (1) grow at a PGR and with a structure proportional to the stable stage distribution after sufficient time has elapsed (p.86 in Caswell, 2001).Hence, N t+1 = N t .Therefore, ∑ d i=1 ∑ d j=1 a ij w j = .This means that all interstage flow matrix elements sum to the PGR.We provide another proof of Equation ( 4) in Appendix S1. | Relationship between interstage flow and elasticity Elasticity is frequently used to quantify relative change in the PGR per relative change in matrix elements (Caswell et al., 1984;De Kroon et al., 1986;Kaneko & Takada, 2014;Pfister, 1998;Takada & Kawai, 2020;Yokomizo et al., 2017).Patterns of proportional values of interstage flow and elasticity do not necessarily mirror each other.We illustrate this using the following population matrices that only differ in matrix elements 7). (5) | COMPADRE plant matrix database We described how interstage flows (Equation 2) are related to life | Matrix selection We used matrices satisfying the following eight criteria: 1.No maximum stage-specific survival in the submatrix U exceeded one. 2. At least one element in submatrices F or C was greater than zero. 3. Matrices were irreducible and primitive.This is because we considered that the population density in a stage does not become zero or oscillate between years if the stage distribution is stable.A matrix is irreducible or primitive if d is the dimension of the population projection matrix of A (Caswell, 2001). 4. Dimensions were equal to or larger than four because small matrices cannot reflect stage-specific information about transitions. Note, however, that an interstage flow matrix can be obtained even from matrices with small dimensions. 5. Matrices were for unmanipulated populations.We removed matrices for experimentally manipulated populations. 6. Matrices satisfied A = U + F + C. For some populations, this equation does not hold true, for various reasons (e.g.errors in the database). 7. 
Higher stages in matrices are for larger sizes, active or matured stages.In some matrices, dormant stages are in larger rows than active ones with the same size categories.We did not use those matrices in this study, because we assumed transitions to larger rows are classified as growth. 8. Only for analyses involving elasticity, we used matrices in which all matrix elements can be classified into only one category out of stasis, fecundity and growth (See the next section). When multiple matrices were available for a species, we calculated the mean matrix and the average of each element.However, we could not calculate a mean matrix if there were multiple population projection matrices with different dimensions.In such cases, we selected the population projection matrix associated with the study with the longest research period. | Species functional classification We categorized the selected matrices into the following growth form categories: herbaceous perennials, shrubs and trees based on the 'OrganismType' classification in the metadata of the COMPADRE database.We also categorized herbaceous perennials as either semelparous or iteroparous.Semelparous herbs die immediately after reproduction and typically have high fecundity to compensate for the loss of future reproductive opportunities (Charnov & Schaffer, 1973;Pianka, 1976Pianka, , 1978)).The trade-off between fecundity and adult survival in semelparous herbs affects elasticity (Takada & Kawai, 2020).Semelparous herbs were identified from the elements of population projection matrices by using the method described by Takada et al. (2018). We obtained interstage flow matrices for 14 species of semelparous herbs, 143 species of iteroparous herbs, 36 species of shrubs, 93 species of trees and 286 species in total.In addition, we obtained elasticity matrices for 14 species of semelparous herbs, 135 species of iteroparous herbs, 36 species of shrubs, 91 species of trees and 276 species in total. | Classification of matrix elements into stasis, fecundity and growth We classified each interstage flow matrix element as reflecting the developmental stage transitions of fecundity, growth to larger size classes, or stasis within the same stage using an approach similar to the method used by Silvertown et al. (1996) (see Figure 3a).The diagonal or upper diagonal elements of U (transitions to the same or lower stages) were classified as stasis and lower diagonal elements of U (transitions to larger stages) were classified as growth.All elements of F (number of seeds produced per individual) were classified as fecundity.All elements of C (clonal reproduction) were classified as growth. An example of the classification process is shown in Figure 3b.In this case, each element is classified into stasis, fecundity or growth. The flow matrix is derived according to Equation (2) using the population projection matrix A. We summed the elements of the flow matrix for stasis, fecundity and growth, respectively (see Figure 3b).(a) | Statistical analysis We calculated interstage flow matrices for the selected plant species based on Equation (2).We used Dirichlet regression to describe how the patterns of interstage flows vary across plant FGs. 
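Before the regression details that follow, a minimal sketch of the classification and summing step just described may help: partition A into U, F and C, compute the flow matrix, and collapse it into stasis, fecundity and growth totals. The function below is an illustration under the stage-ordering assumptions stated in the text, not the authors' code.

```python
import numpy as np

def flow_components(U, F, C):
    """Collapse an interstage flow matrix into stasis, fecundity and growth totals.

    U, F and C are the survival/transition, sexual-reproduction and clonal
    submatrices with A = U + F + C; stages are assumed to be ordered so that
    larger row/column indices correspond to larger or later stages, as in the text.
    """
    A = U + F + C
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    lam = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()

    flow_U = U * w                     # f_ij = a_ij * w_j, split by submatrix
    flow_F = F * w
    flow_C = C * w

    lower = np.tril(np.ones_like(A), k=-1)          # strictly below the diagonal
    stasis = (flow_U * (1.0 - lower)).sum()         # diagonal and upper triangle of U
    growth = (flow_U * lower).sum() + flow_C.sum()  # transitions to larger stages + clonal
    fecundity = flow_F.sum()

    # Proportional values sum to 1 because stasis + fecundity + growth = lambda
    return np.array([stasis, fecundity, growth]) / lam
```

Dividing by lambda gives the proportional values that feed the ternary plots and the Dirichlet regression discussed next.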
Dirichlet regression is used when a set of bounded variables has a constant sum, for example with proportions and probabilities (Adler et al., 2014).In our study, the sum of proportional values of interstage flows related to stasis, fecundity and growth was one, making Dirichlet regression a suitable choice.MD (the number of developmental stage transitions described by the matrix) is not an innate characteristic of plant species, but rather is defined by researchers and reflects the design and constraints of individual studies.MD has been shown to influence other statistics derived from matrices, such as PGR and elasticity (Ramula et al., 2008;Ramula & Lehtila, 2005).We, therefore, included MD as an explanatory variable in our analysis.Categorical variables for FG, PGR, MDs, interaction terms of FG and PGR, interaction terms of FG and MD, and interaction terms of PGR and MD were included in our model as explanatory variables.All analyses used R version 4.1.0software (R Core Team, 2021), and we used the DirichReg() function in the DirichletReg package (Maier, 2021). | RE SULTS FGs had distinct patterns of flow allocation among stasis, fecundity and growth (Table 1; Figures 4 and 5).The ternary plots in Figure 4 illustrate these differences.Semelparous herbs had interstage flows that tended to be dominated by growth and fecundity (Figure 4a). The spatial median, which shows the central tendency of the distribution of interstage flows, of semelparous herbs is located in an area with large values of growth and fecundity, suggesting that the interstage flows of semelparous herbs tended to be dominated by growth and fecundity.At the other end of the spectrum, the interstage flows of trees tended to be dominated by stasis (Figure 4d).Iteroparous herbs and shrubs tended to have intermediate patterns that spanned a range between those two extremes (semelparous herbs and trees) (Figure 4). PGR and MD also influenced patterns of interstage flow. Growing populations tended to have large interstage flows related to fecundity and relatively small flows related to both growth and stasis (Figure 5).Overall, PGR had a statistically significant influence on interstage flows related to stasis and growth, but the influence differed among FGs (see interaction terms between FG and PGR in Table 1).Interstage flows related to stasis tended to decrease with MD, but this influence also varied among FGs and PGR (Table 1). The pattern of elasticities among FGs broadly mirrored those observed for interstage flow.Trees tended to have large elasticities for stasis, and semelparous herbs tended to have large elasticities for growth, whereas iteroparous herbs and shrubs had elasticities that spanned an intermediate range between growth and stasis.But in contrast to interstage flow, elasticities generally ascribed less of an influence on fecundity (Figure 4).Also, in contrast to interstage flow, the largest elasticities in growing populations tended to be related to growth (Figure 6).Overall, PGR, MD, and their interaction had a statistically significant influence on the elasticities related to stasis, fecundity and growth (Table 2). | Comparison between interstage flow and elasticity We showed the difference between elasticity and interstage flow in Figure 2a,b using a hypothetical example of two population matrices.The results indicate that even if interstage flows of different demographic processes differ, their elasticities can be similar. 
That is, demographic processes can similarly change the PGR proportionally (elasticity represents the proportional change in PGR with a proportional change in a population matrix element), while their contribution to PGR itself (interstage flow) differs.Similarly, the results imply that even if elasticities of two different demographic processes differ, interstage flows of their processes could be similar. Patterns of interstage flow varied among plant functional types (Table 1; Figure 4) in ways that are consistent with life history tradeoffs and that are broadly similar to those based on elasticities. However, interstage flow generally ascribed a larger role for fecundity than did elasticity and that pattern was consistent across FGs (Figure 4).In addition, while growing populations tended to be dominated by interstage flows related to fecundity, they tended to have high elasticities related to growth (Figures 5 and 6).However, these are potentially distinct demographic processes (Salguero-Gómez, 2018). | Retrospective and prospective approaches The different patterns we observed for interstage flow and elasticity reflect their different perspectives on population growth.Interstage flow is derived from the steady-state properties of populations (i.e. stable stage distributions and vital rates).Interstage flow, therefore, describes how PGR would be driven by the different stage transitions.Interstage flow is a retrospective analysis that describes how PGR has been influenced by events and conditions in the recent past.In contrast, elasticity is a prospective analysis that describes the relative contribution of vital rates to expected future PGR after a theoretical perturbation (Caswell, 2001). Retrospective and prospective approaches provide complementary perspectives that are useful in different contexts (Horvitz et al., 1997).For example, elasticity analysis has been commonly used in the management of non-native invasive populations to predict how targeting control efforts at different life stages or vital rates will influence future population growth (Kerr et al., 2016).However, 2. Both the abundance of individuals and the composition functional traits can vary among populations within a species.These intraspecific differences can in turn influence the strength of species interactions and community composition (Start, 2020). Interstage flow could be a useful tool for better understanding the linkages between abundance, functional composition and community dynamics.For instance, interstage flow could be helpful in developing mechanistic stage structured demographic models that link the often stage dependent expression of functional traits and the density dependent dynamics of competitors or predators. 3. Life history stages can vary in their environmental requirements, and this can be an important consideration for conservation efforts.For example, the elevational distribution of plant species is shifting in response to global climate change, but the ability of species to shift their elevational range is complicated by the fact that seedling and adult stages often have different abiotic requirements (Lenoir et al., 2009).Interstage flow could be a useful metric for planning and monitoring conservation efforts such as population translocations. 4. 
Interstage flow may also be useful for addressing questions that involve the link between population dynamics and broader ecosystem properties.For example, plants allocate assimilated energy to processes that support fecundity, growth and survival (Harper, 2010).The degree to which relative allocations vary across FGs or across populations experiencing different environmental conditions can affect ecosystem processes.For example, when resource availability is low or environmental stressors such as herbivory and abiotic stress are high, most of the gross plant assimilated energy may be allocated to support functions related to survival (such as herbivore defence), making less primary production available to consumers (Fridley, 2017) through an increase in biomass and is, therefore, a way to link demography and energy flow. | Interstage flow in integral projection models Integral projection models (IPMs) were developed as an alternative to MPMs about 20 years ago and are in the form of a dynamical equation with time-discrete and stage-continuous scheme (Easterling et al., 2000).The mathematical expression is as follows: where n(x, t) and K(y, x) represent a size distribution at time t (continuous function of size x) and the contribution of individuals with size x to size y called as kernel, respectively.The concept of interstage flow proposed in this paper can be applied to IPMs.The interstage flow is equivalent to the product of the kernel and size distribution, K(y, x)n(x, t) in IPM.The unique property of interstage flow related to the PGR λ holds also in IPMs. Suppose that the dynamics of Equation ( 8) converges to a stable size distribution with growing at a PGR ( ) and the stable dis- projection matrix, the matrix elements of interstage flow explicitly decompose population growth into the contributions made by individuals transitioning between different developmental stages.The idea of interstage flow was introduced by Kawano et al. (1987), but they neither formally interpreted its ecological meaning nor comprehensively analysed its properties.No work has demonstrated how the proportional change of PGR with a proportional change in a population matrix element (elasticity) can be decomposed into interstage flows.Also, no work has been done to test how interstage flow matrices vary across taxa or FGs.In this study we formally define the interstage flow matrix, describe some of its useful properties and demonstrate the relationship between interstage flow and elasticity.We then compare how patterns of elasticity and interstage flow vary across plant functional types and interpret the ecological meaning using 286 plant species from the COMPADRE database.We also explore some potential applications of interstage flow to ecological research. E 1 The transition probabilities between stages and interstage flow of the Japanese perennial herb, Trillium apetalon.Four stages (seedling, one-leaf, three-leaf and flowering) were set to construct the population projection matrix of the species.The population projection matrix and the interstage flow matrix are shown on the top of the figure.The matrix is fromOhara et al. 
(2001) and was constructed using long-term census data of the Japanese perennial herb, Trillium apetalon.The flow chart between stages is shown on the bottom of the figure.The numbers attached to each arrow in the flow chart are the transition probabilities between stages.The normalized stable stage distribution (w) in the centre was obtained from the population projection matrix.Using the stable stage distribution and the population projection matrix elements, the interstage flows between stages were calculated as shown in the right part of the figure (see Equation2).Interstage flows are shown by each arrow on the right and in the interstage flow matrix. a 13 and a 33 .The calculated proportional interstage flows and elasticities are shown in Figure 2a,b, respectively.Elasticities derived from M A and M B are similar, even for elements a 13 and a 33 .In contrast, there are more marked differences in proportional interstage flow especially in a 13 .This indicates that their interstage flows can be different even when population matrices have similar elasticities.Interstage flow and elasticity are related as follows (see Appendix S2 for details):where e kl and f IS ij are the elasticity and flow matrix elements, respectively.Equation (7) indicates that the sensitivity ∕ a kl is the sum of changes in interstage flows when the matrix element changes.In other words, we can decompose elasticity into changes in interstage flows ( f IS ij ∕ a kl ) based on Equation (7) (Figure2c).Elasticity provides information on the predicted proportional influence of matrix elements on PGR, but it does not provide information on how changes in growth rate are driven by changes in transitions of individuals between life stages.The decomposition of elasticity into changes in interstage flows explicitly describes these changes to stage transitions and the sum of change is proportion to elasticity, e kl ∝ Figure 3a).The submatrix U contains the transition and survival rates of individuals in each age or stage.Submatrix F contains the number of seeds produced per individual, and submatrix C contains the clonal reproduction rates. Interstage flows and elasticities.(a) proportions of interstage flows of matrix M A and M B , (b) elasticities of matrix M A and M B , (c) changes in interstage flows, f IS ij ∕ a 33 .a ij is matrix elements of M A and M B .Elasticity to a 33 is e 33 = that, in the (a), the interstage flows are normalized by dividing the population growth rate to compare the interstage flows and elasticities. 
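The comparison between elasticities and proportional interstage flows discussed above can be reproduced with the standard eigenvector formula for sensitivities (Caswell, 2001). The sketch below is illustrative only: the 3 x 3 matrix is hypothetical, because the entries of the authors' matrices M_A and M_B are not recoverable from the extracted text; what carries over is the recipe, with elasticities and proportional flows each summing to one.

```python
import numpy as np

def elasticity_and_proportional_flow(A):
    """Elasticity matrix and proportional interstage flow matrix for a projection matrix A.

    Uses the standard eigenvector formula (Caswell 2001): s_ij = v_i * w_j / <v, w>
    and e_ij = (a_ij / lambda) * s_ij. Proportional flow is f_ij / lambda = a_ij * w_j / lambda.
    """
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    lam = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                              # stable stage distribution

    lvals, lvecs = np.linalg.eig(A.T)            # left eigenvectors of A
    kl = int(np.argmax(lvals.real))
    v = np.abs(lvecs[:, kl].real)                # reproductive values

    S = np.outer(v, w) / (v @ w)                 # sensitivities
    E = (A / lam) * S                            # elasticities, sum to 1
    P = (A * w) / lam                            # proportional interstage flows, sum to 1
    return E, P

# Hypothetical matrix with a fecundity term a_13 and a stasis term a_33 (not M_A or M_B)
M = np.array([[0.1, 0.0, 4.0],
              [0.4, 0.3, 0.0],
              [0.0, 0.5, 0.8]])
E, P = elasticity_and_proportional_flow(M)
print(np.round(E, 3), round(E.sum(), 3))
print(np.round(P, 3), round(P.sum(), 3))
```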
To allow comparison of interstage flows among plant species with different PGRs, we divided the stasis, fecundity and growth by its PGR-defined as proportional values of stasis, fecundity and growth, respectively.The sum of the proportional values of stasis, fecundity and growth equals 1.0.F I G U R E 3 Classification of matrix elements into stasis, fecundity and growth.(a) Population projection matrix A is partitioned into three submatrices: U contains only transitions and survival of existing individuals; F sexual reproduction; C clonal reproduction.The diagonal or upper diagonal elements of U were classified as 'stasis', and lower diagonal elements of U were classified as 'growth'.The elements of F and C were classified as 'fecundity' and 'growth', respectively.(b) A hypothetical example of matrix elements partitioned into stasis, fecundity or growth.The classification for each element is identical between population projection matrix and flow matrix.Flow matrix elements are summed for each class (i.e.stasis, fecundity and growth).The sum of the flow matrix elements is the population growth rate. The interstage flow matrix, like other population statistics, potentially depends on how the population matrix is constructed.For instance, interstage flows and elasticity are dependent on MD.In addition, categorizing matrix elements to different demographic processes involves a degree of ambiguity and discretion.For example, we classified interstage flow matrix elements relating to asexual reproduction (i.e.clonal growth) into growth, the same as transitions into larger size classes (i.e.growth of smaller plants into larger ones). TA B L E 1 Summary of the results of Dirichlet regression for summed interstage flows for (1) stasis, (2) fecundity and (3) growth.FG_Semelparous, FG_Shrub and FG_Tree are categorical variables for semelparous herbs, shrubs and trees, respectively.F I G U R E 4 Ternary plot of interstage flow (a-d) and elasticity (e-h).Semelparous herbs, are plotted in (a) and (e).Iteroparous herbs are plotted in (b) and (f).Shrubs are plotted in (c) and (g).Trees are plotted in (d) and (h).Colour shading indicates population growth rates.Dashed lines indicate a value of 0.5 for each axis.Filled squares indicate the spatial median(Vardi & Zhang, 2000).F I G U R E 5 Dependence of interstage flow related to stasis, fecundity and growth on the logarithmic population growth rate.(a) semelparous herbs, (b) iteroparous herbs, (c) shrubs and (d) trees.The matrix dimensions used in the regression were the mean values of each functional group.is needed to understand how spatial variation in environmental conditions across the introduced range or periodic variation in conditions (such as associated with disturbance) affect the demography and growth of invasive populations.4.1.2| When are interstage flows useful?Elasticity describes the proportional capacity of different life history processes to influence population growth.Interstage flow describes how that capacity is realized in the supply of individuals transitioning between life stages.Patterns of elasticity and flow do not necessarily align across matrix elements.For instance, the contribution to population growth made by the supply of individuals transitioning between life stages (interstage flow) is distinct from the relative capacity of different life history processes to influence population growth (elasticity).For instance, a population could have a large supply of individuals transitioning between a specific life stage even if the 
relative capacity of the life history process to drive population growth is low.Conversely, there can be stage transitions that have a large capacity to influence population growth even if the supply of individuals making the life stage transition is low.We think that this distinction could be important in several circumstances.1. Patterns of interstage flow likely vary among populations within and between species in ways that reflect demographic responses to differing environmental conditions or different demographic contexts, such as those associated with the introduction of non-native populations.Interstage flow may, therefore, be a useful descriptor of demographic variation across populations and as a metric for testing the association between environmental variation and components of population growth. tribution, n * (y), is proportional to a normalized eigen-function, u(y): n * (y) = u(y), where ∫ u(y)dy = 1.Then, Integrating both sides of Equation (9) and using ∫ u(y)dy = 1, we obtain The total sum of interstage flows in IPM is exactly same as PGR, as we proved in Equation (4) in MPMs.This proposed population statistic, the interstage flow matrix, describes the contribution of individual transitions between developmental stages to population growth.We suggest that future research uses the interstage flow matrix to decompose PGR into population projection matrix elements in order to specify contributions to PGRs., x)u(x)dxdy. Dependence of elasticity related to stasis, fecundity and growth on the logarithmic population growth rate.(a) semelparous herbs, (b) iteroparous herbs, (c) shrubs and (d) trees.The matrix dimensions used in the regression were the mean values of each functional group.Summary of the results of Dirichlet regression for summed elasticities for (1) stasis, (2) fecundity and (3) growth.FG_Semelparous, FG_Shrub and FG_Tree are categorical variables for semelparous herbs, shrubs and trees, respectively.
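The IPM analogue of Equations (2) and (4) is scattered across the extracted text above; the lines below give a possible reconstruction from the surrounding definitions, written as our reading of Equations (8) to (10) rather than a verbatim copy.

```latex
% A possible reconstruction of the IPM analogue of Equations (2) and (4),
% assuming the kernel K(y,x), the normalized stable size distribution u(x)
% with \int u(x)\,dx = 1, and the dominant eigenvalue \lambda as defined above.
n(y, t+1) = \int K(y, x)\, n(x, t)\, dx,
\qquad
f^{\mathrm{IS}}(y, x) = K(y, x)\, u(x),
\qquad
\lambda\, u(y) = \int K(y, x)\, u(x)\, dx
\;\Rightarrow\;
\lambda = \iint K(y, x)\, u(x)\, dx\, dy.
```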
Gitelman Syndrome: A Rare Cause of Seizure Disorder and a Systematic Review Gitelman syndrome is one of the few inherited causes of metabolic alkalosis due to salt losing tubulopathy. It is caused by tubular defects at the level of distal convoluted tubules, mimicking a thiazide-like tumor. It usually presents in late childhood or in teenage as nonspecific weakness, fatigability, polyuria, and polydipsia but very rarely with seizures. It is classically associated with hypokalemia, hypomagnesemia, hypocalciuria, hyperreninemia, and hyperaldosteronism. However, less frequently, it can present with normal magnesium levels. It is even rarer to find normomagnesemic patients of GS who develop seizures as the main complication since hypomagnesemia is considered the principal etiology of abnormal foci of seizure-related brain activity in GS cases. Interestingly, patients with GS are oftentimes diagnosed during pregnancy when the classic electrolyte pattern consistent with GS is noticed. Our case presents GS with normal serum magnesium in a patient, with seizures being the main clinical presentation. We also did a comprehensive literature review of 122 reported cases to show the prevalence of normal magnesium in GS cases and an overview of clinical and biochemical variability in GS. We suggest that further studies and in-depth analysis are required to understand the pathophysiology of seizures in GS patients with both normal and low magnesium levels. Materials and Methods Two different databases (PubMed and Scopus) were searched for all case reports and review articles previously published on GS syndrome. Moreover, after taking informed consent from our patient, we included data from the electronic medical record system to use this information for publication purposes. A 22-year-old female was brought to the hospital with the complaint of vomiting, generalized weakness, and two episodes of witnessed generalized tonic-clonic seizures 24 hours prior to the time of admission. She had about 5 episodes of nonbloody nonbilious vomiting. She was nonverbal at baseline but was reported to be more lethargic than usual and had a poor oral intake for the last 2 days and appeared to be in pain. Review of the system was negative for any previous episodes of seizures in the past, fever, diarrhea, abdominal pain, history of diuretic or laxative abuse, any periorbital puffiness, and extremities swelling. She was given lorazepam followed by successful resolution of seizures. Case Presentation On physical examination, she was having borderline low blood pressure close to her baseline (105/56) with HR of 80, RR 18, O 2 sat. 100% on room air. Systemic examination was otherwise unremarkable without any overt signs of dehydration. EKG showed U waves and nonspecific T wave changes. Pertinent labs showed serum blood urea nitrogen (BUN) and creatinine (Cr) of 16 and 0.77, respectively. Serum electrolytes showed serum sodium (Na) of 150 mEq/L, serum potassium (K) of 1.4 mEq/L, serum magnesium (Mg) of 2.8 mg/dL, and serum bicarbonate (HCO 3 ) of 35 mEq/L. Urine electrolytes included urine K 22 mEq/L, urine Na 121 mEq/L, and urine Cl 146 mEq/L. Her transtubular Case Reports in Medicine potassium gradient (TTKG) was 6.82. Complete blood count and liver function panel were within normal limits. Plasma renin activity (PRA) was 0.33 ng/ml/hr, serum aldosterone/ K ratio of 1/1.4, and aldosterone/plasma renin ratio of 3. Differential included primary hyperaldosteronism, vomiting, and Bartter/Gitelman syndrome. 
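The transtubular potassium gradient quoted in the case can be sanity-checked from the reported electrolytes. The short sketch below is only illustrative: the urine and serum osmolalities required by the TTKG formula are not given in the text, so the osmolality ratio is back-calculated from the reported TTKG of 6.82 rather than taken from the chart.

```python
# Transtubular potassium gradient (TTKG) check for the reported value of 6.82.
# Formula: TTKG = (urine K / serum K) * (serum osmolality / urine osmolality).
# The osmolalities are not reported in the text, so the osmolality ratio below is
# back-calculated from the reported TTKG rather than measured.
urine_K, serum_K = 22.0, 1.4          # mEq/L, as reported
reported_ttkg = 6.82

k_gradient = urine_K / serum_K                        # ~15.7
implied_uosm_over_sosm = k_gradient / reported_ttkg   # ~2.3

print(f"urine K / serum K   = {k_gradient:.2f}")
print(f"implied Uosm / Sosm = {implied_uosm_over_sosm:.2f}")
# A TTKG of this magnitude in the face of severe hypokalemia is commonly read as
# inappropriate renal potassium wasting, consistent with a salt-losing tubulopathy.
```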
EEG showed abnormal epileptiform activity in the brain consistent with seizure. Low-normal BP, high urine Cl with low urine Ca, and a history negative for laxative/diuretic intake made GS the more likely differential. Later on, biallelic identification of an inactivating SLC12A3 mutation confirmed the diagnosis of GS. The patient's condition improved with aggressive K replenishment and antiepileptics in the medical ICU. She was later discharged in a medically stable condition and advised to follow up with a nephrologist and neurologist as an outpatient. Literature Search The available literature was systematically searched by three authors independently to retrieve all available material on variable clinical and metabolic presentations in Gitelman syndrome. There was no language filter placed, and articles were collected from their inception till May 2018, using the MEDLINE, Cochrane, Embase, and Scopus databases. Different MeSH terminologies such as "Gitelman," "Gitelman syndrome," "Gitelman disease," and "GS" were combined using the Boolean operators "AND" and "OR" with the terms "hypomagnesemia," "low magnesium," "serum magnesium," "plasma magnesium," and "magnesium levels." Another author collected a few articles through manual search using the reference lists of all publications retrieved through the aforementioned search strategy. Results and Statistical Analysis 4.1. Literature Retrieval and the Results. After a thorough computer literature search, careful verification of references, and screening based upon the titles and abstracts, 122 cases of GS patients from 100 articles were identified for selection. It was ensured that repetitive cases in these articles were excluded. Out of these 100 articles, data were also extracted from articles published in languages other than English. Patients Description. There were a total of 122 patients, including 45% (n = 55) males and 65% (n = 77) females. The age of female patients ranged from 4.8 months to 79 years (mean age 28.5 years), whereas for males, it ranged from 7 months to 80 years (mean age of 27.8 years). The description of patients included in this study is listed in Table 1. Spectrum of Clinical Presentation and Associations. The clinical presentation of Gitelman syndrome was found to be highly variable in the reported patient population. About 30% (n = 36/122) of the patients, including 14% (n = 17/122) pregnant patients, had nonspecific muscle cramps, weakness, fatigability, and anorexia as the main presentation. These were likely due in part to hypokalemia and hypomagnesemia. About 12% (n = 15/122) of the patients had extremity weakness, of which 7% (n = 9/122) presented with bilateral lower limb weakness/paralysis and the rest had quadriplegia as the initial presentation. Interestingly, 10.6% (n = 12/122) of patients had perioral numbness and symptoms related to tetany/carpopedal spasm as the first signs of Gitelman syndrome. About 6% of patients had polydipsia, polyuria/enuresis, and salt craving as the presenting complaint; however, almost half of the total reported patients had some degree of polydipsia and polyuria in addition to their main presenting clinical symptoms. Seven percent (n = 9/122) of patients were completely asymptomatic and were diagnosed on routine lab work, either during routine clinical visits or perioperatively. Only 5.7% of patients (n = 7/122) had GI-related issues such as anorexia, vomiting, constipation, abdominal pain, and weight loss as the main complaint. About 7 cases had no mention of the presenting complaints.
Rest of the patients had their own unique features as seen in Table 1. Our patient had a unique presentation of generalized tonic-clonic seizure despite normal serum Mg levels, which has not been previously reported in the literature. GS was found to be most commonly associated with pseudogout and CPPD crystal deposition in about 10% of patients. Other associations included but not limited to Sjogren's syndrome in 4%, chondrocalcinosis in 3%, and diabetes mellitus (both type 1 and type 2) and primary hyperaldosteronism in about 2% each. A less common association is seen with empty sella syndrome in 2 patients. Seizure disorder as a possible association with GS was previously reported in only one case by Beltagi et al., most likely due to hypomagnesemia [15]. Our patient, however, was unique with no prior history of epilepsy and had a seizure as the very first presentation with normal magnesium levels. Complications Related to Gitelman Syndrome. Complications related to renal, cardiac, and endocrine systems have frequently been reported in the previous cases. Cardiac manifestations ranged from electrolytes related, asymptomatic ECG changes including prolonged Qtc, nonspecific T and U waves to pericardial effusion, and ventricular fibrillation. Reported renal pathologies included glomerulonephritides such as MPGN, FSGS, membranous nephropathy, and also cases of tubulointerstitial nephritis and renal tubular acidosis (RTA). yrotoxic periodic paralysis and hypokalemic periodic paralysis were also seen in a few cases. However, it must be noted that it is rare for two different renal entities to occur at the same time, and several of the studies did not confirm the diagnosis of GS by identifying the inactivation gene mutation leaving open the possibility that underlying pathology may not have been actually Gitelman's. Long-term follow-up is usually required to observe for these complications; our patient, however, had no further follow-up in our hospital and was referred to the neurologist care. Diagnosis and Management with Outcomes. Except for one case (n � 1/122), where there is no mention of the diagnostic method, genetic testing was utilized in 42% (n � 52/122) cases, to definitively diagnose GS. e specific mutations to help make the diagnosis can be seen in Table 1. Almost 56% of patients (n � 68/122) were diagnosed based on the presenting electrolytes abnormalities including serum and urine Na, K, Mg, and Ca used adjunctively with PAR concentration. Although the supportive testing with electrolytes and supplementary tests were highly suggestive of GS in these 68 cases, genetic tests were not done for various reasons. ese included lack of resources, nonavailability of genetic test, and loss of follow-up by the patients to be the major ones. Of note is the serum Mg level in the reported cases. Considering the normal range to be between 0.7 and 1 mmol/L (1.5-2 mEq/L; 1.7-2.4 mg/dL), 55% (n � 66/122) patients had hypomagnesemia, i.e., <0.7 mmol/L, whereas 20% (n � 25/122) had levels 0.7 mmol/L and above. In 31 cases, serum magnesium levels were not reported. ese levels were important as the clinical severity of presentation was reflected by the degree of hypomagnesemia. Electrolytes replacement, NSAIDs, and potassiumsparing diuretics with and without ACE In/ARB's were the mainstay of treatment in almost all of the cases. Outcomes and prognosis were remarkable, and patients fully recovered from their acute presenting symptoms with exception of a few cases. 
ese few cases reported persistent electrolytes abnormalities such as hypokalemia, metabolic alkalosis, hypomagnesemia, occasional paralysis and neurological symptoms, and treatment-related complications (indomethacin-related GI upset and bleeding). Recovery in the other cases is being defined as a sustained increase in electrolytes with magnesium >2, potassium >4, and significant improvement in the symptoms. Around 22% (n � 28/122) cases did not comment on the outcomes. Discussion However, GS can also present with normal serum magnesium levels, and in one case, it has been reported to be in around 20-40% of GS cases [101]. From our review of around 122 cases, 20% (n � 25/122) patients had serum magnesium levels >0.7 mmol/L. Both the groups of GS patients with normal and low magnesium levels largely stay asymptomatic and present later in life. Most present in teenage or adulthood with nonspecific generalized weakness or muscle cramps/fatigability, polyuria, and polydipsia [103]. However, seizure disorder has very rarely been reported as one of the main presenting complaints. Hypomagnesemia and metabolic alkalosis have been proposed as the pathophysiological basis of these rarely reported seizure disorders. Our case reports are unique in this sense that the patient of GS presented with seizure despite having normal serum magnesium levels. In our literature review, only one patient who was reported by Beltagi et al. [15] presented with somnolence and altered mental status and had a focal seizure as a complication. Even in that case, hypomagnesemia can be considered as the cause of epileptiform activity on EEG. is observation prompts us to consider causes other than hypomagnesemia as a culprit of seizure disorder, whenever evaluating the patient with GS. e final diagnosis of GS is based on the triad of clinical symptoms, biochemical abnormalities, and genetic testing [103]. Genetic testing is recommended for all patients, and the diagnosis is confirmed with the biallelic identification of inactivating SLC12A3 mutations [104]. We emphasize after this literature review that contrary to common clinical practice, overall clinical picture with more emphasis on genetic testing is a better strategy to clinch the diagnosis, and the diagnosis of GS can still be made even with normal serum magnesium levels. Treatment. Most patients with GS remain untreated. e observation that chondrocalcinosis is due to magnesium deficiency argues clearly in favor of magnesium supplementation [15]. Most asymptomatic patients with GS remain untreated and undergo ambulatory monitoring, once a year, generally by nephrologists. Lifelong supplementation of magnesium and potassium is mandatory [105]. Cardiac workup should be performed to screen for risk factors of cardiac arrhythmias. All GS patients are encouraged to maintain a high-sodium diet. In general, the long-term prognosis of GS is excellent. Health education with annual regular nephrologist follow-up to evaluate for any developing complications seems to be a reasonable approach. As mentioned in the abstract, GS can be first identified during pregnancy when classic electrolyte abnormalities are noticed on the lab work [106]. Successful pregnancy is possible in majority of the patients; however, miscarriages have also been reported in the literature, which alludes to regular nephrologist follow-up during pregnancy. 
Conclusion (i) GS with a variable biochemical presentation, i.e., a normal serum magnesium level, is a rare but possible finding seen in various clinical settings. (ii) Although exceedingly rare, seizure disorder can be the main clinical presentation of GS. (iii) Causes other than low magnesium levels should be sought to explain seizure disorder in GS. (iv) Further studies are recommended to better understand the pathophysiology of abnormal epileptiform activity in GS. (v) Successful pregnancy is possible in the majority of patients; however, miscarriages have also been reported in the literature, which argues for regular nephrologist follow-up in the pregnant GS patient. Conflicts of Interest The authors declare that they have no conflicts of interest.
Impaired surface expression and conductance of the KCNQ4 channel lead to sensorineural hearing loss KCNQ4, a voltage-gated potassium channel, plays an important role in maintaining cochlear ion homoeostasis and regulating hair cell membrane potential, both essential for normal auditory function. Mutations in the KCNQ4 gene lead to DFNA2, a subtype of autosomal dominant non-syndromic deafness that is characterized by progressive sensorineural hearing loss across all frequencies. Despite recent advances in the identification of pathogenic KCNQ4 mutations, the molecular aetiology of DFNA2 remains unknown. We report here that decreased cell surface expression and impaired conductance of the KCNQ4 channel are two mechanisms underlying hearing loss in DFNA2. In HEK293T cells, a dramatic decrease in cell surface expression was detected by immunofluorescent microscopy and confirmed by Western blot for the pathogenic KCNQ4 mutants L274H, W276S, L281S, G285C, G285S, G296S and G321S, while their overall cellular levels remained normal. In addition, none of these mutations affected tetrameric assembly of KCNQ4 channels. Consistent with these results, all mutants showed strong dominant-negative effects on the wild-type (WT) channel function. Most importantly, overexpression of HSP90β, a key component of the molecular chaperone network that controls the KCNQ4 biogenesis, significantly increased cell surface expression of the KCNQ4 mutants L281S, G296S and G321S. KCNQ4 surface expression was restored or considerably improved in HEK293T cells mimicking the heterozygous condition of these mutations in DFNA2 patients. Finally, our electrophysiological studies demonstrated that these mutations directly compromise the conductance of the KCNQ4 channel, since no significant change in KCNQ4 current was observed after KCNQ4 surface expression was restored or improved. Introduction The KCNQ family of voltage-gated potassium channels (K V 7) plays important roles in brain, heart, kidney and the inner ear. Mutations in four of the five KCNQ channels cause inherited human diseases, including cardiac arrhythmia, epilepsy, deafness, etc. [1][2][3]. The expression of KCNQ4 (K V 7.4) is predominantly detected in the inner ear and the central auditory pathways [4][5][6][7]. In the inner ear, KCNQ4 is highly expressed in the basal membrane of sensory outer hair cells, where it mediates the M-like potassium current I K,n [4][5][6][7][8][9][10]. The activity of the KCNQ4 channel is crucial for the maintenance of hair cell membrane potential and for the K + recycling in the cochlea [10][11][12] For the latter, KCNQ4 provides a major K + efflux pathway from outer hair cells [11,12]. In the mouse model, loss of KCNQ4 function leads to degeneration of outer hair cells and progressive sensorineural hearing loss without vestibular phenotypes [13]. KCNQ4 expression was also detected in the vestibular sensory epithelium and certain tracts and nuclei of the brainstem, however, the physiological significance of the KCNQ4 channel in these cells is currently unknown [4][5][6]. In human, mutations in KCNQ4 cause DFNA2, a subtype of autosomal dominant non-syndromic deafness that is characterized by progressive sensorineural hearing loss [7,13,14]. At young ages, hearing loss in DFNA2 patients is moderate and predominantly affects high frequencies. The hearing loss progresses, usually in less than 10 years, to more than 60 dB with middle and low frequencies also involved [14,15]. 
By the age of 70, all affected individuals in DFNA2 families have severe to profound hearing loss across all frequencies [14,16,17]. There are currently no therapeutic treatments to prevent progressive hearing loss in these patients. Development of such treatments has been hampered by the lack of understanding of the molecular aetiology of DFNA2. Over the last two decades, various pathogenic KCNQ4 mutations have been identified in DFNA2 patients (DFNA2 mutations) [7,15,[18][19][20][21][22][23][24][25][26][27][28][29][30][31]. Among them, the missense mutations L274H, W276S, L281S, G285C, G285S, G296S and G321S are loss-of-function mutations [7,19,29,32,33]. Specifically, electrophysiological studies in Xenopus laevis oocytes and various cell lines have shown that these mutations lead to loss of KCNQ4 currents [7,19,29,32]. Yet, the molecular mechanisms by which these mutations lead to loss of KCNQ4 currents are not well understood. Using immunofluorescent and biochemical approaches, Mencia and colleagues demonstrated that the mutation G296S led to diminished cell surface expression of the mutant channel with a strong dominant-negative effect on WT KCNQ4 channels [29]. Trafficking deficiency of G296S was further confirmed by a separate immunofluorescent study [32]. In the latter, Kim et al. also found that five other DFNA2 mutants, L274H, W276S, L281S, G285C and G321S, are trafficking deficient, although no experiment was conducted to determine whether or not the trafficking phenotype was dominant [32]. However, contrary to these findings, a recent biochemical study demonstrated that the DFNA2 mutations, W276S and G285C, had no significant effects on cell surface expression of the KCNQ4 channel [19] Therefore, it is not clear whether trafficking deficiency is a common mechanism for loss of KCNQ4 function in DFNA2 patients. Like other voltage-gated potassium channel, KCNQ4 channels consist of six transmembrane domain, a pore-forming region, and two intracellular termini. Most of the DFNA2 mutations are clustered around the pore region of the KCNQ4 channel [19,27,31]. Specifically, L274H, W276S and L281S are located within the pore helix; G285C and G285S are substitutions of the first glycine in the signature sequence of the K + -filter (GYG); G296S is located in a group of five highly conserved amino acids connecting the pore-loop and the transmembrane domain S6; G321S is at the junction of the S6 domain and the C-terminal of the KCNQ4 channel. Given the structural and functional significance of the amino acids affected, it is not surprising that these mutations lead to loss of KCNQ4 function. However, the finding that loss of KCNQ4 currents may be caused by decreased cell surface expression is striking and opens a series of important questions in the molecular aetiology of DFNA2. What are the underlying mechanisms for the loss of KCNQ4 currents in DFNA2? Is it because of decreased cell surface expression or impaired conductance of KCNQ4 channels or both? Is decreased surface expression a consequence of degradation or intracellular retention of DFNA2 mutants? Is it possible that DFNA2 mutants are intracellularly retained otherwise functional? Could surface expression of DFNA2 mutants be restored? Most importantly, could restoration of KCNQ4 surface expression rescue the function of these mutant channels? 
The answers to these questions are fundamental to our understanding of the molecular mechanisms underlying hearing loss in DFNA2 and will lay important ground work for a rational design of therapeutic treatments. Generation of KCNQ4 channels on the cell surface requires proper folding, assembly and trafficking of KCNQ subunits. Despite the functional significance of the KCNQ4 channel, little is known about the molecular mechanisms that control these processes. We have recently demonstrated that the cellular level of the KCNQ4 channel is regulated by the HSP90 chaperone pathway [34]. HSP90 is an evolutionally conserved molecular chaperone that plays a central role in the structural maturation and trafficking of numerous proteins involved in signal transduction, including protein kinases, steroid hormone receptors, transcriptional factors, endothelial nitric oxide synthase (eNOS) etc. [35]. HSP90 is also required for the folding and trafficking of various membrane proteins, such as the cystic fibrosis transmembrane conductance regulator (CFTR), the ClC-2 chloride channel [36], the voltage-gated potassium channel hERG and the ATP-sensitive potassium channel (K ATP ) [36][37][38][39]. Most importantly, recent studies have shown that manipulating HSP90 function can be used to rescue folding and trafficking of various mutant proteins for the treatment of human diseases [40][41][42][43][44]. Our previous study in HEK293T cells showed that HSP90a and HSP90b are key players in the molecular chaperone network regulating the cellular level of the KCNQ4 channel [34]. While overexpression of HSP90b promotes KCNQ4 biogenesis, overexpression of HSP90a facilitates ubiquitin-dependent degradation of the channel. Moreover, overexpression of HSP90b dramatically increased the abundance of not only the WT but also the mutant KCNQ4 channels on the cell surface. Cell surface expression of the KCNQ4 channel in HEK293T cells mimicking heterozygous conditions of two DFNA2 mutations, L274H and W276S, could be restored by overexpression of HSP90b [34]. In this study, we investigated the effects of seven loss-of-function DFNA2 mutations, L274H, W276S, L281S, G285C, G285S, G296S and G321S, on the overall cellular level and on the cell surface expression of the KCNQ4 channel using both immunofluorescent and quantitative biochemical approaches. We tested whether these mutations affect subunit interaction and whether they have dominant-negative effects on the WT KCNQ4 function. We also explored the potential of HSP90 overexpression in rescuing cell surface expression of DFNA2 mutants L281S, G296S and G321S. Finally, we tested whether the function of DFNA2 mutations can be rescued by restoration of KCNQ4 surface expression in HEK293T cells mimicking the heterozygous KCNQ4 condition of DFNA2 patients. Chemicals and reagents All chemicals were from Sigma-Aldrich (St. Louis, MO, USA); media and reagents for cell culture were from Invitrogen (Grand Island, NY, USA), unless otherwise indicated. Expression constructs KCNQ4 (NM_004700) was cloned in pCMV6-XL5 vector and then tagged with a Myc or a modified HA epitope in the first extracellular loop of the KCNQ4 channel as described previously [29,32]. These tagged KCNQ4 channels (referred to as Myc-KCNQ4 or HA-KCNQ4) exhibited normal channel properties [29,32]. Constructs of the mutant KCNQ4 channels were generated from the tagged WT constructs using the QuikChange Lighting Site-Directed Mutagenesis Kit (Stratagene, Santa Clara, CA, USA) and verified by DNA sequencing. 
For immunofluorescent microscopy and electrophysiological recordings, the WT and the mutant KCNQ4 channels were subcloned into the pIRES2-DsRed2 vector. In addition, the molecular chaperone HSP90β (NM_007355) was cloned into pCMV6-XL5.

Cell culture and transfection
HEK293T cells (Sigma-Aldrich) were used for all experiments. These cells were maintained according to the manufacturer's instructions. All transfections were carried out using Lipofectamine 2000 as described by the manufacturer (Invitrogen). Following transfection, the cells were incubated at 37°C for 24 hrs.

Immunofluorescent microscopy
HEK293T cells were cultured on glass cover slips and transfected with KCNQ4 channels in pIRES2-DsRed2 (0.4 µg per well in 6-well plates). Twenty-four hours after transfection, the cells were fixed in 4% paraformaldehyde for 5 min., followed by three washes in PBS, and blocked in StartingBlock blocking buffer (Fisher Scientific, Pittsburgh, PA, USA) for 10 min. For permeabilization, the blocking buffer was supplemented with 0.1% Triton X-100. The cells were then treated with mouse monoclonal anti-HA antibody (1:500 dilution) for 1 hr and washed three times with PBS before incubation with goat anti-mouse IgG-FITC (1:600 dilution, 1 hr). After a brief wash with PBST (PBS plus 0.05% Tween 20), the cells were incubated in the dark with DAPI for 5 min., followed by three washes with PBST, and then mounted in ProLong Gold Antifade Reagent (Invitrogen) on glass slides. Fluorescent images were captured using a Carl Zeiss LSM510 confocal microscope. All procedures were performed at room temperature.

Co-immunoprecipitation
Transfected cells were lysed in NP40 lysis buffer supplemented with protease inhibitor cocktail (P8340, Sigma-Aldrich) on ice for 30 min. Cell lysates were then cleared by centrifugation at 18,400 × g for 10 min. at 4°C and incubated with primary antibodies as indicated at 4°C for 16 hrs. The protein complexes were isolated and purified using Dynabeads Protein G (100-04D; Invitrogen) following the manufacturer's protocol and analysed by Western blot (see below).

Isolation of surface KCNQ4 proteins
Twenty-four hours after transfection, cells were washed twice with PBS in situ in 6-well culture plates and treated with mouse monoclonal anti-HA or anti-Myc antibodies (1:500 dilution) for 30 min. to label KCNQ4 channels on the cell surface. Following three washes with PBS, the cells were lysed with NP40 lysis buffer supplemented with protease inhibitor cocktail (P8340; Sigma-Aldrich) on ice for 30 min. The cell lysate was transferred to fresh tubes and centrifuged at 14,000 rpm for 10 min. at 4°C. Surface KCNQ4 proteins were captured using Dynabeads Protein G as instructed by the manufacturer. Proteins eluted from the Dynabeads were analysed by Western blot (see below).

Western blot
Proteins were separated on Criterion™ TGX Precast Gels (Bio-Rad Life Science, Hercules, CA, USA) and transferred to a nitrocellulose membrane (Bio-Rad Life Science). The membrane was probed with appropriate primary antibodies, followed by incubation with HRP-conjugated secondary antibodies, and visualized with SuperSignal West Pico Chemiluminescent Substrate (34077; Fisher Scientific). Chemiluminescent signals were collected with a ChemiDoc XRP Imaging System and analysed with Quantity One software (Bio-Rad Life Science). Each band was quantified as the total pixel value after subtraction of the background and normalized to the loading control protein GAPDH.
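As a rough illustration of the band quantification just described (total pixel value minus background, normalized to GAPDH, then expressed relative to the WT control), the short Python sketch below walks through the arithmetic; the densitometry numbers are hypothetical placeholders, not values from this study.

# Minimal sketch of the band quantification described above: each band is the
# total pixel value minus the background, normalized to the GAPDH loading
# control, and then expressed relative to the WT control. Numbers are hypothetical.
def normalized_band(total_pixels, background, gapdh):
    """Background-subtracted band intensity normalized to the GAPDH loading control."""
    return (total_pixels - background) / gapdh

# Hypothetical densitometry readings from one blot (arbitrary units).
wt_surface = normalized_band(total_pixels=52000, background=2000, gapdh=25000)
mut_surface = normalized_band(total_pixels=12000, background=2000, gapdh=24000)

# Relative surface expression of the mutant, as a percentage of the WT control.
relative_pct = 100 * mut_surface / wt_surface
print(f"Mutant surface KCNQ4: {relative_pct:.1f}% of WT")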
Whole-cell voltage clamp recording
HEK293T cells were trypsinized 24 hrs after transfection, seeded onto poly-L-lysine-coated glass coverslips, and maintained under normal growth conditions for about 4 hrs. Before recording, cells were extensively washed with external solution (10 mM NaCl, 4.5 mM KCl, 2 mM CaCl2, 1 mM MgCl2, 10 mM HEPES, pH 7.4, and osmolarity of 303 mmol/kg). Only healthy looking attached cells expressing the DsRed fluorescent marker were used for recordings. Glass electrodes with resistance ranging from 1.5 to 3.0 MΩ and filled with internal solution (2.5 mM Na2ATP, 135 mM KCl, 3.5 mM MgCl2, 5 mM EGTA, 2.41 mM CaCl2, 5 mM HEPES, pH 7.2, and osmolarity of 300 mmol/kg) were used. Data acquisition was performed using a HEKA EPC-10 amplifier and HEKA PatchMaster software (HEKA, Bellmore, NY, USA) with the high band Bessel filter set to 10 kHz and the low band filter set to 0.2 kHz. The protocol consisted of holding for 1 sec. at −80 mV, followed by depolarization from −80 to +50 mV in 10 mV steps for 1.5 sec., −50 mV for 1 sec. followed by 1 sec. at −80 mV, with 5 sec. in between each testing sequence. Whole-cell current densities (pA/pF) were calculated as the maximal current (pA) divided by the cell capacitance, C-slow (pF). Each day, at least five cells expressing WT KCNQ4 were recorded as an internal control group. All recordings were performed at room temperature.

Statistics
Biochemical results are presented as mean ± SD of at least three independent experiments. The measurements were statistically analysed using a two-tailed unpaired Student's t-test (MS Excel 2010). Significance was reported as *P ≤ 0.05 and **P ≤ 0.01. Whole-cell current data were exported from PatchMaster and current traces were plotted using Origin v8.6 software (OriginLab Corp, Northampton, MA, USA).

Results
Effects of DFNA2 mutations on KCNQ4 expression
The seven loss-of-function DFNA2 mutations, L274H, W276S, L281S, G285C, G285S, G296S and G321S, were introduced individually into the KCNQ4 gene. Both the WT and mutant KCNQ4 channels were tagged with an HA epitope at the extracellular loop between the S1 and S2 domains and expressed in HEK293T cells. Immunofluorescent analyses were carried out using a primary antibody specific to the HA tag and a FITC-conjugated secondary antibody in both permeabilized and non-permeabilized cells. Confocal microscopy demonstrated strong green fluorescence with similar cellular distribution along the secretory pathway in all permeabilized cells expressing the WT or a mutant channel (Fig. 1A), suggesting that none of these mutations had an effect on overall KCNQ4 expression. However, in non-permeabilized cells, striking differences were observed between the WT and the mutant groups. The green fluorescent signal corresponding to cell surface KCNQ4 was robust in cells expressing the WT channel, but was significantly weaker in those expressing KCNQ4 mutants (Fig. 1B), indicating profound effects of these mutations on KCNQ4 surface expression. To confirm these results, we further conducted quantitative biochemical analysis. Surface KCNQ4 channels were affinity-purified from HEK293T cells expressing either the WT HA-KCNQ4 or a mutant HA-KCNQ4 and analysed by Western blot. Compared to the WT control, the relative amount of the KCNQ4 mutants on the cell surface was significantly reduced, respectively, to 7.55 ± 2.32% (L274H), 26.48 ± 6.03% (W276S), 7.16 ± 1.93% (L281S), 35.77 ± 6.08% (G285C), 18.47 ± 5.30% (G285S), 1.78 ± 1.11% (G296S) and 31.51 ± 5.72% (G321S) (Fig. 1C and D). In contrast, no significant changes were detected in total KCNQ4 protein levels (Fig. 1C and E).
These data indicated that all seven DFNA2 mutations tested disrupt the trafficking of the KCNQ4 channel to the cell surface, but not its overall cellular level.

Effects of DFNA2 mutations on KCNQ4 subunit interaction
DFNA2 patients are heterozygous for the KCNQ4 alleles, encoding equal amounts of the WT and mutant KCNQ4 subunits. To examine whether DFNA2 mutations affect heteromeric assembly between the WT and mutant subunits, mutant HA-KCNQ4 channels were co-expressed individually with the WT Myc-KCNQ4 in HEK293T cells at a ratio of 1:1 to mimic the heterozygous condition of DFNA2 patients. The cell lysates were analysed by reciprocal co-immunoprecipitation assays. In forward experiments, interactions between the WT and mutant subunits were assessed by the amount of the WT Myc-KCNQ4 subunit co-precipitated with the mutant HA-KCNQ4 subunits (Fig. 2A). In reverse experiments, the amount of the mutant HA-KCNQ4 subunits co-precipitated with the WT Myc-KCNQ4 subunit was assessed (Fig. 2B). Compared with the WT control (Fig. 2, second lanes), all mutant subunits showed normal abilities to interact with WT subunits, suggesting that tetramerization between the WT and these mutant subunits was not disrupted.

Dominant-negative effects of DFNA2 mutations on WT KCNQ4 function
Since DFNA2 mutants retained their ability to form heteromeric channels with the WT subunit and their overall expression levels remained unaffected, it was important to determine whether trafficking deficiencies of these mutants have dominant-negative effects on the WT subunit. We assessed the abundance of the WT KCNQ4 subunit on the surface of HEK293T cells mimicking the heterozygous condition of DFNA2 patients. Five DFNA2 mutants, L274H, W276S, L281S, G296S and G321S, were co-expressed individually with the WT KCNQ4 at a ratio of 1:1, and all of these mutants showed strong dominant-negative effects on the trafficking of the WT subunit, which was markedly reduced in each case (Fig. 3A and B). Moreover, trafficking deficiencies of the mutant subunits could not be rescued by co-expression with the WT KCNQ4 subunits in these cells (Fig. 3A and B), even though the abundance of the mutant subunits on the cell surface increased significantly in four of the five cases (Fig. 3A and C). The total surface KCNQ4 in these cells, including homomeric WT tetramers, homomeric mutant tetramers and heteromeric tetramers containing both the WT and mutant subunits, was decreased to 39.6 ± 4.0% (WT/L274H), 60.9 ± 5.1% (WT/W276S), 76.3 ± 8.3% (WT/L281S), 37.7 ± 5.3% (WT/G296S) and 60.4 ± 8.2% (WT/G321S) of the normal WT level (Fig. 4B-D and [34]). These in vitro data suggested that KCNQ4 surface expression in the sensory hair cells and neurons of heterozygous DFNA2 patients might be significantly lower than the normal level.

Rescue of KCNQ4 surface expression by HSP90
Various approaches have been developed to rescue trafficking deficiency of pathogenic mutant channels. We have recently found that overexpression of the molecular chaperone HSP90β significantly improved cell surface expression of the WT KCNQ4 and the mutant channels L274H and W276S [34]. In this study, we first tested the effects of HSP90β overexpression on cell surface expression of the homomeric mutant channels L281S, G296S and G321S in transfected HEK293T cells. Western blot showed that surface expression of the mutant channels was significantly increased by HSP90β overexpression (Fig. 4A). Then, we tested whether cell surface expression of the KCNQ4 channel could be restored by HSP90β overexpression in HEK293T cells mimicking the heterozygous condition of DFNA2 patients.
In this case, each mutant channel was co-expressed with the WT KCNQ4 at a ratio of 1:1 plus various amounts of HSP90β. Our data showed that KCNQ4 surface expression in cells expressing WT/L281S or WT/G321S could be restored to a level comparable to that of the WT; in the cells expressing WT/G296S, significant improvements were also observed (Fig. 4B-D).

Effects of DFNA2 mutations on KCNQ4 conductance
Whole-cell currents were recorded in transfected HEK293T cells to evaluate whether improved cell surface expression was sufficient for functional rescue of DFNA2 mutants. For each DFNA2 mutant, we conducted four sets of electrophysiological recordings from cells transfected with (i) a DFNA2 mutant alone; (ii) a DFNA2 mutant plus HSP90β; (iii) a mutant and the WT KCNQ4 at a ratio of 1:1 to mimic the heterozygous condition of the DFNA2 patients; and (iv) a mutant and the WT KCNQ4 at a ratio of 1:1 plus HSP90β. Whole-cell currents were also recorded from cells expressing the WT KCNQ4 under the same conditions as control groups. Outward currents recorded from cells expressing the WT KCNQ4 alone were similar to whole-cell currents reported previously [7,19,29,32,45]. Compared with these currents (Fig. 5A), whole-cell currents recorded from cells transfected with the DFNA2 mutant L281S, G296S or G321S were much smaller and similar to the background levels (Figs 5B, 6A and E). Specifically, the average current density was 31.37 ± 1.20 pA/pF (n = 30) for the WT channel, but 15.51 ± 2.01 pA/pF (n = 15) for the mutant L281S, 11.82 ± 2.7 pA/pF (n = 15) for G296S, 14.89 ± 1.77 pA/pF (n = 12) for G321S, and 6.31 ± 1.01 pA/pF (n = 5) for non-transfected cells (Figs 5H, 6K and L). In cells mimicking the heterozygous condition of the DFNA2 patients, the average current densities were 18.26 ± 2.40 pA/pF (n = 11) for WT/L281S, 17.58 ± 2.7 pA/pF (n = 12) for WT/G296S and 17.28 ± 2.64 pA/pF (n = 7) for WT/G321S (Figs 5H, 6K and L), significantly smaller than the WT level (31.37 ± 1.20 pA/pF, n = 30). Our data demonstrated that the function of DFNA2 mutants could not be rescued by co-expression of the WT KCNQ4 subunit; instead, the mutants had dominant-negative effects on the WT KCNQ4 currents (Figs 5B and C, 6A, C, E, and G). Overexpression of HSP90β increased cell surface expression of the WT KCNQ4 channel by 26 ± 10.93% (Fig. 4A). In agreement with this result, the average current density in cells co-expressing the WT KCNQ4 and HSP90β was higher (42.07 ± 5.13 pA/pF, n = 6, compared to 31.37 ± 1.20 pA/pF, n = 30, for WT alone), although the difference did not achieve statistical significance (Fig. 5A, D and H). However, overexpression of HSP90β had no obvious effect on the whole-cell current of the DFNA2 mutant L281S, G296S, or G321S (Figs 5B, E and H, 6A, B, E, F, K, and L). Similarly, overexpression of HSP90β did not lead to higher current densities in cells mimicking the heterozygous condition of these mutants (Figs 5 and 6), despite significant improvements in KCNQ4 surface expression in these cells (Fig. 4). Notably, the average current density in cells expressing WT/G296S/HSP90β was significantly higher than in cells expressing G296S alone (22.15 ± 3.63 pA/pF, n = 8, compared with 11.82 ± 1.2 pA/pF, n = 12), but it was still much smaller than the WT level (Fig. 6K). The little or no increase in the average current densities after restoration of KCNQ4 surface expression indicates that the DFNA2 mutations L281S, G296S and G321S disrupt the conductance of the KCNQ4 channel.
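To make the current-density comparison reported above concrete (current density = maximal current divided by C-slow, as defined in the Methods, with groups compared by a two-tailed unpaired t-test), here is a small Python sketch; the per-cell values are hypothetical placeholders, and the t-test uses SciPy rather than the Excel workflow used in the study.

# Sketch of the whole-cell current-density and significance calculations:
# current density (pA/pF) = maximal current (pA) / cell capacitance C-slow (pF),
# compared between two groups with a two-tailed unpaired t-test.
import numpy as np
from scipy import stats

def current_density(i_max_pa, c_slow_pf):
    """Whole-cell current density in pA/pF."""
    return np.asarray(i_max_pa) / np.asarray(c_slow_pf)

# Hypothetical per-cell measurements (one entry per recorded cell).
wt = current_density([450.0, 520.0, 480.0, 610.0, 505.0], [15.0, 17.0, 16.0, 19.0, 16.5])
mut = current_density([180.0, 150.0, 210.0, 165.0, 190.0], [14.5, 16.0, 18.0, 15.5, 17.0])

print(f"WT: {wt.mean():.2f} +/- {wt.std(ddof=1):.2f} pA/pF (n={wt.size})")
print(f"Mutant: {mut.mean():.2f} +/- {mut.std(ddof=1):.2f} pA/pF (n={mut.size})")

t_stat, p_value = stats.ttest_ind(wt, mut)  # two-tailed, unpaired
print(f"Two-tailed unpaired t-test: t={t_stat:.2f}, p={p_value:.4f}")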
Discussion
To become functionally active, KCNQ4 subunits must fold and assemble into tetrameric channels in the endoplasmic reticulum (ER) and translocate to the plasma membrane. In human cells, these processes are controlled by a sophisticated molecular network and monitored by the protein quality control system [46][47][48]. According to the current view of ER quality control, only proteins that have attained their native structures in the ER are exported efficiently into later compartments of the secretory pathway and transported to the plasma membrane; misfolded or misassembled proteins are retained in the ER, then dislocated across the ER membrane and degraded through the ubiquitin-proteasome pathway, a process known as ER-associated degradation [49,50]. Mutations in KCNQ4 may cause structural changes in the channel that affect its ability to pass ER quality control. However, our data demonstrated that none of the missense mutations tested, including L274H, W276S, L281S, G285C, G285S, G296S and G321S, affect the cellular level of the KCNQ4 channel, despite their profound effects on cell surface expression (Fig. 1). Consistent with these findings, previous biochemical studies by others also showed that three of these mutations, W276S, G285C and G296S, had no effect on the cellular level of the KCNQ4 channel [19,29]. Taken together, these data suggest that the DFNA2 mutations L274H, W276S, L281S, G285C, G285S, G296S and G321S do not cause major changes in the KCNQ4 channel structure that are recognizable by the ER quality control system [50]. Therefore, these mutant channels evade ER-associated degradation, but fail in trafficking to the plasma membrane [47]. Intracellular retention, instead of degradation, of the mutant channels provided a molecular basis for the dominant-negative effect of DFNA2 mutations on the WT KCNQ4 function.

Various missense mutations and deletions in the KCNQ4 gene have been identified in DFNA2 families. A genotype-phenotype correlation has been observed in which missense mutations are associated with a younger onset and severe to profound hearing loss across all frequencies, while deletions are associated with a later onset and milder hearing loss that mainly affects high frequencies [15,21,23,24,26,29,51]. To date, our understanding of the molecular basis underlying this genotype-phenotype correlation is limited. Two small deletions identified in DFNA2 patients are frameshift mutations and result in non-functional KCNQ4 subunits that are truncated before the first transmembrane domain and unable to form tetrameric channels with the WT subunits [15,23]. Because half of the KCNQ4 subunits are normal in these heterozygous DFNA2 patients, hearing loss caused by these deletions is most likely due to haploinsufficiency. On the other hand, there is increasing evidence supporting a dominant-negative mechanism for missense mutations and a small in-frame deletion, c.664_681del [19]. Electrophysiological studies have demonstrated that the missense mutations L274H, W276S, L281S, G285C, G285S, G296S and G321S cause loss of KCNQ4 currents with strong dominant-negative effects on the WT KCNQ4 currents, as does the in-frame deletion c.664_681del [7,19,29,32]. In addition, this and two previous studies have demonstrated that the missense mutations L274H, W276S, L281S, G296S and G321S also lead to a dramatic decrease in KCNQ4 surface expression with strong dominant-negative effects on the WT KCNQ4 subunit [29,32].
Most importantly, our biochemical data showed that none of these DFNA2 mutations affect KCNQ4 subunit interaction, indicating that the mutant KCNQ4 subunits are able to form heteromeric channels with the WT subunit (Fig. 2). As the vast majority of KCNQ4 channels in heterozygous DFNA2 patients would contain at least one mutant subunit (15 of the 16 in theory) [52], it is conceivable that the functional consequence of a missense mutation might be much more profound than that caused by haploinsufficiency. In addition, KCNQ4 channels are expressed in the inner ear and in the central auditory system, where other KCNQ subunits are also present [4,6,7,53]. Because KCNQ2, KCNQ3 and KCNQ5 form functional heteromeric channels with KCNQ4 subunits [7,54,55], it is likely that a dominant-negative KCNQ4 mutant may also affect the functionality of these KCNQ channels through heteromerization. Collectively, our biochemical data in this study refined our understanding of the molecular mechanism underlying profound hearing loss associated with missense KCNQ4 mutations in DFNA2 patients.

Sensorineural hearing loss in DFNA2 patients progresses over years [14]. The slow progression of DFNA2 implies that KCNQ4 dysfunction in these patients might lead to degenerative processes, as observed in many other genetic diseases affecting the nervous system [2]. Indeed, studies in mouse models have shown that genetic alterations in KCNQ4 result in progressive hearing loss that is paralleled by a selective degeneration of the hair cells and spiral ganglion neurons [13]. It has been proposed that progressive hearing loss in DFNA2 may result from an increasing load of DFNA2 mutants with age [5]. In concordance with this hypothesis, a recent association study in two different human populations has linked KCNQ4 to age-related hearing loss, which may be attributable to the elevated expression of a causative KCNQ4 splice variant during ageing [56]. The cellular capacity of protein quality control declines during ageing, which is a determining factor in the development and severity of many age-related diseases, such as neurodegenerative diseases, amyotrophic lateral sclerosis, cardiac diseases, cystic fibrosis, type II diabetes, etc. [57][58][59]. Thus, it is possible that the increasing load of DFNA2 mutants during ageing may exceed the capacity of the waning protein quality control system; as a result, the mutant KCNQ4 proteins may accumulate excessively and become cytotoxic species. Our findings in this and previous studies support this hypothesis. First, we found that DFNA2 mutants can escape the protein quality control system; they are apparently as stable as the WT channel under normal growth conditions (Fig. 1). Second, we have demonstrated that the molecular chaperone HSP90 plays a central role in regulating the cellular level of the KCNQ4 channel [34]. HSP90 is well known for its ability to promote folding and stabilization of mutant proteins, thereby buffering their functional effects. Depletion of HSP90 promotes phenotypic manifestations of genetic and epigenetic variations and has marked effects on the development of various human diseases [35,[59][60][61][62]. Conceivably, declining protein quality control, especially a decreased cellular level of HSP90, may accelerate the accumulation of cytotoxic DFNA2 mutants in ageing hair cells and neurons, which in turn leads to cell death and then hearing loss.
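As an aside on the "15 of the 16" figure quoted earlier in this Discussion: assuming WT and mutant subunits are expressed at equal levels and assemble randomly into tetramers, the probability that a tetramer contains only WT subunits is (1/2)^4 = 1/16, so the probability that it contains at least one mutant subunit is 1 - 1/16 = 15/16.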
Another important but unsolved issue in the molecular aetiology of DFNA2 was whether DFNA2 mutations disrupt the conductance of the KCNQ4 channel, in addition to their effects on KCNQ4 surface expression. Unlike the mutations G285C and G285S, the mutations L281S, G296S and G321S do not result in alterations in the signature sequence of the K+ filter in the KCNQ4 channel. Thus, it was uncertain whether these intracellularly retained KCNQ4 mutants could assemble channels with normal K+ conductance. To answer this question, whole-cell currents were recorded from cells in which the KCNQ4 surface expression had been restored by overexpressing HSP90β. We found that the conductance of the KCNQ4 channel was compromised by these three mutations, as no significant improvement in KCNQ4 current density was observed in these cells. Therefore, our data confirmed that decreased cell surface expression and impaired conductance of the KCNQ4 channel are two independent mechanisms underlying hearing loss in DFNA2, and that restoration of KCNQ4 surface expression by overexpression of HSP90 was not sufficient to rescue channel function in HEK293T cells mimicking the heterozygous condition of DFNA2 patients. These findings together laid important groundwork for future studies towards functional rescue of pathogenic KCNQ4 mutations.
Insights into the Response in Digestive Gland of Mytilus coruscus under Heat Stress Using TMT-Based Proteomics

Simple Summary
High-temperature stimulation can lead to severe stress responses and affect the normal physiological functions of animals. Generally, animals respond to the stimulation of extreme environments by regulating their physiological functions, including energy metabolism, biofactor synthesis, and degradation. The results of this study suggest that marine animals may respond to heat stress by regulating oxidative stress-related enzymes and basic metabolic levels.

Abstract
Ocean warming can cause injury and death in mussels and is believed to be one of the main reasons for extensive die-offs of mussel populations worldwide. However, the biological processes by which mussels respond to heat stress are still unclear. In this study, we conducted analyses of enzyme activities and TMT-labeling-based proteomics in the digestive gland tissue of Mytilus coruscus after exposure to high temperatures. Our results showed that the activities of superoxide dismutase, acid phosphatase, and lactate dehydrogenase, as well as the cellular content of lysozyme, were significantly changed in response to heat stress. Furthermore, many differentially expressed proteins involved in nutrient digestion and absorption, p53 and MAPK signaling, apoptosis, and energy metabolism were activated post-heat stress. These results suggest that M. coruscus can respond to heat stress through the antioxidant system, the immune system, and anaerobic respiration. Additionally, M. coruscus may use fat, leucine, and isoleucine to meet energy requirements under high temperature stress via the TCA cycle pathway. These findings provide a useful reference for further exploration of the response mechanism to heat stress in marine mollusks.

Introduction
Temperature is a crucial environmental factor that affects the survival of marine life. A suitable ambient temperature is beneficial to the growth, development, and reproduction of marine life [1][2][3]. However, if the temperature exceeds a certain range, it will have various negative effects on organisms, including decreased protein levels, decreased fertility, and inhibited immune systems [4,5]. With rising global temperatures [6], marine life, especially bivalves, are experiencing stress and death. In the warmest months of the southern summer, aquacultured New Zealand Perna canaliculus have suffered from mass die-offs [7]. In addition, due to the effects of high temperatures, mass mortality of oysters and extensive infection of M. edulis with Marteilia pararefringens are common phenomena in summer [8,9]. M. coruscus is an economically important aquatic shellfish that is widely distributed in the coastal waters of the Yellow Sea and the East China Sea, especially in the coastal areas of Zhejiang [10,11]. Among farmed Mytilus, M. coruscus has higher protein, crude fat, DHA, and trace element contents than M. galloprovincialis, another aquatic shellfish, as well as a higher economic value [12]. With increasing temperatures, the yield and performance of M. coruscus' byssus weaken, leading to shedding [13]. Additionally, the absorption and excretion rates of M. coruscus are reduced as the temperature increases [14]. Moreover, high temperatures reduce the growth and survival rate of M. coruscus' larvae [15]. Therefore, the rise in temperature is detrimental to the survival of M. coruscus. Currently, studies on the effects of high temperature on M. coruscus mainly focus on physiological functions and specific functional genes [16], but the literature currently lacks studies on its heat resistance mechanism.
Recently, we investigated the changes in transcription levels of mussels and speculated, based on RNA-sequence analysis, that mussels may respond to high temperature through ubiquitination and lysosome pathways [17]. However, it is well known that regulation of mRNA levels does not exactly correspond to regulation of protein levels, and changes in protein levels can reveal the true molecular regulatory processes in the cell. In aquatic animals, the key organ of metabolism is the digestive gland, which can respond to the stress of biological and abiotic factors through a series of physiological reactions [18][19][20][21][22]. To explore the regulation mechanism of M. coruscus in response to high-temperature stimulation, the enzyme activities related to immune stress and the proteomic profile of the digestive gland were examined in M. coruscus growing at 18 °C, 26 °C, and 33 °C. Our results provide insights into the relative importance of these proteins and related biological pathways, which provides direction for research into the heat tolerance mechanisms of M. coruscus.

Animal Culture and Heat Treatments
M. coruscus specimens were sourced from an aquatic product market located in Zhoushan City, Zhejiang Province, China. The specimens were acclimated to the natural environment for seven days in an aerated water tank with a pH of 7.5-8.0, salinity of 30‰, and temperature of 18 °C. Three temperature groups of 18 °C, 26 °C, and 33 °C were set in this study, as described in a previous study [17]. After 24 h of exposure to different temperatures, the digestive glands of M. coruscus from each temperature group were collected in triplicate, and their proteins were immediately extracted.

Determination of Biological Enzymes
Each digestive gland sample was mixed with physiological saline at a ratio of 1:9. The resulting mixture was centrifuged at 465× g for 10 min at 4 °C. The supernatant obtained after centrifugation was stored at −20 °C for subsequent experimental analysis. All procedures were conducted on ice. The activities of SOD, ACP, and LDH, as well as the LZM content [23,24], were determined in the supernatant using assay kits according to the reagent guidelines. All the reagents used were from Nanjing Jiancheng Bioengineering Institute, Nanjing, China.

Labeling and Sequencing of Proteins
Each experimental group was composed of three replicates, with each replicate consisting of three mussels. The digestive gland tissue was ground into powder using liquid nitrogen, and proteins were extracted following previously established methods [25]. The protein extract was then centrifuged to obtain the protein sample, which was stored at −80 °C. We added 0.5 M TEAB to 100 µg of protein sample to dilute it and reduce the urea concentration to less than 2 M. Trypsin was added to the sample at a ratio of 1:20 (enzyme-to-protein), followed by mixing and centrifugation. The resulting mixture was incubated at 37 °C for 4 h, then desalted and dried. The peptides were labeled with tandem mass tag (TMT) reagents (BGI Genomics Co., Ltd., Shen Zhen, China) per the reagent guidelines. The peptides were subsequently mixed, desalted, and dried. Subsequently, the labeled peptide samples were subjected to liquid chromatography using a Shimadzu LC-20AB system. The dried peptide samples were then separated by elution with mobile phase A and mobile phase B.
The components of the two mobile phases were 5% acetonitrile and 95% acetonitrile, respectively, and the pH was 9.8. The components were collected at a wavelength of 214 nm and freeze-dried. The peptides were suspended in mobile phase A, and the resulting solution was centrifuged at 2000× g for 10 min. The supernatant obtained after centrifugation was separated using an Easy-nLC 1200 system (Thermo Fisher Scientific, San Jose, CA, USA). The sample was loaded onto a C1 self-packed column with mobile phase B in the column, and the liquid chromatography system was linked to the mass spectrometer. Following separation via liquid chromatography, the peptides were ionized via nanoelectrospray ionization (nano-ESI) and detected using an Orbitrap Exploris 480 tandem mass spectrometer in DDA mode (Thermo Fisher Scientific, San Jose, CA, USA). The parameters of the instrument were as follows: the ion source voltage was set to 2.1 V, while the MS1 and MS2 scans had mass ranges of 350-1600 m/z and 100 m/z, respectively. The MS1 and MS2 scans had resolutions of 60,000 and 15,000, respectively. The selection criteria for ion fragmentation in MS2 were an ion charge of 2+ to 7+ and a parent ion peak intensity greater than 50,000. The ion fragmentation method was set to HCD, and fragment ions were detected using the Q-Exactive Orbitrap. Additionally, dynamic exclusion was set to 30 s, and the AGC values were set to 1E6 for MS1 and 1E5 for MS2.

Expression and Function Analysis
The software program IQuant [26] was used to analyze the labeled peptides with isobaric tags and to identify confident proteins based on the "simple principle" (the parsimony principle). A protein false discovery rate (FDR) of 1% was estimated at the protein level to control the rate of false positive results [27]. If the fold change was greater than 1.2 and the p-value less than 0.05, the protein was considered significantly upregulated. Conversely, if the fold change was less than 0.833333 and the p-value less than 0.05, the protein was considered significantly downregulated. The software program Proteome Discoverer (https://thermo-proteome-discoverer.software.informer.com/, accessed on 20 February 2022) was used to search MS/MS spectra against a database of transcriptome-based protein sequences [17] for the identification of parent proteins. The highest score for a specific peptide mass was considered the best match to that predicted in the database. Parameters for protein searching were set considering tryptic digestion with two missed cleavages, carbamidomethylation of cysteines as a fixed modification, and oxidation of methionines and protein N-terminal acetylation as variable modifications. Peptide spectral matches were validated based on q-values at a 1% false discovery rate (FDR), which indicates the probability of identifying false positives in the dataset. Next, the identified protein IDs were converted to UniProt IDs and mapped to GO IDs using the software program InterProScan (v.5.14-53.0, http://www.ebi.ac.uk/interpro/, accessed on 21 February 2022) for annotation of protein function. KAAS (v.2.0, http://www.genome.jp/kaas-bin/kaas_main, accessed on 21 February 2022) was used to identify KEGG pathways significantly enriched in the differentially expressed proteins to explore the pathways implicated in the response to heat stress.
In addition, functional enrichment analyses, including GO and KEGG, were performed using Goatools (https://github.com/tanghaibao/goatools, accessed on 21 February 2022) and KOBAS (http://kobas.cbi.pku.edu.cn, accessed on 21 February 2022) to identify DEGs that were significantly enriched in biological pathways at a Bonferroni-corrected p-value ≤ 0.05 compared with the whole-transcriptome background. The TMT-based proteomics datasets have been deposited in ProteomeXchange with identifier PXD035618.

Quantitative Real-Time PCR
A total of five genes were selected for expression quantification analysis by quantitative real-time PCR (qPCR) to verify the TMT-based proteomics results. For each sample, 1-2 µg total RNA was used in cDNA synthesis using the GoScript™ Reverse Transcription System (Promega, Clontech, Madison, WI, USA). Specific primers were designed (Table A1) based on mRNA sequences and synthesized by Sangon Biotech Co., Ltd., Shanghai, China. qPCR was performed using GoTaq® qPCR and qPCR Systems (Promega, Clontech, Madison, WI, USA). The reaction was carried out in a total volume of 20 µL.

Statistical Analysis
In order to evaluate the statistical significance of the differences observed between the control and heat treatment groups, analysis of variance was carried out using the software program IBM SPSS 22.0 at a significance level of 0.05. All data presented are means with standard deviation calculated from the triplicated samples from each group.
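The differential-expression rule stated in the Expression and Function Analysis subsection above (fold change > 1.2 with p < 0.05 for upregulation; fold change < 0.833333 with p < 0.05 for downregulation) can be summarized in a few lines of Python; the protein names, fold changes, and p-values below are hypothetical and only illustrate the thresholds.

# Sketch of the DEP classification rule described in the Methods:
#   up-regulated   : fold change > 1.2      and p < 0.05
#   down-regulated : fold change < 0.833333 and p < 0.05
# Protein IDs, fold changes and p-values are hypothetical placeholders.
UP_FC, DOWN_FC, P_CUT = 1.2, 1 / 1.2, 0.05

def classify_dep(fold_change, p_value):
    if p_value < P_CUT and fold_change > UP_FC:
        return "up"
    if p_value < P_CUT and fold_change < DOWN_FC:
        return "down"
    return "not significant"

example_proteins = {
    "protein_A": (1.45, 0.010),  # (fold change vs 18 °C control, p-value)
    "protein_B": (0.70, 0.003),
    "protein_C": (1.10, 0.200),
}
for name, (fc, p) in example_proteins.items():
    print(name, classify_dep(fc, p))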
Activity of Cellular Enzyme
The results of the study indicated that the activities of SOD (Figure 1A) and ACP (Figure 1B) were significantly higher in the two high-temperature treatment groups compared to the control group. Additionally, the LDH activity in the 26 °C treatment group was significantly higher than that in the control group, but the opposite was observed in the 33 °C treatment group (Figure 1C). Moreover, the cellular content of LZM was significantly increased in the two treatment groups compared to that of the control group (Figure 1D). These results suggest that the antioxidant and immune systems respond to heat stress. Additionally, the opposite responses of LDH activity in the two high-temperature treatment groups suggest that anaerobic respiration may be affected by different temperature conditions.
Figure 1. Activity of SOD (U/mgprot, (A)), ACP (U/gprot, (B)), LDH (U/L, (C)) and LZM (U/mL, (D)) in M. coruscus exposed to heat stress. * indicates significant differences.

Differential Protein Expression Profiles
In total, 1,101,958 spectra were generated, and 67,221 spectra found matches in the database (Table A2). In all, 7559 proteins were mapped in our proteomic analysis; among them, collagen type VI alpha, glucuronosyltransferase, protein transport protein SEC61 subunit gamma, and adenylyltransferase were highly expressed under normal conditions. Compared to the control group, the high-temperature conditions resulted in the identification of 1652 to 1878 differentially expressed proteins (DEPs), which accounted for 21.8% to 24.8% of all the detected proteins (Table A3). The expression pattern and number of DEPs were similar in both the 26 °C and 33 °C treated groups. Compared with the control group, a total of 897 DEPs were identified and shared between the two groups (Figure 2A,B). Specifically, the 26 °C treated group had 763 significantly upregulated DEPs and 889 significantly downregulated DEPs, while the 33 °C treated group had 1051 significantly upregulated DEPs and 827 significantly downregulated DEPs (Figure 2C,D). Of these, the DEPs in both the 26 °C and 33 °C treated groups were enriched in similar KEGG pathways, including protein digestion and absorption, p53 signaling pathway, longevity regulating pathway, lysosome, fatty acid metabolism, DNA replication, fat digestion and absorption, and apoptosis (Figure 3A,B).
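Before turning to the functional categories, a quick sanity check on the proportions quoted above, together with an illustration of the Venn-style overlap behind Figure 2A,B, is sketched below in Python; the protein IDs are made up, and only the DEP totals (1652, 1878) and the 7559 detected proteins come from the text.

# Recompute the quoted DEP proportions and illustrate the overlap calculation.
total_proteins = 7559
for n_dep in (1652, 1878):
    print(f"{n_dep} DEPs = {100 * n_dep / total_proteins:.1f}% of detected proteins")

# Shared DEPs are simply the intersection of the two comparison-specific DEP sets.
deps_26_vs_18 = {"P001", "P002", "P003", "P004"}  # hypothetical protein IDs
deps_33_vs_18 = {"P002", "P003", "P005"}
print("shared DEPs:", len(deps_26_vs_18 & deps_33_vs_18))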
The differentially expressed proteins (DEPs) in the 26 °C treatment group were enriched in the gene ontology (GO) categories. In terms of molecular function (MF), the majority of proteins were involved in "binding", followed by those related to "catalytic activity". In terms of cellular component (CC), the largest group of proteins was involved in the "membrane". With respect to biological process (BP), the majority of proteins participated in "metabolic processes" (Figure 3C). Furthermore, the expression patterns of DEPs in the 26 °C treatment group were similar to those in the 33 °C treatment group (Figure 3D).

Nutrients Digestion and Absorption
High-temperature treatment significantly changed the expression of 28 proteins, and the expression patterns of these DEPs were similar in both heat-treatment groups (Figure 4). Among these DEPs, 9 were involved in protein digestion and absorption, 5 were involved in fat digestion and absorption, and 3 were involved in carbohydrate digestion and absorption. In addition, 6 proteins secreted by the pancreas, including muscarinic acetylcholine receptor M3 (CHRM3), cholecystokinin A receptor (CCKAR), carbonic anhydrase 2 (CA2), secretory phospholipase A2 (PLA2G), Ras-related C3 botulinum toxin substrate 1 (RAC1), and Ras-related protein Rab-3D (RAB3D), were significantly changed under heat conditions.
Specific Regulation of Metabolism Pathways
After treatment with high temperature, acetyl-CoA synthetase (ACS) was upregulated in both the 26 °C and 33 °C treatment groups. Moreover, several enzymes in the citric acid (TCA) cycle, such as 2-oxoglutarate dehydrogenase E1 component (OGDH), malate dehydrogenase (MDH), and succinate dehydrogenase (SDH), were similarly regulated in response to acclimation in both the 26 °C and 33 °C groups. Notably, no enzyme in the TCA pathway was downregulated under either treatment (Figure 4). In the fatty acid degradation pathway, CPT-1, ACO, and EHHADH were upregulated in both the 26 °C and 33 °C treatment groups (Figure 4). Overall, several enzymes involved in amino acid metabolism were differentially regulated in response to the two heat treatments. ACAA, which catalyzes acetyl-CoA production by isoleucine degradation, was upregulated in the heat treatments, while ACAT, which catalyzes the pathway from leucine to acetyl-CoA, was downregulated in the heat treatment groups (Figure 4). ALT was increased in the heat treatment groups along the pathway from alanine, aspartate, and glutamate metabolism to produce fumarate, which is a substrate for reactions in the TCA cycle (Figure 4). These findings suggest that energy metabolism-related pathway proteins respond to heat stress in M. coruscus.

Signaling Pathway of Stress Response to Heat Stress
The proteomic data also showed significant regulation of proteins associated with environmental stress. When compared to the control, caspase 3 was significantly upregulated in the p53 signaling pathway in the 26 °C group and significantly downregulated in the 33 °C group (Figure 4). In the MAPK signaling pathway, p38 showed no significant change in the 33 °C treatment group, but was downregulated in the 26 °C treatment group (Figure 4). Additionally, apoptosis-related proteins, such as CHEK2, Bcl-xL, and caspase family proteins, were significantly changed after treatment with high temperature.
Validation of Proteomic Results Using Quantitative Real-Time PCR
To confirm the expression patterns of differentially expressed proteins (DEPs) following heat stress in M. coruscus, changes in transcript levels were evaluated. Five heat shock protein (HSP) genes, including HSP1A, HSP16.1, HSPBP1, HSP110, and CRYAB, were selected, and their expression profiles were measured via qPCR analysis. The mRNA expression trends of all examined genes were consistent with the protein levels observed in the proteomic analysis (Figure 5). In these comparisons, β-actin was set as the reference gene, significance was assessed using a t-test at the significance level of 0.05, and * in Figure 5 indicates significant differences.

Discussion
The normal life activities of mussels are vulnerable to external environmental stressors, including hypoxia, sea warming, and acidification [28][29][30]. Exposure to high-temperature conditions significantly downregulates the organism's energy reserves and weakens its immune function [31,32]. As the molecular response mechanism related to injury and death in mussels after high-temperature stress remains unclear, we examined the responses of M. coruscus to high-temperature exposure to investigate the molecular and cellular mechanisms involved. This study found that high temperatures regulate the antioxidant system, immune system, and anaerobic respiration of M. coruscus. Additionally, proteomic analysis showed that fatty acid metabolism, amino acid metabolism, the p53 signaling pathway, and the MAPK signaling pathway are related to the high-temperature response of M. coruscus (Figure 6).
Antioxidant and Immune Function under High Temperature
Sharp changes in water temperature can have significant effects on aquatic organisms, with excessive reactive oxygen species (ROS) and free radicals being the most common phenomena in heat stress responses [33]. Thermal stress directly impacts the metabolism of organisms, leading to metabolic disorders and ROS accumulation [34,35]. In this study, the activity of the antioxidant enzyme SOD was significantly increased after heat stress in the two heat-treated groups, indicating that the enzyme responds to ROS production during heat stress in the digestive gland tissues of M. coruscus. Similar observations of increased antioxidant enzymatic activities have been reported in mussels after thermal exposure [36,37]. The effective induction of antioxidant enzymes in tissues may contribute to clearing accumulated peroxides under heat stress. For species with weak antioxidant capacity, the activities of SOD will gradually decline as the number of free radicals increases, resulting in a greater degree of oxidative damage to the body [38]. In this study, SOD activities increased gradually with increasing temperature, indicating that M. coruscus responds to heat stress by increasing its antioxidant capacity. Ambient stress, such as heat shock, can decrease the oxygen concentration in organisms [39]. As a result of oxygen limitation, animals typically employ anaerobic glycolysis to meet their energy demand [40,41]. This upregulation of anaerobic pathways is typically indicated by an increase in the activity of the metabolic enzyme LDH [42]. However, the present study indicates that the activity of LDH decreased in M. coruscus when exposed to extreme heat, suggesting a decrease in anaerobic glycolysis function. This could be a result of M. coruscus responding to the extreme temperature conditions. ACP is an important phosphatase in marine organisms, participating in the degradation of foreign proteins, carbohydrates, and lipids [43]; taking part in the dissolution of dead cells; and acting as an ideal stress indicator in biological systems [44]. In this study, the activities of ACP increased significantly in the digestive gland, which may indicate that M. coruscus responds to heat stress via the hydrolysis of high-energy phosphate bonds to liberate phosphate ions to combat stressful conditions or high metabolic rates [45]. In this study, the cellular content of LZM was found to have significantly increased. This could be a result of the activation of the immune system, which is an additional stress response. The present study showed that LZM significantly increased at both 26 °C and 33 °C.
More relevant literature further showed that there was a tendency for the content of LZM to decrease with increasing temperature or increasing treatment time, with a trend of first increasing and then decreasing [46,47].

Effects of High Temperature on Nutrient Digestion
The bivalve digestive gland, also called the hepatopancreas, is composed of digestive cells [48]. A recent analysis of physiological indicators revealed that the digestive enzyme activities of M. coruscus are affected by temperature [15]. In the present study, we observed that the biological pathways related to nutrient digestion and absorption proteins were significantly changed under heat conditions. Notably, most of the proteins involved in digestion were found to be downregulated under heat stress. Previous studies have indicated that the activity of amylase, maltase, trypsin, and pancreatic lipase can be enhanced in response to a rise in temperature [49][50][51]. These results suggest that the digestive function of the M. coruscus hepatopancreas may be weakened under heat stress.

Effects of High Temperature on Signaling Pathway of Stress
Heat stress can activate multiple signaling pathways and downstream responses to cope with the effects of heat stress on the body. In this study, we observed that proteins involved in the p53 signaling pathway, MAPK signaling pathway, and apoptosis pathway were significantly changed in M. coruscus exposed to high temperatures. The p53 pathway, which is critical for cell apoptosis, was significantly changed, indicating that high temperatures might harm M. coruscus cells. Additionally, the MAPK signaling pathway was also significantly changed. Mitogen-activated protein kinase (MAPK) is a group of conserved protein kinases that can transmit extracellular signals into the cell, resulting in various cellular responses such as cell proliferation, differentiation, and apoptosis [52]. The MAPK family includes the ERK, JNK, and p38 MAPK signaling pathways. We observed a reduction in p38 in the p38 MAPK signaling pathway in response to temperature stimulation under heat stress. This is consistent with previous studies that have shown that reducing p38 can eliminate damaged cells in response to stimuli [53,54]. Overall, M. coruscus may respond to heat stress by regulating apoptosis.

Effects of High Temperatures on Metabolism
The primary challenge for an organism under harsh environmental conditions is to optimize its energy requirements to maintain basic physiological processes. Acetyl-CoA synthetase was found to be upregulated in M. coruscus at 26 °C and 33 °C. In high-temperature environments, M. coruscus may activate the TCA cycle to increase ATP production by upregulating acetyl-CoA synthetase. Furthermore, we observed that key enzymes involved in fatty acid metabolism and isoleucine metabolism were upregulated after exposure to high temperatures, indicating that these pathways may aid in adaptation to heat stress. Through these modifications, the supplementation of acetyl-CoA from D-glyceraldehyde-3P and long-chain lipids was enhanced [55]. It is noteworthy that many proteins involved in the TCA cycle were upregulated in the heat-treated groups, indicating that the energy demand of M. coruscus' metabolism increased with increasing temperature within a certain temperature range. M. coruscus may respond to environmental stress by upregulating protein factors in the TCA cycle [56], while the enzyme activities in the glycolytic and TCA cycle pathways may be increased to enhance adaptation to environmental stress [57].
Glycolysis-related protein factors have been found to increase significantly under air exposure stress in Crassostrea gigas [23]. When encountering variable temperatures, fatty acid metabolism provides advantages that facilitate resistance to such variation. In this study, the key proteins of fatty acid and amino acid metabolism in M. coruscus were upregulated, and the results suggest that enhanced fatty acid metabolism is conducive to adaptation to heat stress. We also observed that alanine production pathways in amino acid metabolism were upregulated, and M. coruscus may enhance environmental adaptation by feeding alanine into the TCA cycle.

Conclusions
To investigate the response mechanism of M. coruscus to heat stress, we conducted an analysis of enzyme activity and TMT quantification in the digestive gland tissue of M. coruscus after exposure to high temperatures. Changes in four enzymes showed that M. coruscus can respond to damage caused by high-temperature stress through the antioxidant system, immune system, and anaerobic respiration. Proteomic analysis and HSP expression levels showed that M. coruscus can maintain normal cell morphology and function through the process of apoptosis mediated by the p53 and MAPK signaling pathways. Additionally, M. coruscus may use fat and amino acids to meet energy requirements under high-temperature stress via the TCA cycle pathway. These results will provide a useful reference for further understanding of the response mechanisms to heat stress in marine invertebrates such as mollusks.