Columns: text (string, lengths 330 to 20.7k); summary (string, lengths 3 to 5.31k)
the total power is around 31%. Consequently, a novel series EDFA structure is introduced using a backward pumping distribution technique, as depicted in Fig. 4. A single 980 nm LD is utilized to pump the EDFs in both stages according to the distribution ratio. An optical fiber coupler operating at 980 nm with a splitting ratio of 70/30 is spliced to the output end of the LD. The 70% branch of the LD power, around 224 mW, is distributed to pump the HB-EDF in the first stage, while the remaining power, around 96 mW, is delivered through the 30% coupler output to pump the Zr-EDF in the second stage. Fig. 5 compares the performance of the series EDFA pumped with one LD and with two LDs, for input signal powers of −30 dBm and −10 dBm. The gain and noise figure characteristics obtained with one LD are similar to those of the EDFA pumped with two LDs for both input signal powers. However, a very small gain decrement is observed for the EDFA pumped with one LD, owing to the coupling loss in the coupler. At an input signal power of −30 dBm, the average gain decrements are 0.3 dB and 0.6 dB in the C-band and L-band regions, respectively. The gain decrement is higher in the L-band region owing to the coupling ratio: the 30% branch delivers around 96 mW, which is lower than the optimum power at LD2 for the EDFA pumped with two LDs. At an input signal power of −10 dBm, the average gain decrement is 0.5 dB within the gain flatness region. Overall, the proposed EDFA using the backward pumping distribution technique demonstrates efficient performance as well as a cost reduction, since only one LD is utilized to pump two stages. The output laser spectrum at 1550 nm when the input signal power is −10 dBm is shown in Fig. 6. The output peak power is around 4.8 dBm, which is almost the same at all wavelengths in the flat-gain region. In summary, an efficient wideband and flat-gain EDFA is experimentally investigated and developed utilizing dual series stages. The proposed amplifier comprises a 0.5 m long HB-EDF and a 4 m long Zr-EDF as a hybrid active fiber to provide amplification in the C- and L-telecommunication bands, respectively. The performance of the proposed EDFA is examined in both forward and backward pumping schemes, and the backward pumping amplifier performs better than the forward pumping amplifier. The optimum laser diode powers to achieve higher flat gain and lower noise figure are 220 mW and 100 mW for LD1 and LD2, respectively. At these optimum powers, the backward pumping amplifier attains a flat gain of 14.6 dB with a maximum gain variation of ±1.8 dB throughout a wide bandwidth of 70 nm, from 1530 nm to 1600 nm. Within the gain flatness band, the noise figure varies from 4.3 dB to 7.9 dB. Using the backward pumping distribution technique, the proposed amplifier demonstrates not only efficient performance but also a cost reduction, since only one laser diode is utilized to pump two stages.
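The single-LD pumping scheme above is essentially a power-budget exercise: one 980 nm LD feeding a 70/30 coupler. A minimal sketch of that arithmetic follows; the 320 mW total LD power is an assumption inferred from the reported 224 mW and 96 mW branch powers, and coupler insertion loss is ignored, which is why the text reports a small gain penalty in practice.

```python
# Minimal sketch (not the paper's code): splitting one 980 nm LD between the
# two amplifier stages with a 70/30 fused coupler, ignoring coupler loss.

def split_pump(total_mw: float, ratio_stage1: float = 0.70) -> tuple[float, float]:
    """Return (stage-1 power, stage-2 power) for a given splitting ratio."""
    return total_mw * ratio_stage1, total_mw * (1.0 - ratio_stage1)

if __name__ == "__main__":
    p1, p2 = split_pump(320.0)                # ~320 mW assumed from 224 / 0.7
    print(f"HB-EDF (stage 1): {p1:.0f} mW")   # ~224 mW vs 220 mW optimum for LD1
    print(f"Zr-EDF (stage 2): {p2:.0f} mW")   # ~96 mW vs 100 mW optimum for LD2
```

The slightly low stage-2 power (96 mW vs the 100 mW two-LD optimum) is consistent with the larger gain decrement reported for the L-band.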
A novel wideband and flat-gain erbium-doped fiber amplifier (EDFA) is proposed and demonstrated by employing a recently fabricated hafnia-bismuth-erbium doped fiber (HB-EDF) and a zirconia-erbium doped fiber (Zr-EDF) as a hybrid active fiber. The performance of the proposed EDFA is examined in both forward and backward pumping schemes, using a 0.5 m long HB-EDF and a 4 m long Zr-EDF in a series structure to achieve wideband amplification covering the C- and L-telecommunication bands, respectively. At the optimum laser diode powers, the backward pumping amplifier attains a flat gain of 14.6 dB with a maximum gain variation of ±1.8 dB throughout a wide bandwidth of 70 nm, from 1530 nm to 1600 nm. The noise figure varies from 4.3 dB to 7.9 dB within the gain flatness band. Using the backward pumping distribution technique, the proposed amplifier demonstrates not only efficient performance but also a cost reduction, since only one laser diode is utilized to pump two stages.
This paper presents a generic framework to tackle the crucial class mismatch problem in unsupervised domain adaptation (UDA) for multi-class distributions. Previous adversarial learning methods condition domain alignment only on pseudo labels, but noisy and inaccurate pseudo labels may perturb the multi-class distribution embedded in probabilistic predictions, and hence bring insufficient alleviation to the latent mismatch problem. Compared with pseudo labels, class prototypes are more accurate and reliable, since they summarize over all instances and are able to represent the inherent semantic distribution shared across domains. Therefore, we propose a novel Prototype-Assisted Adversarial Learning (PAAL) scheme, which incorporates instance probabilistic predictions and class prototypes together to provide reliable indicators for adversarial domain alignment. With the PAAL scheme, we align both the instance feature representations and the class prototype representations to alleviate the mismatch among semantically different classes. We also exploit the class prototypes as a proxy to minimize the within-class variance in the target domain, mitigating the mismatch among semantically similar classes. With these novelties, we constitute a Prototype-Assisted Conditional Domain Adaptation (PACDA) framework which tackles the class mismatch problem well. We demonstrate the good performance and generalization ability of the PAAL scheme and the PACDA framework on two UDA tasks, i.e., object recognition and synthetic-to-real semantic segmentation.
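A minimal PyTorch sketch of the idea described above: class prototypes are formed as probability-weighted feature means over all instances, and the domain discriminator is conditioned on the instance feature, its probabilistic prediction, and the corresponding soft prototype. Layer sizes, names, and the exact conditioning operator are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of prototype-assisted conditional adversarial alignment (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

def class_prototypes(features, probs):
    """Probability-weighted class means: (C, D) prototypes from (N, D) features
    and (N, C) softmax predictions, summarising all instances per class."""
    weights = probs / (probs.sum(dim=0, keepdim=True) + 1e-8)   # (N, C)
    return weights.t() @ features                                # (C, D)

class ConditionalDiscriminator(nn.Module):
    """Domain discriminator fed with instance features conditioned on both the
    probabilistic prediction and its expected (soft) class prototype."""
    def __init__(self, feat_dim, num_classes, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + num_classes + feat_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, features, probs, prototypes):
        expected_proto = probs @ prototypes              # (N, D) soft prototype
        x = torch.cat([features, probs, expected_proto], dim=1)
        return self.net(x)                               # domain logit per instance

# Usage sketch: f_s/f_t are source/target features, p_s/p_t their predictions.
# protos = class_prototypes(torch.cat([f_s, f_t]), torch.cat([p_s, p_t]).detach())
# d_loss = F.binary_cross_entropy_with_logits(disc(f_s, p_s.detach(), protos),
#                                             torch.ones(len(f_s), 1)) + \
#          F.binary_cross_entropy_with_logits(disc(f_t, p_t.detach(), protos),
#                                             torch.zeros(len(f_t), 1))
```

Conditioning on prototypes rather than on hard pseudo labels is what the paper argues makes the alignment signal more robust to noisy predictions.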
We propose a reliable conditional adversarial learning scheme along with a simple, generic yet effective framework for UDA tasks.
Understanding how people represent categories is a core problem in cognitive science, with the flexibility of human learning remaining a gold standard to which modern artificial intelligence and machine learning aspire. Decades of psychological research have yielded a variety of formal theories of categories, yet validating these theories with naturalistic stimuli remains a challenge. The problem is that human category representations cannot be directly observed, and running informative experiments with naturalistic stimuli such as images requires having a workable representation of these stimuli. Deep neural networks have recently been successful in a range of computer vision tasks and provide a way to represent the features of images. In this paper, we introduce a method for estimating the structure of human categories that draws on ideas from both cognitive science and machine learning, blending human-based algorithms with state-of-the-art deep representation learners. We provide qualitative and quantitative results as a proof of concept for the feasibility of the method. Samples drawn from human distributions rival the quality of current state-of-the-art generative models and outperform alternative methods for estimating the structure of human categories.
using deep neural networks and clever algorithms to capture human mental visual concepts
that a more sensitive process requires stronger shielding from distraction, as evident in a stronger inhibition of the ventral attention network. Considering all evidence, we propose that alpha power increases in right-parietal cortex reflect a gradual response corresponding to the strength of task-focused attention or task shielding, rather than merely indicating the direction of attention. This notion is supported by the finding that the AU task showed significantly higher task-related alpha power than the FS task even in the HIP condition, where no relevant external information was available and attention could only be focused on internal processes in both tasks. Moreover, alpha power increases cannot simply be attributed to higher task load, since the FS task involved a higher task load due to the letter-based processing of the stimulus. The AU task hence may represent a more sensitive process that requires a stronger focus of attention. Specifically, the AU task is known to involve different strategies such as the retrieval of old uses from memory, or imagining disassembling the object in order to use or recombine parts of it. These imaginative processes include the generation and manipulation of mental images of possible uses. The generation of ideas in the form of mental images can be conceived as a very sensitive cognitive process that may easily be interfered with by irrelevant sensory stimulation coming from the visual stream, and thus benefits from task-focused attention. In contrast, the generation of original sentences in the FS task probably did not rely on figurative representations, but rather on the retrieval of relevant semantic information. Along these lines, alpha power increases in right parietal cortex could also be considered an indicator of the depth or elaborateness of an ongoing process of mental imagination, and thus represent a valid indicator of a cognitive process specific to creative cognition. This may not only explain alpha effects between tasks involving higher and lower amounts of divergent thinking, but also apply to individual differences in the ability to become immersed in a process of imagination. Effective executive processes are thought to be highly relevant for creative thought, and this may particularly involve the ability to keep attention focused on demanding internal processes such as idea generation and imagination.
This study investigated the functional significance of EEG alpha power increases, a finding that is consistently observed in various memory tasks and specifically during divergent thinking. It was previously shown that alpha power is increased when tasks are performed in mind, e.g., when bottom-up processing is prevented. This study aimed to examine the effect of task-immanent differences in bottom-up processing demands by comparing two divergent thinking tasks, one intrinsically relying on bottom-up processing (sensory-intake task) and one that is not (sensory-independence task). In both tasks, stimuli were masked in half of the trials to establish conditions of higher and lower internal processing demands. In line with the hypotheses, internal processing affected performance and led to increases in alpha power only in the sensory-intake task, whereas the sensory-independence task showed high levels of task-related alpha power in both conditions. Interestingly, conditions involving focused internal attention showed a clear lateralization, with higher alpha power in parietal regions of the right hemisphere. Considering evidence from fMRI studies, right-parietal alpha power increases may correspond to a deactivation of the right temporoparietal junction, reflecting an inhibition of the ventral attention network. Inhibition of this region is thought to prevent reorienting to irrelevant stimulation during goal-driven, top-down behavior, which may serve the executive function of task shielding during demanding cognitive tasks such as idea generation and mental imagery.
There is a growing interest in the mechanisms underlying the 'two-heads-better-than-one' (2HBT1) effect, which refers to the ability of dyads to make more accurate decisions than either of their members. One study, using a perceptual task in which two observers had to detect a visual target, showed that two heads become better than one by sharing their 'confidence', thus allowing them to identify who is more likely to be correct in a given situation. Sharing of confidence as a strategy for combining individual opinions into a group decision has also been established in non-perceptual domains. This tendency to evaluate the reliability of information by the confidence with which it is expressed has been termed the 'confidence heuristic'. A recent study has shown that a simple algorithm based on the confidence heuristic – always opt for the opinion made with higher confidence – can yield a 2HBT1 effect in the absence of any interaction between individuals. Intrigued by this finding, we tested whether this algorithm could in practice replace interaction in collective decision-making. Importantly, such a formula for collective choice – if effective – would not be susceptible to the egocentric biases that may impair interaction, and could readily be used by decision makers, such as jurors, medical doctors or financial investors, who have to combine different opinions in limited time. Indeed, the implementation of heuristics inspired by individual decision-making has proved very useful within professional contexts. Building on Bahrami et al.'s study, Koriat asked isolated observers to estimate the degree of confidence in their perceptual decisions. Participants, all of whom had received the same sequence of stimuli, were afterwards paired into virtual dyads so that they matched each other in terms of their 'reliability'. To remove individual biases in confidence, their confidence estimates were normalised, so that they shared the same mean and standard deviation, before being submitted to the Maximum Confidence Slating (MCS) algorithm, which selected the decision of the more confident member of the virtual dyad on every trial. While circumventing interaction, the MCS algorithm yielded a robust 2HBT1 effect. Interestingly, isolated observers' confidence estimates are negatively correlated with their reaction times when responses are given in the absence of speed pressure, raising the possibility that a Minimum Reaction Time Slating (MRTS) algorithm may be sufficient to yield a 2HBT1 effect. In this study, we tested the efficacy of the MCS and MRTS algorithms without matching dyad members in terms of their reliability, and compared the responses advised by the algorithms with those reached by the dyad members through interaction. In particular, we addressed three questions. First, does the success of the MCS and MRTS algorithms depend on the similarity of dyad members' reliabilities? Bahrami et al. found that the success of interactively sharing confidence was a linear function of the similarity of dyad members' reliabilities. For similar dyad members, two heads were better than one. However, for dissimilar dyad members, two heads were worse than the better one. Interestingly, Bahrami et al.
found that these discrepant patterns of collective performance could be explained by a computational model in which confidence was defined as a function of the reliability of the underlying perceptual decision. We predicted that the efficacy of the MCS and the MRTS algorithms would also depend on the similarity of dyad members' reliabilities. Second, do the algorithms fare just as well as interacting dyad members? People vary in their ability to estimate the reliability of their own decisions; this ability is typically referred to as 'metacognitive' ability and, in social contexts, determines the credibility of people's confidence estimates. While the algorithms are prone to error when people misestimate the reliability of their own decisions, interacting individuals may take such misestimates into account. We predicted that interacting dyad members would take into account the credibility of each other's confidence estimates when making their joint decisions, and that interaction would be relatively more beneficial than the algorithms for dissimilar dyad members; they have more to lose from following the more confident but less competent of the two. Third, what is the effect of normalising confidence estimates before selecting the decision made with higher confidence? Koriat reported that the MCS algorithm performed equally well when using raw and normalised confidence estimates as its input. However, this analysis was limited to dyad members of nearly equal reliability. Even though people vary in the ability to evaluate the reliability of their own decisions, confidence estimates are rarely uninformative about underlying performance. As a consequence, normalising confidence estimates may remove statistical moments that reflect actual differences in underlying performance. We therefore predicted that submitting normalised confidence estimates to the MCS algorithm would be relatively more costly for dissimilar dyad members. To test our predictions, we analysed data from an experiment in which dyad members estimated their confidence in individual decisions on every trial, but were also required to make a joint decision whenever their individual decisions conflicted. We used data from two experimental conditions: a 'non-verbal' condition in which dyad members made their joint decisions only having access to each other's confidence estimates, and a 'verbal' condition in which dyad members also had the opportunity to verbally negotiate their joint decisions. In total, fifty-eight participants took part in the non-verbal and the verbal conditions. All participants were healthy adult males with normal or corrected-to-normal vision. The members of each dyad knew each other before taking part in the experiment. We describe the key experimental details below. Dyad members sat at right angles to each other in a dark room, each with their own screen and response device. For each trial, dyad members viewed two brief intervals, on which six contrast gratings were
In a range of contexts, individuals arrive at collective decisions by sharing confidence in their judgements. We found that the success of these heuristics depends on how similar individuals are in terms of the reliability of their judgements and, more importantly, that for dissimilar individuals such heuristics are dramatically inferior to interaction.
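The two heuristics described above are simple enough to sketch directly. The following is an illustrative implementation with assumed array-based interfaces, not the original analysis scripts: Maximum Confidence Slating (MCS) picks the decision of the more confident dyad member on each trial, optionally after per-observer normalisation of confidence (same mean and SD), while Minimum Reaction Time Slating (MRTS) picks the faster member's decision.

```python
# Sketch of the MCS and MRTS slating heuristics (assumed interfaces).
import numpy as np

def zscore(x):
    """Normalise one observer's confidence ratings to mean 0 and SD 1."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=0)

def mcs(dec_a, dec_b, conf_a, conf_b, normalise=True):
    """Maximum Confidence Slating: per trial, keep the more confident decision."""
    if normalise:
        conf_a, conf_b = zscore(conf_a), zscore(conf_b)
    return np.where(np.asarray(conf_a) >= np.asarray(conf_b),
                    np.asarray(dec_a), np.asarray(dec_b))

def mrts(dec_a, dec_b, rt_a, rt_b):
    """Minimum Reaction Time Slating: per trial, keep the faster decision."""
    return np.where(np.asarray(rt_a) <= np.asarray(rt_b),
                    np.asarray(dec_a), np.asarray(dec_b))

# Example usage with per-trial decisions coded as +1 / -1 (interval choice):
# dyad_mcs  = mcs(dec_a, dec_b, conf_a, conf_b)    # confidence heuristic
# dyad_mrts = mrts(dec_a, dec_b, rt_a, rt_b)       # reaction-time proxy
```

Whether normalisation is applied before slating is exactly the third question the study examines, since z-scoring can strip out confidence differences that track genuine differences in performance.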
around 50% of the variance, R2 = 49.8, F(1,28) = 26.74, p < .001. For similar dyad members, the MCS algorithm yielded a collective benefit. However, for dissimilar dyad members, the MCS algorithm yielded a collective loss. The similarity of dyad members' sensitivities also significantly predicted the collective outcome of the MRTS algorithm, b = .59, t = 4.69, p < .001, and explained around 45% of the variance, R2 = 44.9, F(1,28) = 22.02, p < .001. However, the MRTS algorithm only yielded a collective benefit for five dyads. The MCS algorithm was superior to the MRTS algorithm, both when normalised confidence estimates, t = 4.99, p < .001, and raw confidence estimates, t = 6.26, p < .001, were used as its input. To test whether interacting dyad members used the credibility of each other's confidence estimates to guide their joint decisions, we regressed the fraction of disagreement trials in which the dyad eventually followed the decision of dyad member A instead of that of dyad member B against the ratio of the AROC for dyad member A relative to that of dyad member B. The AROC ratio significantly predicted the choice ratio, b = .70, t = 3.58, p < .001, and explained around 30% of the variance, R2 = .32, F(1,28) = 12.78, p < .001. The similarity of dyad members' sensitivities significantly predicted the performance of the empirical dyads relative to that of the MCS algorithm, b = −.43, t = −4.21, p < .001, and explained around 40% of the variance, R2 = .40, F(1,28) = 17.73, p < .001. For similar dyad members, there was no relative benefit for interaction over the MCS algorithm. However, for dissimilar dyad members, there was a relative benefit for interaction over the MCS algorithm. The similarity of dyad members' sensitivities also significantly predicted the performance of the empirical dyads relative to that of the MRTS algorithm, b = −.651, t = −2.20, p = .037, and explained around 15% of the variance, R2 = .15, F(1,28) = 4.83, p = .037. However, the MRTS algorithm only outperformed two of the empirical dyads. The similarity of dyad members' sensitivities significantly predicted the effect of normalising confidence estimates, b = .17, t = 2.70, p = .012, and explained around 20% of the variance, R2 = .21, F(1,28) = 7.31, p = .012. For similar dyad members, there was a relative benefit of normalising confidence estimates. However, for dissimilar dyad members, there was a relative cost to normalising confidence estimates. We note that this effect was relatively subtle, and that the relative benefit of interaction over the MCS algorithm was not affected by using raw, instead of normalised, confidence estimates as input to the MCS algorithm. Sharing of confidence as a strategy for combining individual opinions into group decisions has been established in a wide range of contexts. This tendency to evaluate the reliability of information by the confidence with which it is expressed has been termed the 'confidence heuristic'. In this study, we tested two simple ways of implementing the confidence heuristic in the context of a collective perceptual decision-making task: the MCS algorithm, which opts for the decision made with higher confidence, and the MRTS algorithm, which opts for the faster decision, exploiting a negative correlation between confidence and reaction time. Our findings have important implications for the use of heuristics for collective choice and for models of confidence and collective decision-making. As expected, the MCS and the MRTS algorithms only yielded 2HBT1 effects for dyad members of nearly equal
reliability. However, despite a negative correlation between confidence and reaction time, the MCS algorithm markedly outperformed the MRTS algorithm. The superiority of the MCS algorithm over the MRTS algorithm could be due to differences in the 'pre-processing' of their input. While we could submit both normalised and raw confidence estimates to the MCS algorithm, we could only submit raw reaction times to the MRTS algorithm because of a programming error. Since individuals vary with respect to their average response speed, pooling raw reaction times from two individuals might corrupt the link between confidence and reaction time. However, the MCS algorithm outperformed the MRTS algorithm even when raw confidence estimates were used as its input, suggesting that reaction time may be a very noisy substitute for confidence. The sign of the correlation between confidence and reaction time has been found to depend on response demands, with a negative correlation in the absence of speed pressure and a positive correlation under speed pressure. While no speed pressure was enforced in the current task, dyad members may have paced their responses on a subset of trials, thus corrupting the negative correlation between confidence and reaction time. We note that, outside the context of the MCS and the MRTS algorithms, the normalisation of confidence estimates has a more straightforward interpretation than the normalisation of reaction times. The normalisation of confidence estimates is intended to remove biases in the use of a scale – here, how an internal variable is mapped onto a confidence scale – and could potentially capture important aspects of collective decision-making. It is less clear how the normalisation of reaction times would translate into other contexts. Taken together, our findings show that heuristics for collective choice are susceptible to individual differences in reliability, and suggest that reaction time cannot be substituted for confidence without incurring a considerable cost in collective accuracy. In this light, we will limit the remainder of the Discussion to the MCS algorithm. If the assumptions of the WCS model were satisfied, the responses advised by the MCS algorithm should be just
This tendency to evaluate the reliability of information by the confidence with which it is expressed has been termed the 'confidence heuristic'. We tested two ways of implementing the confidence heuristic in the context of a collective perceptual decision-making task: either directly, by opting for the judgement made with higher confidence, or indirectly, by opting for the faster judgement, exploiting an inverse correlation between confidence and reaction time.
as accurate as those reached by dyad members through interaction. However, the ability to estimate the reliability of one's own decisions shows substantial individual differences; this ability is typically referred to as metacognitive ability and quantified as metacognitive accuracy. While the MCS algorithm is prone to error when people misestimate the reliability of their own decisions, interacting individuals may take such misestimates into account. For example, one study has shown that mock jurors find witnesses who are confident about erroneous testimony less credible than witnesses who are not confident about it. We predicted that interacting dyad members would take into account the credibility of each other's confidence estimates when making their joint decisions, and that interaction would be relatively more beneficial than the algorithms for dissimilar dyad members, because they have more to lose from following the more confident but less competent of the two. As for the first prediction, the fraction of disagreement trials in which the dyad followed dyad member A instead of dyad member B depended on their relative metacognitive accuracy, indicating that dyad members used the credibility of each other's confidence estimates to guide their joint decisions. As for the second prediction, interaction was more robust than the MCS algorithm to differences in reliability. For similar dyad members, the decisions reached through interaction were no more accurate than those advised by the MCS algorithm. However, for dissimilar dyad members, the decisions reached through interaction were considerably more accurate than those advised by the MCS algorithm; this was true irrespective of whether normalised or raw confidence estimates were submitted to the MCS algorithm. While models of collective decision-making have identified the 'arbitration' of confidence estimates as key to collective performance, our findings suggest that the 'weighting' of confidence estimates is equally important for collective performance. Without taking the credibility of confidence estimates into account, the MCS algorithm cannot fully replace interaction in collective decision-making. Our study highlights the social heterogeneity of credibility as an interesting avenue for computational research: how do we estimate the credibility of each other's opinions, and how good are we at doing so? We note that the relative benefit of interaction over the MCS algorithm need not necessarily result from dyad members discounting the opinion of the member with lower metacognitive accuracy. More specifically, while dyad members may assign less weight to the opinion of the more confident but less competent member, they may also assign more weight to the opinion of the less confident but more competent member. We believe that computational models of social learning are needed to tease apart such decision strategies. All participants received feedback about the accuracy of each decision, and could thus directly evaluate the credibility of each other's confidence estimates. Indeed, previous research has shown that diagnostic feedback helps groups of individuals to identify their more accurate members. The relative benefit of interaction over the MCS algorithm might therefore not persist in the absence of diagnostic feedback. However, using the same visual perceptual task, Bahrami et al.
found that diagnostic feedback was not necessary for the accumulation of a 2HBT1 effect – diagnostic feedback only appeared to accelerate the process – indicating that individuals may rely on other signals when they learn the credibility of each other's confidence estimates. The identification and incorporation of these signals will pose a major challenge for dynamic models of collective decision-making. Here, for each dyad, one participant was recruited and then asked to bring along a friend to the study. Dyad members might thus have used their interpersonal history to establish the credibility of each other's confidence estimates. However, using a similar task, but in the domain of approximate numeration, Bahrami, Didino, Frith, Butterworth, and Rees found that familiarity had little impact on collective performance, suggesting the dyad members evaluated the credibility of each other's confidence estimates in the context of their current task performance. While there is no evidence for a main effect of familiarity on collective performance, it may be the case that familiarity matters more for dissimilar than similar dyad members. Research has shown that individuals are 'overconfident' about the accuracy of their knowledge-based judgements but 'underconfident' about the accuracy of their perceptual judgements. These discrepant patterns of confidence have led to the hypothesis that different types of information determine confidence in knowledge and perception. For example, Juslin and Olsson propose a model of confidence in which perceptual judgements are dominated by 'Thurstonian' uncertainty, internal noise such as stochastic variance in the sensory systems, whereas knowledge-based judgements are dominated by 'Brunswikian' uncertainty, external noise such as less-than-perfect correlations between features in the environment. This dissociation raises issues as to whether confidence can be used as a proxy for the reliability of decisions in non-perceptual domains. However, direct comparison of knowledge-based and perceptual judgements has found evidence for a common basis of confidence, suggesting that the efficacy of the MCS algorithm will generalise to non-perceptual domains. This work was supported by the Calleva Research Centre for Evolution and Human Sciences, the European Research Council Starting Grant NeuroCoDec 309865, the Danish Council for Independent Research – Humanities, the EU-ESF program Digging the Roots of Understanding DRUST, the European Union MindBridge Project, the Gatsby Charitable Foundation, and the Wellcome Trust.
Interaction allows individuals to alleviate, but not fully resolve, differences in the reliability of their judgements. We discuss the implications of these findings for models of confidence and collective decision-making.
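The regressions above weight dyad members by their AROC, the area under the type-2 ROC curve, i.e. how well a member's confidence discriminates their own correct from incorrect decisions. A generic approximation using a standard ROC routine is sketched below; the original analysis may well have used a different estimator, and the variable names are illustrative.

```python
# Illustrative sketch of a type-2 ROC area (AROC) as a metacognitive-accuracy
# measure: confidence is the score, the member's own accuracy is the label.
import numpy as np
from sklearn.metrics import roc_auc_score

def aroc(correct, confidence):
    """Approximate type-2 ROC area from per-trial accuracy and confidence."""
    correct = np.asarray(correct, dtype=int)
    if correct.min() == correct.max():       # all correct or all wrong:
        return np.nan                        # AROC is undefined in that case
    return roc_auc_score(correct, np.asarray(confidence, dtype=float))

# Relative credibility of member A vs B over a common set of trials:
# credibility_ratio = aroc(correct_a, conf_a) / aroc(correct_b, conf_b)
```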
and it is reported that the compound induces apoptosis in breast cancer cells. In another study, widdrol, a natural sesquiterpene from Juniperus species, has been reported to inhibit the growth of HT-29 colon cancer cells by inducing apoptosis. Erenler studied the isolation, identification, and anticancer activity of J. excelsa berries. In that study, four known diterpenes and one new diterpene were isolated, in addition to a known sesquiterpene. The hexane extract was tested against BC1, LU1, COL2, KB, KB-V, KB-V, LNCaP, and ASK cells. It was found to be inactive against the BC1 and LU1 cells, moderately active against COL2 cells, and very active against the KB, KB-V, KB-V, LNCaP, and ASK cells. Hydro-alcoholic extracts of fruits of Juniperus sabina were examined on normal and cancer cell lines. The results showed that the IC50 of the hydro-alcoholic extract of J. sabina on the four cell lines decreased according to the rank order CHO < Fibroblast < HepG2 < SKOV3. Cytotoxic and antitumoral effects of leaf and berry extracts from two different Juniperus species on DU145 and PC-3 prostate cancer cell lines were studied, and it was determined that the selective cytotoxic effects of the extracts on CRPC cell lines increased in a dose- and time-dependent manner. According to the morphological and immunohistochemical analysis of cell death, an increase in early apoptosis was detected after 48 h, whereas after 72 h of application a higher rate of late apoptotic and a lower rate of necrotic cell death were detected. Between the two species, J. excelsa extracts were more cytotoxic, and within each species, the leaf extracts were more cytotoxic. A comparative study was carried out on the methanol extracts of four Juniperus species, including Juniperus oxycedrus ssp. oxycedrus, J. foetidissima, J. excelsa and J. communis, in terms of phenolic profile and antiproliferative activities. Analysis of the phenolic compounds was carried out with HPLC-TOF/MS by comparing the differences qualitatively and quantitatively. The results of the present study showed that the needles of Juniperus species may be evaluated as a source of natural compounds with antiproliferative activities against C6 and HeLa cells, which could be suitable for medicinal applications.
In this study, the needles and cones of Juniperus species (Juniperus oxycedrus L. subsp. oxycedrus, J. foetidissima Willd., J. excelsa Bieb. and J. communis L.) that are commonly used in Turkish folk medicine were investigated in terms of their phenolic contents and antiproliferative activities. The phenolic constituents of the extracts were determined by HPLC-TOF/MS. Antiproliferative activities of the species were investigated at concentrations between 5 and 100 μg/mL against human cervix carcinoma (HeLa) and rat brain tumor (C6) cell lines. Catechin and rutin were determined to be the main phenolics of the Juniperus species. Needle extracts exhibited better activities than cone extracts against HeLa and C6 cells. The needle extract of J. foetidissima exhibited high antiproliferative activity, with an IC50 value of 10.65 against C6 cell lines. For HeLa cells, the highest antiproliferative activity was obtained with the needle extract of J. communis, with an IC50 value of 32.96. The results of the present study give a scientific basis to the traditional utilization of these Juniperus species and demonstrate their potential as sources of natural anticancer compounds.
indicates the formation of a stage 2 intercalation compound. The band at 1371 cm−1 has been observed previously and its origin remains poorly understood. However, recent inelastic X-ray scattering measurements have shown that the E2g2 phonon for LiC6 softens all the way to ∼1380 cm−1 as the electron doping destabilises the C–C bonds, and therefore the observed band at 1371 cm−1 could be this measured phonon mode. The D band intensity decreases as the potential is lowered, and by 0.2 V it is challenging to accurately peak-fit once the sloping background has been removed. The D′ and D + D″ bands, though clearly evident at 2.7 V, have by 0.2 V decreased in intensity so as to be no longer discernible above the background noise. The 2D band changes shape and shifts with potential, as previously reported. The fitted 2D band component positions remain fairly constant until 0.4 V. The 2D peak changes shape below 0.4 V, and at 0.2 V the second component is no longer resolved, leaving a single 2D band. Between 0.2 and 0.11 V the 2D peak shifts from 2658 to 2607 cm−1, and at 0.08 V the 2D band is no longer present. This is because the electronic doping of the system is to such an extent that it fully suppresses the activation of the double-resonance band. Fig. 8 shows the 2D peak shift measured at different points on the electrode under potential control, the points having ID/IG ratios of 0.33, 0.66 and 1.2. The 2D band shifts for these points are 497 ± 47, 541 ± 18 and 509 ± 26 cm−1 V−1, respectively. The similarity of the shifts indicates that local graphitic disorder does not have any influence on the behaviour of the 2D band during lithium insertion. Producing few- and many-layer graphene on a large scale, where the flake sizes are greater than 1 μm, remains non-trivial. The reductive electrochemical exfoliation method offers a promising approach to overcome this challenge, and in particular to tailor graphitic particles for Li-ion negative electrodes. Particulate graphite with 6 μm diameter was reductively electrochemically exfoliated to give flakes of which ∼90% were thinner than 10 layers. However, it was found that restacking of the flakes is a major consideration when the produced material is allowed to dry out. PXRD shows that a significant reduction of the rhombohedral fraction has occurred during the restacking of the multi- and few-layer graphene flakes produced, as well as a lower measured initial specific capacity resulting from partial turbostratic formation during the restacking. In situ Raman microscopy of the exfoliated SFG6 shows similar electrochemical and spectroscopic behaviour to SFG6, indicating that the exfoliation process and subsequent restacking have had no influence on Li insertion. The 2D band shift during stage formation is not influenced by the initial level of disorder, suggesting that the measured 2D shift is predominantly due to electron doping rather than being structural in origin. Future work is focusing on the production of electrodes from exfoliated material using routes that prevent re-aggregation at the ink stage, in order to construct high-rate-performing graphite electrodes.
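The quoted 2D-band shift rates (in cm−1 V−1) come from fitting the peak position against electrode potential over the range where the band moves. A minimal sketch of that kind of fit is shown below; the data handling, function names and the two-point example are illustrative, not the authors' analysis scripts.

```python
# Sketch: estimate a 2D-band shift rate (cm^-1 per V) by a linear fit of the
# fitted peak position against electrode potential.
import numpy as np

def band_shift_rate(potential_v, peak_cm1):
    """Least-squares slope and intercept of peak position vs potential."""
    slope, intercept = np.polyfit(np.asarray(potential_v, dtype=float),
                                  np.asarray(peak_cm1, dtype=float), deg=1)
    return slope, intercept

# Two-point illustration using values quoted above (0.2 V -> 2658 cm^-1,
# 0.11 V -> 2607 cm^-1); a real estimate would use all fitted spectra:
# rate, _ = band_shift_rate([0.20, 0.11], [2658.0, 2607.0])   # ~567 cm^-1/V
```

A multi-point fit over the full potential series is what yields the reported values of roughly 500-540 cm−1 V−1 with their uncertainties.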
Reducing the flake thickness of microcrystalline graphite powders via electrochemical exfoliation offers a method to overcome the latter, sluggish grain-boundary Li+ diffusion, thereby increasing the overall rate capability of the graphite negative electrode in a Li-ion battery. Six-micron particulate graphite was electrochemically exfoliated to give flakes of which ∼90% had a thickness of <10 graphene layers. This exfoliated material was then prepared as an ink and allowed to dry prior to forming a battery electrode. Analysis of the electrode and dried exfoliated powder using powder X-ray diffraction, scanning electron microscopy and Brunauer-Emmett-Teller isotherm analysis shows that the material has, apart from a significant reduction of the rhombohedral fraction from 41% to 14%, near-identical properties to those of the original starting graphite powder. Thus, once the exfoliated powder had been dried after the exfoliation process, as anticipated, major restacking of the multi-layer graphene flakes had occurred. Likewise, there was no significant improvement when using the exfoliated material at high rates of delithiation, and a lower specific capacity was obtained, when tested within a half cell vs. lithium metal. In situ Raman analysis showed that the exfoliated material displayed similar spectral features to the pristine sample during lithiation, as did multi-point measurements on differently disordered areas identified from the varying ID/IG-band intensity ratios, indicating that local surface disorder does not influence the course of lithium insertion. The re-aggregation of graphenic material is widely recognised, but seldom evaluated. This work shows the importance of keeping graphenic material dispersed at all stages of production.
The data contain phylogenetically analyzed CRBN sequences. The sequences were collected from NCBI GenBank. Fig. 1 shows the phylogenetic tree of the mammalian CRBN sequences reconstructed using the maximum likelihood and neighbor-joining methods. On the same dataset, a site-model test for the detection of positively selected sites was applied, represented in Fig. 2. The ancestral states of position 366, estimated using the maximum likelihood method, are illustrated in Fig. 3. The results of the coevolutionary analysis of vertebrate CRBN based on the mirror tree method are listed in Table 1 and Fig. 3. Protein coding sequences of mammalian crbn genes were obtained from GenBank in September 2017. Partial sequences were excluded from the dataset. The sequences were aligned with ClustalW implemented in MEGA7, using the default parameters. Redundant sequences were removed manually after multiple sequence alignment, and 64 sequences were further analyzed. Gene copy numbers were determined to validate the orthologous relationships of the crbn genes. They were confirmed with the orthologous matrix database and the orthologues view of Ensembl. A total of 42 sequences out of 64 were registered in those databases. Among the registered sequences, 41 species had a single copy of crbn. Phylogenetic trees of the CULT domain, the Lon domain and full-length crbn were constructed. Trees were built using maximum likelihood estimation implemented in MEGA7. The Kimura two-parameter substitution model with a discrete Gamma distribution of five categories was selected based on Akaike information criterion scores. The dataset was also analyzed using the neighbor-joining method with the Tamura three-parameter substitution model. Bootstrap resampling was conducted 1000 times for each method. The Selecton server was used to identify positive selection using the site-model. Briefly, the server conducts a likelihood ratio test between the null hypothesis that does not allow positive selection and the alternative hypothesis that allows positive selection, to determine whether there is positive selection in the dataset. The MEC model, which assumes positive selection, uses AICc to compare the fit to the dataset, as it is not a nested model. If there is positive selection in the dataset, the Selecton server calculates dN/dS for each site and reports sites with a dN/dS statistically significantly above one as positively selected sites. A Bayesian approach was used for the dN/dS calculation. To assess the reliability of the dN/dS values, a confidence interval defined by the 5th and 95th percentiles of the posterior distribution is used. When the lower bound of the confidence interval is larger than one, the site is defined as a positively selected site. The dataset did not show statistical significance between M8 and M8a but showed statistical significance between M8 and M7. MEC fitted the dataset best as it had the lowest AICc. The FEL, REL, and SLAC methods were simultaneously applied to the dataset. This server is also based on a site-model calculated with the ML approach. Here, dN/dS > 1 is defined as a positively selected site, with statistical confidence obtained by testing whether dN is significantly different from dS. The codon positions detected in dataset 1 are presented in Supplementary Tables 4–6. MSAs colored with dN/dS values are presented in Fig. 2 for 13 representative species and for all 64 MSA species in Supplementary Table 7. Next, ancestral sequence reconstruction was conducted in MEGA7 using maximum likelihood estimation with the MSA and the CULT domain phylogenetic tree of dataset 1. Fig.
3 represents the ancestral state of codon 366, detected as a positively selected site. The protein coding sequences of 11 vertebrate genes were collected from GenBank in May 2018. Proteins that are known to be E3 complex factors or binding partners of CRBN were selected. Here, the binding domain of CRBN is not restricted to the CULT domain but also includes the Lon domain. The proteins are DDB1: DNA damage-binding protein 1, Rbx1: RING-box protein 1, AMPKα: AMP-activated protein kinase α, IKZF1: IKAROS family zinc finger 1, Meis2: Meis Homeobox 2, SQSTM1: Sequestosome 1, and BK channel: big potassium channel. Four conserved proteins were selected as negative controls: GAPDH: glyceraldehyde-3-phosphate dehydrogenase, GPI: glucose-6-phosphate isomerase, EF-1α: elongation factor 1α, and β-actin. The CULT domain, the Lon domain, and full-length CRBN were separately prepared for comparison between the domains. Partial sequences were removed from the dataset. The sequences were aligned with ClustalW implemented in MEGA7, using the default parameters. Redundant sequences were removed manually after multiple sequence alignment, leaving a total of 47-55 sequences for further analysis. The composition of the sequence species is briefly described in Supplementary Table 2. A phylogenetic tree was reconstructed with the neighbor-joining method using the maximum composite likelihood model with 500 bootstrap replicates. The trees were uploaded to the MirrorTree Server for co-evolution analysis. Briefly, the server generates scatter plots from pairs of corresponding species branch lengths of two phylogenetic trees. Correlation coefficients, which represent the similarity of the evolutionary pressure on the two phylogenetic trees, were then derived from the plots. To test for a significant difference between the Lon and CULT domains, a p-value was calculated after z-transformation of the correlation coefficients. Fig. 4 shows 11 scatter plots derived from CRBN and its related proteins with the respective correlation coefficients. Within the 11 proteins, CRBN-related proteins tend to have higher correlation coefficients than the conserved proteins, with a statistically significant value for AMPKα. Furthermore, the domain-specific co-evolution analysis shown in Table 1 exhibits a larger correlation coefficient for the Lon domain than for the CULT domain for CRBN-related proteins, while no inter-domain difference was observed for the conserved proteins.
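The mirror-tree comparison described above reduces to two steps: correlate paired branch lengths from two phylogenetic trees, then compare correlation coefficients (e.g. Lon vs CULT domain) after Fisher z-transformation. A minimal sketch follows; extraction of matched branch lengths from the trees is assumed to have been done elsewhere, and the function names are illustrative.

```python
# Sketch of the mirror-tree correlation and the z-transform comparison.
import numpy as np
from scipy import stats

def mirror_tree_r(branch_a, branch_b):
    """Pearson correlation between corresponding branch lengths of two trees."""
    r, p = stats.pearsonr(np.asarray(branch_a, dtype=float),
                          np.asarray(branch_b, dtype=float))
    return r, p

def compare_r(r1, n1, r2, n2):
    """Two-sided p-value for r1 != r2 via Fisher z-transformation."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return 2.0 * stats.norm.sf(abs((z1 - z2) / se))

# e.g. r_lon, _ = mirror_tree_r(crbn_lon_branches, ampk_branches)
#      r_cult, _ = mirror_tree_r(crbn_cult_branches, ampk_branches)
#      p_domain_diff = compare_r(r_lon, n_lon, r_cult, n_cult)
```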
Cereblon (CRBN) is a substrate recognition subunit of the CRL4 E3 ubiquitin ligase complex, directly binding to specific substrates for poly-ubiquitination followed by proteasome-dependent degradation of proteins. Cellular CRBN is responsible for energy metabolism, ion-channel activation, and the cellular stress response through binding to proteins related to the respective pathways. As CRBN binds to various proteins, the selective pressure at the interacting surface is expected to result in functional divergence. Here, we present two mammalian CRBN datasets of molecular evolutionary analyses. (1) The multiple sequence alignment data show that positive selection occurred, determined with a dN/dS calculation. (2) Data on the co-evolutionary analysis between vertebrate CRBN and related proteins are represented by calculating the correlation coefficient based on the comparison of phylogenetic trees. Co-evolutionary analysis shows the similarity of the evolutionary traits of two proteins. Further molecular and functional interpretation of these analyses is explained in 'Positive selection of Cereblon modified function including its E3 Ubiquitin Ligase activity and binding efficiency with AMPK' (W. Onodera, T. Asahi, N. Sawamura, Mol. Phylogenet. Evol. (2019) 135:78-85 [1]).
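The positive-selection call summarised here follows the simple rule given in the data description above: a codon site is flagged when the lower bound of the Bayesian credible interval of its dN/dS estimate exceeds 1. A minimal sketch, with hypothetical input values:

```python
# Sketch of the site-level positive-selection rule (lower CI bound of dN/dS > 1).
def positively_selected_sites(dnds_ci_lower_bounds, threshold=1.0):
    """Return 0-based indices of sites whose dN/dS CI lower bound exceeds 1."""
    return [i for i, lo in enumerate(dnds_ci_lower_bounds) if lo > threshold]

# Example with made-up per-site lower bounds:
# positively_selected_sites([0.4, 1.2, 0.9])  # -> [1]
```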
and optimization not only requires fewer experiments to find a local optimum, it also reveals factor interactions and can be used for process simulations. Overall, it leads to a better understanding of a process, which assists in development and scale-up. It is a cost-effective method providing deep understanding of a process. Gabrielsson et al. (2002) reviewed multivariate methods in pharmaceutical applications, ranging from factorial designs to multivariate data analysis and regression analysis, where studies reported improved process and product quality. Whereas DoE is frequently used to find local optima, PCA and PLS are mainly applied to gain deeper understanding and information about a process and about how factors influence the responses. In this study, we have developed a statistically valid regression model, which allows for the prediction of liposome sizes, polydispersity and transfection efficiencies as a function of the variables in the microfluidics-based manufacturing method. Furthermore, the application of MVDA allowed for a deeper understanding of process settings, which will lead to increased process control with a defined product quality outcome. The combination of multivariate methods and experimental design in any pharmaceutical or biopharmaceutical process development strategy is a powerful tool for developing new processes and finding optima within a defined region of factors, by speeding up the development process. In this paper, we have used a microfluidics-based liposome manufacturing method and varied the process parameters total flow rate and flow rate ratio to produce liposomes of defined size. Using microfluidics, homogeneous liposome suspensions can be prepared in a high-throughput method setup. Liposomes manufactured by this method were shown to give reproducible transfection results in standard transfection protocols. The application of statistically based methods revealed the mathematical relationship and the significance of the factors total flow rate and flow rate ratio in the microfluidics process for liposome size, polydispersity and transfection efficacy. We show that the methods and mathematical modeling tools applied here can efficiently be used to model and predict liposome size, polydispersity and transfection efficacy as a function of the variables in the microfluidics method. Furthermore, the advantages of microfluidics as a bottom-up liposome manufacturing method have been shown, anticipating that microfluidics and associated lab-on-a-chip applications will become the method of choice for liposome manufacturing in future. With these studies, we have demonstrated the advantages of additionally incorporating statistically based methods into a development process. Application of statistically based process control and optimization tools such as DoE and MVDA will enhance the reproducibility of a process and aid the generation of a design space. This will increase the understanding of and confidence in a process setting and allow for predictive and correlative comparisons between the critical process parameters and their effects on the desired critical quality attributes, leading to a desired and robust product quality
Microfluidics has recently emerged as a new method for manufacturing liposomes, which allows for reproducible mixing in milliseconds on the nanoliter scale. Here we investigate microfluidics-based manufacturing of liposomes. The aim of these studies was to assess the parameters of a microfluidic process by varying the total flow rate (TFR) and the flow rate ratio (FRR) of the solvent and aqueous phases. Design of experiments and multivariate data analysis were used for increased process understanding and for the development of predictive and correlative models. A high FRR led to the bottom-up synthesis of liposomes, with a strong correlation with vesicle size, demonstrating the ability to control liposome size in-process; the resulting liposome size correlated with the FRR in the microfluidics process, with liposomes of 50 nm being reproducibly manufactured. Furthermore, we demonstrate the potential of high-throughput manufacturing of liposomes using microfluidics, with a four-fold increase in the volumetric flow rate while maintaining liposome characteristics. The efficacy of these liposomes was demonstrated in transfection studies and was modelled using predictive modeling. Mathematical modelling identified FRR as the key variable in the microfluidic process, with the highest impact on liposome size, polydispersity and transfection efficiency. This study demonstrates microfluidics as a robust and high-throughput method for the scalable and highly reproducible manufacture of size-controlled liposomes. Furthermore, the application of statistically based process control increases understanding and allows for the generation of a design space for controlled particle characteristics.
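The kind of DoE regression model described above, predicting a response such as liposome size from TFR and FRR, can be sketched as an ordinary least-squares fit with an interaction term. The column names and data frame below are hypothetical placeholders, not the study's actual dataset or model coefficients.

```python
# Sketch of a two-factor DoE regression: size ~ TFR + FRR + TFR:FRR (OLS).
import pandas as pd
import statsmodels.formula.api as smf

def fit_size_model(doe: pd.DataFrame):
    """Fit an OLS response model for liposome size from the DoE runs.

    Expects columns 'TFR', 'FRR' and 'size_nm' (hypothetical names)."""
    return smf.ols("size_nm ~ TFR * FRR", data=doe).fit()

# Usage sketch:
# doe = pd.DataFrame({"TFR": [...], "FRR": [...], "size_nm": [...]})
# model = fit_size_model(doe)
# print(model.summary())                                   # coefficients, R^2, p-values
# model.predict(pd.DataFrame({"TFR": [2.0], "FRR": [3.0]}))  # predicted size (nm)
```

Analogous fits for polydispersity and transfection efficacy would simply swap the response column; the interaction term is what exposes the factor interactions that one-factor-at-a-time experiments miss.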
control, the most often mentioned bottleneck pertains to pasture management, even though pasture rotation has been shown to be effective in reducing levels of worm challenge. Farmers find it difficult to apply proper pasture rotation schemes due to limited and variable grass availability and pasture size, probably in association with a lack of understanding of the temporal dynamics of worm infections. Related to this bottleneck is that some farmers rent areas from governmental organisations which put restrictions on how and when such areas can be grazed. Such restrictions often hinder the application of proper pasture rotation for worm control. Just a small percentage of farmers indicates that quarantine periods present difficulties. Yet, as noted above, many farmers do not follow proper quarantine procedures to prevent the introduction of a resistant worm population. About 18% of the respondents mentioning bottlenecks indicate that they lack sufficient knowledge on how to control worm infections. And about a quarter of the respondents mentioning bottlenecks indicated a lack of knowledge on which anthelmintics to use, and on the products to which AR is present, either on their farm or on farms from which they purchase sheep. Overall, the answers to the question on perceived bottlenecks suggest that the major obstacle experienced concerns the use of pastures in relation to worm infections. And while about 75% of all respondents indicated having problems with worm infections, a minority mentioned bottlenecks that could be associated with lacking knowledge on aspects of parasite control. Apparently, most farmers feel they have sufficient knowledge as such but do not know how to use this knowledge effectively. When asked for solutions, 90% of all respondents indicate they are willing to adjust their pasture management. This is a large percentage, as 41% of all respondents indicated not perceiving any bottlenecks in parasite control. Apparently, this 41% is still seeking improvement in worm control, supported by the fact that most of them do notice clinical signs related to worm infections and require anthelmintics. The fact that 90% do look to improve pasture management implies that many farmers indeed appear to be somehow unable to improve pasture use without expert help. Worldwide, adapting pasture management has been mentioned as one of several possibilities to control worm infections. Yet, by far most efforts are focused on targeted selective treatments, which implies that most still regard anthelmintic usage as the basis of parasite control. However, proper pasture rotation is highly effective in minimizing exposure to worm infections, making extensive use of anthelmintics redundant. Farmers are also interested in other methods to reduce the risk of worm infections, such as possibilities to enhance the immune system of sheep in general, to increase specific genetic resistance to worms, and to apply anti-parasite products other than the currently available anthelmintics. About 40% of the farmers are interested in breeding sheep more resistant to worms. However, most of them do not expect to be using DNA markers in breeding for genetic resistance. Most want to use a more resistant ram for breeding or want to use FEC as a means to improve resistance to worm infections. In this questionnaire survey, the use of saliva IgA levels was not mentioned as a possibility. However, since 2014 we have been examining the use of this variable, and it appears that some farmers are as interested in this parameter as they are
in FEC. Overall, there is an overwhelming and wide interest in all possibilities to improve worm control. In conclusion, the results of the present study can be used to support improvements in parasite control practices and the transfer of knowledge on why certain actions may be wise and others not. Woodgate and Love gave a solid account of the factors involved in effectively communicating changes in management and approaches to worm control. These factors include the motivation to change, seeing immediate benefits of change, and the complexity of solutions and messages. Learmount et al. demonstrated that working closely with farmers over a longer period produced significant reductions in anthelmintic usage without health and production penalties. Translating this to the current results for Dutch sheep farmers, foci of attention may be: making complex scientific knowledge more accessible to farmers through simple tools applicable in the daily farming process; changing the mindset of farmers about their current worm control practices, i.e. breaking long-standing habits in management such as treating ewes and lambs at fixed moments rather than based on actual worm infection monitoring data; demonstrating effective pasture rotation schemes on specific farms and using these in extension work; making farmers more aware that checking anthelmintic treatments for efficacy is important; applying effective quarantine procedures for purchased animals to prevent the introduction of AR; and creating a wider array of applicable alternative control measures from which individual farmers can choose what fits them most. Finally, probably the most essential aspect might be the need for a fundamental change in mindset and improved mutual understanding among farmers, veterinary practitioners and parasitologists alike. Most efforts are focused on optimising the number of anthelmintic treatments within initiatives such as targeted flock and individual treatment approaches and targeted selective treatments to delay anthelmintic resistance. However, all such efforts are still built on the premise that anthelmintic treatments form the basis of parasite control, as if this were an inescapable fact of life. It might be more fruitful to focus on production systems and control measures that do not require the use of anthelmintic treatments unless absolutely necessary, which would be more in line with the notion of responsible use of medicines. Interestingly, there are a number of Dutch sheep farmers who apparently do not need to use
breaking long-standing habits such as treating ewes and lambs at fixed moments rather than based on actual worm infection monitoring data; (3) demonstrating effective pasture rotation schemes on specific farms and using these in extension work; (4) making farmers more aware that checking anthelmintic efficacy is important; (5) improving quarantine procedures; (6) creating a wider array of applicable alternative control measures from which individual farmers can choose what fits them most; and finally, (7) improving mutual understanding among farmers, veterinary practitioners and parasitologists alike.
the semantic foil from the first list or the target.Under these conditions, performance was better when distracters were similar to the target rather than the foil, indicating benefits from distraction accruing either for the target or for the foil.This pattern suggests that phonological features of auditory distraction can yield benefits for retrieval, just as semantic features do in our study.Interestingly, this pattern emerged in a paradigm using semantically rich materials such as words, which, according to the interaction-by-process framework, would suggest a prominent role of phonological information in memory for very short lists of words.One feature of the current series that merits additional discussion is the inclusion of JOLs in our novel paradigm.Experiments 1, 2, and 5 used JOLs to direct participants’ attention toward category information conveyed by related distracters.That this manipulation was effective is indicated firstly by the fact that JOLs were reliably higher for words accompanied by related distraction and secondly because a positive between-sequence semantic similarity effect was generally larger in the presence of JOLs compared to a procedure in which these judgments were not elicited.The finding of higher JOLs in the presence of related auditory distracters is important inasmuch as this self-assessment did not always reflect the results obtained in recall tests.Although the difference in JOLs correctly reflected the recall patterns observed with category-cued recall tests of Experiments 1 and 2, JOLs were dissociated from the recall pattern of Experiment 5, in which no between-sequence semantic similarity effect was observed.The dissociation observed in Experiment 5 demonstrates how conditions that do not map onto memory performance can nevertheless impact robustly upon predictions of performance.Together, the results of Experiments 1, 2, and 5 demonstrate that conditions of a test have important implications for the accuracy of metacognitive judgments.In our case, JOLs correctly reflected memory patterns under category-cued recall testing but not under free-recall testing.The observation that participants do not take into account the type of the test when making metacognitive judgments at encoding would seem trivial but for the fact that our procedure used multiple study-test cycles.One cannot reasonably expect participants to predict the type of the test in a one-cycle procedure which can contribute to inaccuracies commonly observed for JOLs.However our study shows that even prolonged experience of a given type of test, in which a certain effect does not occur, does not prevent the expectation of such an effect appearing in JOLs.This demonstrates that metacognitive judgments are not fully updated using available information.In the present series, the between-sequence semantic similarity effect tended to be larger in the presence rather than absence of JOLs.This highlights another feature of metacognitive judgments.Although such judgments are commonly used to gain insight into participants’ appraisals of their own cognitive processes, it seems that the very act of eliciting these judgments alters the cognitive process that is under scrutiny.This measurement-affects-process perspective on metacognition has been first described in reference to delayed JOLs, which trigger covert retrieval, subsequently benefiting memory performance in the way described by a testing effect mechanism.There is a growing understanding that eliciting metacognitive judgments can change 
not only the quantitative but also the qualitative patterns observed in memory performance. For example, Sahakyan, Delaney, and Kelley showed that eliciting aggregate judgments in the directed forgetting paradigm for the first studied list can promote changes in encoding strategy for the second list, altering the usual pattern of directed forgetting benefits. The results of the present study are consistent with this line of research. To conclude, the present study is the first to demonstrate the benefits of studying words with related rather than unrelated auditory distracters. This pattern shows that the current perspective on distraction as ubiquitously harmful for memory performance is incomplete. The novel pattern is consistent with the account of the semantic distraction effect that stresses the categorical information conveyed by auditory distracters, as well as with a revised interference/interaction-by-process framework, but it points to deficits in other postulated mechanisms, such as interference and inhibition.
The processing of the relation between targets and distracters which underpins the impairment in memory for visually presented words when accompanied by semantically related auditory distracters—the between-sequence semantic similarity effect—might also disambiguate category membership of to-be-remembered words, bringing about improved memory for these words at recall.In this series of experiments the usual impairment of the between-sequence semantic similarity effect is reversed: we show that related distracters can improve memory performance when multiple-category lists are studied and a category-cued recall test is used at retrieval.The results indicate not only that irrelevant speech distracters are routinely processed for meaning, but also that semantic information gleaned from this stream is retained until recall of the memoranda is cued.The data are consistent with a revised interaction-by-process framework.
already accepted this, with mass spectrometry being recognised as the gold standard .Nonetheless, the advantage must be clearly apparent before any MS based methodology is adopted.This can only be proven through cross validation to reference methods, where multiple samples are analysed and statistically correlated against currently acceptable methods.There are a few studies starting to complete this work for conventional UHPLC-MS/MS , but none yet for UHPSFC-MS/MS.Furthermore, comprehensive method validation and accreditation using reference standards will be required before this technique is implemented in the clinical laboratory.Given the advantages of UHPSFC-MS/MS outlined above it is clear that this technology offers an alternative to GC–MS and UHPLC-MS/MS in the clinical/research setting, possibly even bridging the gap between these two mainstay techniques.While it may be presumptuous to think that UHPSFC-MS/MS would ever replace GC–MS or UHPLC-MS/MS, this technology does offer an attractive orthogonal high throughput alternative which is well suited to routine analysis of complex steroid panels including applications such as urinary, serum and salivary steroid profiling for diagnostics and monitoring.Indeed, the simplified sample preparation and high throughput potential of this technique may greatly improve diagnostics screening methods.This may be of great value in the neonatal population as it is critically important to obtain a diagnosis of any potential endocrine disorder as early as possible as these children can present with a life threatening adrenal crisis if they are not started promptly on life-long and life-saving steroid hormone replacement treatment.The first presenting clue to a diagnosis of an inborn steroidogenesis disorder can be ambiguous genitalia from birth but patients can go on to develop adrenal crisis within the first two weeks of life.Unfortunately, with the limited availability of this modality and the time it takes from sample collection to result, GC–MS remains an inaccessible tool in the acute neonatal setting for clinicians.Consequently the mainstay for diagnosis currently remains immunoassay which carries a high false positive rate, particularly in the preterm and small for gestational age populations.UHPSFC-MS/MS has also shown promise in the quantification of conjugated steroids, an aspect of steroid metabolomics that should continue to be explored as this has the potential to further reduce analysis times and cost .As with any new technology only time and usage will reveal the true potential, but based on results thus-far UHPSFC-MS/MS has undisputedly opened new possibilities within the field of steroid analysis.
Liquid chromatography tandem mass spectrometry (LC-MS/MS) assays are considered the reference standard for serum steroid hormone analyses, while full urinary steroid profiles are only achievable by gas chromatography (GC–MS).Both LC-MS/MS and GC–MS have well documented strengths and limitations.Recently, commercial ultra-high performance supercritical fluid chromatography–tandem mass spectrometry (UHPSFC-MS/MS) systems have been developed.These systems combine the resolution of GC with the high-throughput capabilities of UHPLC.Uptake of this new technology into research and clinical labs has been slow, possibly due to the perceived increase in complexity.Here we therefore present fundamental principles of UHPSFC-MS/MS and the likely applications for this technology in the clinical research setting, while commenting on potential hurdles based on our experience to date.
The recent outbreak of Ebola virus disease in West Africa was unprecedented.Following the first case in Guinea in December 2013, 28 616 cases were reported in Guinea, Liberia and Sierra Leone resulting in 11 310 deaths.1,2,Fragile healthcare systems in the affected countries struggled to cope with the scale and complexity of the outbreak, and the World Health Organisation declared a Public Health Emergency of International Concern on 8 August 2014, resulting in the mobilisation of the international community to assist in the control of the EVD outbreak in West Africa.3,Case finding and contact tracing, combined with safe and dignified burial of corpses, are key public health measures required to control EVD outbreaks, underpinned by social mobilisation.4,Case management and isolation of cases in EVD treatment centres is the fourth key pillar of outbreak response.5,6,Whilst there were a number of investigational therapies in development and trialled for the treatment of EVD, none were routinely available in West Africa during the recent outbreak.Optimising clinical outcomes therefore was entirely dependent upon the provision of high quality supportive care.The practical difficulties of providing parenteral fluid and electrolyte therapy in EVD treatment centres, combined with the perceived risks to healthcare workers of invasive clinical procedures resulted in generally poor access to enhanced levels of care across West Africa.7,8,During the outbreak in Gulu, Uganda in 2000, higher levels of care were delivered including routine use of intravenous fluids with an overall case fatality rate of 53%.At the beginning of this outbreak some clinicians also endeavoured to challenge the historically poor EVD patient outcomes with improved supportive care delivery, including the first ETC in Conakry that sustained at a low CFR of 30%–40%.8,9,Unfortunately, despite this progress some organisations have failed to learn these key clinical lessons, and still question the benefit and evidence base of intravenous fluid and electrolyte replacement in EVD.10,As part of the international response to the EVD outbreak in West Africa, the UK built seven treatment centres in Sierra Leone,11 the first of which was commissioned on 5 November 2014 at Kerry Town, near the capital Freetown.This included 20 beds operated by the UK and Canadian Defence Medical Services specifically for HCWs with suspected or confirmed EVD.The EVD treatment unit was well resourced with dedicated laboratory facilities and was capable of providing high quality medical and nursing care, fluid and electrolyte therapy, blood component transfusion, oxygen therapy and vasopressor support.Limited data exist describing supportive care management, laboratory abnormalities and outcomes in patients with EVD in West Africa, apart from within EVD clinical trials.Patient age, baseline viral load and renal dysfunction have been highlighted as key prognostic indicators in EVD.12,13,Combined analysis of 27 exported cases of EVD to Europe and the United States of America also highlighted the prevalence of organ dysfunction in EVD and demonstrated the feasibility and importance of critical care interventions, with an associated low case fatality rate of 18.5%.14,We report cohort data from Sierra Leone which is the first description of the provision of enhanced EVD case management protocols in a West African setting, in a unique treatment centre dedicated to caring for infected HCWs.It demonstrates what can be achieved by well-resourced and committed clinical 
teams and pushes the boundaries of EVD supportive care levels in low-resource settings.Demographic, clinical and laboratory data were collected by retrospective review of clinical notes, nursing charts and laboratory records of patients with confirmed EVD admitted to the military EVDTU at Kerry Town, between 5 November 2014 and 30 June 2015.Clinical features, observations and laboratory test results were recorded each day on a standardised proforma.All clinical notes were electronically scanned on patient discharge or death, prior to destruction of the paper copies used in the EVDTU clinical area.All clinical samples and data were collected during routine patient care as part of the public health response and the Sierra Leone ethics and scientific review board confirmed board approval was not required.All information on individual patients was anonymized and recorded on standardised forms, which were kept securely.Patients admitted to the military EVDTU included international and national HCWs and other personnel engaged in the EVD relief effort in Sierra Leone.Local nationals not involved in healthcare were also referred and admitted on a case-by-case basis, although pregnant women and children with EVD were directed to alternative treatment centres.Patients were admitted directly from the community, or were transferred from other treatment centres once their status as a HCW was realised.Diagnosis of EVD was confirmed on admission by EBOV real-time reverse transcriptase polymerase chain reaction assay of blood.Baseline data from patients subsequently aeromedically evacuated to Europe or USA is reported, but not included in longitudinal data reported or outcome dependent regression analysis.All patients underwent EBOV RT-PCR testing on admission, provided by the onsite Public Health England laboratory utilising an existing Trombley Ebola Zaire nucleoprotein assay or the Altona RealStar Filovirus RT-PCR Kit.15,Results were reported as positive or negative, with cycle threshold values only being obtained retrospectively.The onsite military laboratory provided basic haematology, clotting, clinical chemistry and blood culture capabilities.On admission, and when clinically indicated thereafter, blood samples were assayed in the onsite laboratory using the Piccolo Express system, Horiba ABX Micros ES60 analyser and Hemochron Signature Elite.When laboratory facilities were not available, bedside i-STAT® testing was undertaken when indicated.The Bact-Alert™ system was used for blood cultures, and if positive a Blood Culture Identification Detection panel was run on the BioFire FilmArray™.A rapid diagnostic test was undertaken on admission to exclude malaria.HIV and Dengue
We report data which constitute the first description of the provision of enhanced EVD case management protocols in a West African setting.Methods: Demographic, clinical and laboratory data were collected by retrospective review of clinical and laboratory records of patients with confirmed EVD admitted between 5 November 2014 and 30 June 2015.
rapid diagnostic tests were also available.An EVD treatment bundle was developed to facilitate a protocolized approach to the provision of clinical care thus ensuring optimal utilisation of clinical contact time.Whilst acknowledging there is little evidence of efficacy of any specific intervention in the management of EVD, the elements of the EVD treatment bundle were developed by the clinical group encompassing interventions well established in the management of the critically ill and incorporating basic tenets of care provided in EVD treatment centres.The EVD treatment bundle evolved through regular review by clinical groups at the end of each 60-day roulement.In the EVD treatment bundle, clinical disease was defined by three stages of illness.The main tenets of the EVD treatment bundle include parenteral fluid & electrolyte replacement therapy, stress-ulcer prophylaxis, empirical antibiotics and anti-helminthic medication, analgesia and standardised approaches to the management of coagulopathy & haemorrhage, encephalopathy and shock.Venous access was routinely achieved by placement of a central venous cannula in all patients with stage 2 or 3 disease.Vital signs were recorded 6 hourly with continuous monitoring of heart rate, non-invasive blood pressure and pulse oximetry undertaken when required.Urethral catheterisation and placement of faecal management systems were undertaken as clinically indicated.All staff received education in the delivery of the EVD treatment bundle prior to the opening of the EVDTU and prior to each roulement of clinical staff throughout the outbreak.16,Maximum daily early warning scores were calculated retrospectively.The qSOFA score can be calculated using basic parameters of clinical assessment and aims to identify patients at increased risk of death due to sepsis.17, "NEWS is a scoring system based upon routine clinical observations reflecting the individual's physiological response to illness.When a patient is admitted to a medical facility it can be used to track changes in clinical observations thus alerting the healthcare team to any deterioration and so triggering a timely escalation in clinical care.18,Descriptive analyses are reported as frequencies and proportions for categorical variables, means or medians as appropriate for continuous variables."We used Fisher's exact test for comparing categorical variables, and Student's t-test and Mann–Whitney U test for comparing continuous variables.We assessed risk factors for mortality by univariate logistic regression and reported odds ratio of death with its 95% CI.No imputation for missing data was made due to small sample sizes.Hypothesis tests were two-tailed and we analysed data with SPSS.During the study period a total of 44 cases were admitted with confirmed EVD.Three international HCWs with confirmed EVD were initially managed in the EVDTU before being evacuated to Europe or the USA for further treatment, all of whom survived.The remaining cases were Sierra Leonean nationals of whom 71% were employed in healthcare.The majority of patients were male with a median age of 37 years.The mean time from onset of illness until admission was 5.3 days, with 23/44 patients admitted directly to the EVDTU.Excluding those who were aeromedically evacuated, the case fatality rate for those receiving care in the military EVDTU was 49%.At the time of admission to the EVDTU the most common clinical features were diarrhoea and vomiting, lethargy, anorexia, abdominal pain and fever.Only 3/44 patients had signs or 
symptoms of haemorrhage at admission, with 10/44 patients reporting hiccups, of whom 8/10 died. Vital signs at admission were within normal limits in the majority of patients, apart from respiratory rate, with a mean temperature of 37.5°C at admission. Other signs and symptoms of EVD patients admitted to the military EVDTU are shown in Table 3. Stage 1 disease was present in 9/44 patients, stage 2 disease in 12/44 and stage 3 disease in 23/44 patients, with case fatality rates among those not evacuated of 14%, 27%, and 70% respectively. The cycle threshold value at baseline was available for 42/44 participants, with a mean of 22.7 overall, 20.3 in fatal cases and 24.8 in survivors. Clinical findings during hospitalisation are summarised in Fig. 1. Nearly all patients had diarrhoea, lasting a median 5 days, and complained of abdominal pain, lasting a median 3 days. Lethargy was also very common, lasting a median 3 days, with vomiting occurring in 33/41 patients and lasting a median of 3 days. During hospitalisation fever was recorded in 33/41 patients, with a median time to resolution of 9 days from the onset of symptoms. Haemorrhage occurred in 17/41 patients, with encephalopathy seen in 26/41 patients. The lowest oxygen saturations recorded in fatal cases were lower than those recorded in survivors. Blood was analysed using the onsite laboratory and point-of-care devices as clinically indicated and in accordance with the EVD treatment bundle. During a total of 312 EVD patient admission days in the EVDTU, the most frequently measured electrolytes were sodium and potassium, analysed on 255/312 and 240/312 patient days respectively. The majority of patients had electrolyte abnormalities at admission and during hospitalisation. Hyponatraemia occurred in 37/40 patients and hypokalaemia in 21/40 during hospitalisation. Hyperkalaemia occurred in 12/40 patients during admission, with 3 patients recording potassium levels >6 mmol/L. Hypomagnesaemia and hypophosphataemia were recognised complications in 11/31 and 8/31 of the patients in whom magnesium and phosphate were measured, typically occurring in patients with a protracted diarrhoeal phase of illness. Hypoglycaemia occurred in 24/40 patients, with 9 patients recording blood sugars ≤2.8 mmol/L, all of whom were in the terminal stages of illness. Renal dysfunction occurred frequently, with an elevated creatinine in 26/41 patients at admission and 32/40 during hospitalisation. Acute kidney injury occurred in 20/40 patients through the course of their illness, with 14/20 of these patients subsequently dying, compared with 5/20 deaths among patients who did not develop AKI. Elevated creatine kinase developed
Results: A total of 44 EVD patients were admitted (median age 37 years (range 17–63), 32/44 healthcare workers), and excluding those evacuated, the case fatality rate was 49% (95% CI 33%–65%).At admission 9/44 had stage 1 disease (fever and constitutional symptoms only), 12/44 had stage 2 disease (presence of diarrhoea and/or vomiting) and 23/44 had stage 3 disease (presence of diarrhoea and/or vomiting with organ failure), with case fatality rates of 11% (95% CI 1%–58%), 27% (95% CI 6%–61%), and 70% (95% CI 47%–87%) respectively (p = 0.009).Haemorrhage occurred in 17/41 (41%) patients.The majority (21/40) of patients had hypokalaemia with hyperkalaemia occurring in 12/40 patients.Acute kidney injury (AKI) occurred in 20/40 patients, with 14/20 (70%, 95% CI 46%–88%) dying, compared to 5/20 (25%, 95% CI 9%–49%) dying who did not have AKI (p = 0.01).Ebola virus (EBOV) PCR cycle threshold value at baseline was mean 20.3 (SD 4.3) in fatal cases and 24.8 (SD 5.5) in survivors (p = 0.007).Mean national early warning score (NEWS) at admission was 5.5 (SD 4.4) in fatal cases and 3.0 (SD 1.9) in survivors (p = 0.02).
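The group comparisons reported above rest on standard methods (Fisher's exact test for proportions, exact binomial confidence intervals, and two-sample tests for continuous variables); the original analysis was run in SPSS, but the headline figures can be checked with a short script. A minimal sketch in Python, assuming scipy and statsmodels are available (the counts are taken from the results above; the library choice and variable names are illustrative, not the authors' workflow):

from scipy import stats
from statsmodels.stats.proportion import proportion_confint

# Mortality with vs without acute kidney injury (AKI), counts as reported above:
# 14/20 deaths with AKI, 5/20 deaths without AKI.
table = [[14, 6],    # AKI:    deaths, survivors
         [5, 15]]    # no AKI: deaths, survivors
odds_ratio, p_value = stats.fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.1f}, p = {p_value:.3f}")   # p should come out near 0.01

# Exact (Clopper-Pearson) 95% CI for the case fatality rate with AKI (14/20 = 70%)
low, high = proportion_confint(count=14, nobs=20, alpha=0.05, method="beta")
print(f"CFR with AKI: 70% (95% CI {low:.0%}-{high:.0%})")                 # roughly 46%-88%

# The Ct-value comparison (mean 20.3 in fatal cases vs 24.8 in survivors) would use a
# two-sample t-test on the patient-level values, e.g. stats.ttest_ind(ct_fatal, ct_survivors),
# which requires data not reproduced here.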
was 49% overall, but direct comparison of clinical outcomes from different treatment facilities has proved difficult. Rates reported from other ETCs in Sierra Leone were similar, from cohorts with lower median ages and higher Ct values. International Medical Corps28 reported a case fatality rate of 58%, and Médecins Sans Frontières29 a case fatality rate of 51%. Historical control data for the 3 months preceding the JIKI trial in Guinea had similar Ct values, with a case fatality rate of 57%.30 A study of the heterogeneity in case fatality rates in West Africa31 was limited by missing outcome data, but demonstrated that in the age group 35–39 years there was a case fatality rate of 67.5%. Analysis of supplementary data from this study also showed that, in adults with confirmed EVD and known outcome status admitted to ETCs in Sierra Leone, there were 268/477 deaths. Whilst the difficulty of comparing outcomes in cohorts with significantly different demographics and baseline viral loads is obvious, the importance of survivor bias due to local factors affecting patient distribution must also be considered.32 A number of limitations of our data must be appreciated, reflecting the primary mission of the EVDTU to manage infected HCWs. As such, paediatric and pregnant cases, which may involve different clinical syndromes and management approaches, were not referred to the EVDTU. The EVDTU was also highly resourced, with its main assets being its staff-to-patient ratios and laboratory support. The intensive medical and nursing care provided would be challenging to maintain safely in larger ETCs, but a compromise incorporating the tenets of our management approach is achievable. The number of HCW cases admitted was also lower than expected, resulting in a small sample size. This reflects the timing of the epidemic and the reduced nosocomial risk to HCWs as better treatment centres opened, and is consistent with the experience of other HCW ETCs in Guinea and Liberia. The small sample size prevented us from performing multivariable analysis to examine risk factors for mortality at admission simultaneously. As a result, observed differences may be subject to possible confounding effects due to unknown or unmeasured factors. In resource-limited African settings, the provision of even basic supportive care to patients with EVD is difficult. Recommendations to limit treatment based upon perceived poor patient prognosis and risk to HCWs perpetuate the cycle of limited care, poor outcomes and fear.33 Whilst renal replacement therapy and mechanical ventilation were successfully utilised in one ETC in Sierra Leone during the recent outbreak, the logistic and practical considerations of providing level 3 care unfortunately make it an unrealistic proposition as an established level of care for EVD in future outbreaks. We do recognise that when resources and appropriately trained personnel exist, this capability can be safely delivered and will improve outcomes in severe EVD for limited numbers of patients. We believe that the approach pioneered at our EVDTU, utilising improved resources, clinical staging of disease severity and an enhanced level of protocolized care, offers a blueprint for the future management of EVD in resource-limited environments. These processes were subsequently adopted by other non-governmental organisations in West Africa, and can form a platform for future viral haemorrhagic fever clinical care, augmented by specific therapeutics when available. The authors declare that they have
no competing financial interests.The content is solely the responsibility of the authors.No specific funding.TF is funded by the Wellcome Trust and the UK Ministry of Defence.The PHE-led EVD laboratory operation was funded through the Department for International Development.
Background: Limited data exist describing supportive care management, laboratory abnormalities and outcomes in patients with Ebola virus disease (EVD) in West Africa.No pregnant women were admitted.We believe that the enhanced levels of protocolized care, scale and range of medical interventions we report, offer a blueprint for the future management of EVD in resource-limited settings.
was that, as the position of interest moved from the top of the weld to the bottom, there was an increasing tendency for M/A islands to decompose.It would seem likely, therefore, that in the NG-GTAW weld, the tempering effects associated with subsequent weld passes led to a gradual restoration in the toughness.Of course, all of the samples in this work have been studied in the as-welded condition, and such welds always undergo PWHT before being put into service.However, this finding is significant because of the possibility of over-tempering some regions during PWHT while under-tempering others.Indeed, such variability in toughness may make it difficult to identify a PWHT procedure that achieves the right balance.Narrow-gap welds were made in 78 mm thick sections of SA533B LAS in accordance with ASME IX, using two alternative fusion-welding technologies.The principal objective was to understand the effects of multiple thermal cycles on through-thickness variations in microstructure and mechanical properties within the joints.The principal conclusions are as follows:When NG-GTAW is employed using a single pass per layer in combination with a weaving bead configuration, shallow weld beads result.These weld beads can be expected to undergo extensive reheating during the deposition of subsequent weld passes, and indeed they are likely to be re-austenitised at least once.This was evident from an evaluation of the prior-austenite grain sizes throughout the thickness of the specimen, which revealed relatively fine austenite grains at all locations except for at the very top of the weld.When NG-SAW is used in conjunction with a stringer bead configuration and a moderate welding speed, the majority of the effects associated with reheating are likely to be confined to the preceding layer of weld metal, and the extent of re-austenitisation can be expected to be greatly reduced when compared to employing NG-GTAW in a weave deposition configuration.This was evident from the limited extent to which the prior-austenite grain size was refined by subsequent passes in the weld heat-affected zone, and also by the retention of the bead stacking pattern in the hardness map for this weld type.The toughness of the NG-SAW joint appeared to be relatively consistent throughout the thickness, generally matching that for the parent material.The principal exception to this observation arose in the weld metal at the bottom of the weld, and this was attributable to the localised employment of a different bead stacking pattern, whereby a single stringer bead was located at the weld root.A small drop in toughness was also observed at the very top of the weld.The toughness of the NG-GTAW weld, as determined from impact testing, appeared to deteriorate in later weld passes, to the point where it was noticeably inferior to the parent material toughness towards the top of the weld.The superior toughness at the bottom of the weld is likely to be attributable to the significant tempering effects that result from later weld passes.This was evident in the differing extent to which martensitic regions were tempered throughout the thickness.Conversely, the lower toughness at the top of the weld is likely to result from the absence of these pass-to-pass tempering effects.Further work is required to explore the response of welds, such as those described in this study, to post-weld heat treatment operations.It may be difficult to optimise PWHT procedures for welds such as the NG-GTAW test piece in such a way as to achieve consistent 
toughness throughout the thickness of the joint.This is due to significant through-thickness variations in the extent to which tempering has taken place prior to the PWHT operation.
Narrow-groove (NG) variants of these processes have reduced welding times, but thick-section welds still require a large number of passes.In this work, the effects of a large number of welding thermal cycles on the through-thickness variability in microstructure and mechanical properties have been analysed for NG-GTAW and NG-SAW joints made in 78 mm thick SA533 steel.Microstructures were characterised using optical and scanning electron microscopy, while mechanical properties were captured in cross-weld tensile tests using digital image correlation and through tests on coupons extracted exclusively from the weld metal and from the heat-affected zone.Charpy impact testing was used to assess toughness.While the toughness was relatively consistent throughout the SAW joint, significant through-thickness variations in toughness were observed in the NG-GTAW joint, which can be attributed to the varying degree to which the weldment was tempered by subsequent welding thermal cycles.
in the FeMg-900 hybrid, well above the percolation threshold. The contiguous templates guaranteed the formation of through pores. Thus, combined with the microscopic analysis results, we found that the in-situ exsolution dual-template approach indeed created HPCs with interconnected pores. Thanks to its hierarchical porous structure, the FeMg-carbon sample showed excellent electrochemical performance in the oxygen reduction reaction and in supercapacitor tests. The cyclic voltammograms of FeMg-carbon recorded in N2- and O2-saturated 0.1 M KOH electrolyte confirmed that the reduction peak originated from the ORR. We then ran linear-sweep voltammetry at 1600 rpm for the different catalysts in Fig. 6a. The hierarchically porous FeMg-carbon showed an onset potential of 0.90 V and a half-wave potential of 0.77 V. The Koutecký–Levich plots indicated that the number of electrons transferred was 3.5. A chronoamperometric response at the half-wave potential revealed that more than 80% of the current was sustained after 20 h of measurement. These performances rank it among the best carbon ORR catalysts reported to date, and are even comparable to other heteroatom-doped carbon materials. Conversely, the FeMg-900 catalyst, with its pores blocked by templates, performed poorly in the ORR, underlining the significant contribution of the hierarchical pores to this catalytic process. This in turn precluded any contribution of iron residuals in FeMg-carbon, if present, to the ORR. Similarly, the KMg-carbon control, which possessed less interconnected hierarchical pores, was less active as well. This disparity was also prominent in the electrochemical impedance spectroscopy measurements in Fig. S13, obtained at the half-wave potentials. Besides, the lower polarization value of FeMg-carbon also indicates faster ion movement at the electrode/electrolyte interface as a result of the interconnected pore structure. Thus, we concluded that the hierarchical pore structure was a key factor promoting ORR activity. FeMg-carbon also contained many more intrinsic topological defects; the abundant carbon vacancies, edges, non-hexagonal topologies, dangling bonds, etc., along with the nitrogen moieties, also contribute greatly to the good electrocatalytic performance. Its high surface area and large pore volume make the hierarchically porous FeMg-carbon an excellent candidate material for supercapacitor electrodes. To test this, we used a three-electrode configuration. The cyclic voltammetry curves in Fig. 6c are quasi-rectangular, typical of electrical double-layer capacitance with a minor pseudo-capacitance contribution from the nitrogen functionalities. The measured specific capacitance reached 321 F g−1 at 5 mV s−1, which is comparable with the best carbon electrodes in the literature. Although KMg-carbon had a higher specific surface area, its capacitance was apparently lower than that of FeMg-carbon. This difference became much more pronounced as the scan rate increased up to 400 mV s−1 in Fig.
6d, indicating that good electrolyte transport was largely sustained in FeMg-carbon thanks to its interconnected pore structure. In fact, the surface area of the KMg-carbon mainly came from micropores, which are often poorly accessible or wholly inaccessible to the relatively larger solvated electrolyte ions. In particular, pores narrower than 0.5 nm cannot support the formation of double layers. Thus, high mesoporosity has always been recognized as a crucial factor governing the level of EDLC. In contrast, the high surface area of FeMg-carbon largely came from mesopores that were well connected with the macropores. This facilitated the transport and diffusion of electrolyte ions. In conclusion, the iron nanoparticles exsolved in situ from the MgO offered a connected dual-template network embedded in the NTA-derived N-doped carbons. This created abundant intrinsic defects and a hierarchical pore structure with excellent pore interconnectivity in the carbon. Thanks to this, the resulting carbon catalyst exhibited good performance in both ORR and EDLC measurements. This approach opens new opportunities for designing porous materials with guaranteed interconnected pores, which could be useful in a variety of applications.
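The specific capacitance quoted above (321 F g−1 at 5 mV s−1) is the kind of figure normally obtained by integrating the cyclic voltammogram and normalising by the potential window, scan rate and electrode mass. A minimal sketch in Python of that standard calculation, offered only as an illustration; the function name, arrays and example mass are assumptions, not data or code from this work:

import numpy as np

def specific_capacitance_from_cv(time_s, current_A, potential_V, mass_g):
    """Gravimetric capacitance from one full CV cycle:
    C = Q_total / (2 * delta_V * m), where Q_total is the integral of |i| dt over the
    cycle (equivalent to integrating |i| dV and dividing by the scan rate)."""
    q_total = np.trapz(np.abs(current_A), time_s)     # total anodic + cathodic charge, coulombs
    delta_v = potential_V.max() - potential_V.min()   # potential window, volts
    return q_total / (2.0 * delta_v * mass_g)         # farads per gram of active material

# Illustrative call with hypothetical arrays from a three-electrode measurement at 5 mV/s:
# c_sp = specific_capacitance_from_cv(t, i, e, mass_g=0.002)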
The rational design and preparation of hierarchically porous carbons feature high on the wish list of academia and industry alike.However, creating interconnected pores of distinct dimensions is no easy task.Starting from the precursor design, we present a novel synthesis strategy of porous carbons that much enhances the pore interconnectivity.While embedded in the carbon matrix, these pyrolysis-generated templates undergo an additional phase transformation at the sequential 900 °C pyrolysis, exsolving well-dispersed smaller Fe nanoparticles, typically sizing 5–45 nm, on the MgO surface.This offers a contiguous network of dual templates for meso- and macropores.A simple acid washing yields a hierarchically porous, N-doped carbon with a high specific surface area of 1560 m2 g−1 and a high mesopore volume of 1.9 cm3 g−1.This carbon exhibits a half-wave potential of 0.77 V vs. RHE in the oxygen reduction reaction at pH 13.Besides, it also renders a specific capacitance of 321 F g−1 at 5 mV s−1 during the capacitor measurement.
the complexity and uncertainties in a practical way, without ignoring vital elements of balancing markets and their functions. We suggest the following design principles to address this challenge: (a) reach agreement on key balancing market criteria and variables; (b) start from power system and market conditions; (c) consider future power system and market developments; (d) strive for appropriate incentives to market participants; and (e) reduce uncertainties through in-depth analysis and monitoring of performance. Policy makers should start with creating a common understanding of the balancing market design problem, and narrow the scope of the design task through principle a. Principles b and c will further limit the scope by ruling out design options that are incompatible with current or future conditions, and identifying the options that provide appropriate incentives to BRPs and BSPs under these conditions. ‘Appropriate incentives’ are those that will trigger market behaviour that leads to efficient and effective balance management.17 To find out which designs provide such incentives, empirical analysis of current balancing markets and simulations are useful. Our framework can support the balancing market design process by facilitating a systematic approach in which all design options and performance criteria are consciously considered. Therefore, the framework may be most useful at the start of the policy-making process, when prioritising the design variables and performance criteria. However, because all variables and criteria play a role in the balancing market, we recommend using the framework as a reference throughout the design process. Furthermore, the framework can help structure further research on balancing market design. In the unbundled electricity markets in Europe, the balancing market is the institutional arrangement required to maintain the balance between electricity demand and supply. Several design variables and performance criteria play a role. The framework presented in Section 3 takes these into account to provide a systematic and structured approach to the design and analysis of balancing markets. In view of the wide variety of design options, the trade-offs among performance criteria, the uncertainties about the effects of design options, and the inter-linkages between the balancing market and the overall electricity market, policy makers face a substantial design challenge. This challenge can be met by addressing key criteria and variables, by considering system and market conditions and expected future developments, and by identifying the design options that provide appropriate incentives to market participants given all of these considerations.
In the unbundled national electricity markets in Europe, the balancing market is the institutional arrangement that deals with the balancing of electricity demand and supply.This paper presents a framework for policy makers that identifies the relevant design variables and performance criteria that play a role in the design and analysis of European balancing markets.We outline the full extent of the design challenge through a discussion of trade-offs among performance criteria, uncertain effects of design variables, and the many inter-linkages between the balancing market and the electricity market at large.Policy makers can address the balancing market design challenge by adopting a structured approach in which design variables, performance criteria, market conditions, system developments, and resultant market incentives are explicitly considered.
The data displayed here represent the outcome of micropurification steps and visualization techniques used for purifying T. b. brucei target proteins of artemisinin, which were previously detected at 60, 40 and 39 kDa by immunoblotting .The polyacrylamide gel and immunoblotting film presented here below reflect, on the one hand, the successful isolation of the 60-kDa protein band but, on the other hand, the difficulty to isolate both low-abundance proteins at 40 and 39-kDa.It should be noted that a two-dimensional SDS-PAGE for further purity assessment of the 60-kDa target protein band was not yet performed.In an Eppendorf tube, 100 μM of probe 5 that was previously synthesized as described in was inoculated directly into the parasite lysate and the whole preparation was incubated at 37 °C in a 5% CO2 atmosphere incubator for 5 min.3,Then, streptavidin-tagged resins were added to immobilize labeled proteins during an overnight rotation.The supernatant was removed and the resins were washed twice with trypanosome lysis buffer, and then Laemmli׳s sample loading buffer was added and mixed well with the resins by pipetting.The labeled proteins were unbound from the resins in the Laemmli׳s sample loading buffer with a heat treatment.The protein samples were separated using sodium dodecyl sulfate-polyacrylamide gel electrophoresis.Next, the electrophoresis gel was reverse stained as described in Section 1.3.The on-gel detected protein bands were excised, transferred in an Eppendorf tube, destained, and crushed in the Laemmli׳s SDS-PAGE running buffer.After elution by vigorous agitation, the filtered protein samples were concentrated by ultracentrifugation as described in Section 1.5, and then subjected to SDS-PAGE in duplicate.The first gel underwent usual Western blotting procedures and was visualized by enhanced chemiluminescence, whereas the second gel was reverse stained, thereby allowing on-gel detection.The reverse staining is a protein detection method using imidazole and zinc salts in electrophoresis gels.The principle of the method consists in selectively precipitating a white opaque imidazole–zinc complex in the electrophoresis gel except in the zones where protein bands are located, which zones remain transparent.As a procedure, the pre-treatment solution was poured in a plastic tray.The polyacrylamide gel was immersed into the pre-treatment solution and the tray was shaken smoothly for 5 min.The gel was removed from the pre-treatment solution and immersed into 100 mL of fresh ddH2O in a separate plastic tray, which was shaken smoothly for 30 s. Next, the gel was immersed into the Staining solution R-1 in 50 mL ddH2O) in a separate plastic tray that was shaken for 15 min.The gel was removed from the Staining solution R-1 and immersed into 100 mL of fresh ddH2O in a separate tray, which was shaken smoothly for 30 s. 
Later, the gel was immersed into the Development solution R-2 in 50 mL ddH2O) in a separate plastic tray that was shaken for 1–3 min until protein bands were visualized.The gel was washed in 100 mL of fresh ddH2O for 2 min.The water was discarded and the gel was washed a second time with fresh ddH2O for 5 min.The reverse-stained gel was placed on a plastic wrap over a dark-colored background and the on-gel detected protein bands were excised with a sterile scalpel, and then transferred in an Eppendorf tube.Laemmli׳s SDS-PAGE running buffer was added and the Eppendorf tube was shaken gently for 10 min until destaining occurred.The supernatant was removed, 500 μL of Laemmli׳s SDS-PAGE running buffer was added again, and the Eppendorf tube was shaken gently for 10 min once more.The supernatant was discarded and 100 μL of Laemmli׳s SDS-PAGE running buffer was added.The immersed electrophoresis gel was manually crushed into tiny pieces with a clean spatula.Then, 100 μL of Laemmli׳s SDS-PAGE running buffer was added for achieving a final volume of 200 μL.The whole suspension was shaken vigorously for 1 h, transferred into a centrifugal filtration tube, and then centrifuged at 14,000g for 10 min at room temperature.The filtrate solution was stored at 4 °C.The protein filtrate was transferred into a molecular weight-filter tube.The volume was adjusted up to 500 μL with Laemmli׳s SDS-PAGE running buffer and the whole preparation was centrifuged at 14,000g at room temperature for 40 min.The retentate of ca. 10 μL was recovered from the retention membrane as the concentrated protein suspension.The sample reservoir was placed upside down in a new vial and centrifuged for 3 min at 1000g for transferring protein retentate to the vial.Finally, the concentrated protein suspension was analyzed by SDS-PAGE or by Western blotting.The encouraging results in prompted us to determine purity of the candidate target proteins of artemisinin by the reverse-staining method .As a procedure, we incubated 100 μM of probe 5 directly into the parasite lysate for 5 min.3,Labeled proteins were immobilized by streptavidin-tagged resins and subsequently released in Laemmli׳s sample buffer.Following SDS-PAGE of the protein samples, the electrophoresis gel was reverse stained using a reverse-staining kit.The on-gel detected protein bands were excised, destained and crushed in Laemmli׳s SDS-PAGE running buffer.After elution by vigorous agitation, the filtered protein samples were concentrated by ultracentrifugation, and then subjected to SDS-PAGE in duplicate.The first gel underwent usual Western blotting procedures and was visualized by enhanced chemiluminescence.As a result, the molecular size of the isolated single band in lane 3 corresponded effectively to the ca. 60-kDa band in the control lane 1, while both low-abundance low molecular-sized candidate target proteins were almost undetected in lane 2.Next, the second gel was reverse stained, thereby allowing on-gel detection.As a result, a single band of the ca. 60-kDa candidate target protein
Here we describe the isolation and purity determination of Trypanosoma brucei (T. b.) brucei candidate target proteins of artemisinin. The candidate target proteins were detected and purified from their biological source (T. b. brucei lysate) using the diazirine-free biotinylated probe 5 for an affinity binding to a streptavidin-tagged resin and, subsequently, the labeled target proteins were purified by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). We herein showed the electrophoresis gel and the immunoblotting film containing the 60-kDa trypanosomal candidate target protein of artemisinin as a single band, which was visualized on-gel by the reverse-staining method and on a Western blotting film by enhanced chemiluminescence. The data provided in this article are related to the original research article "Biotinylated probes of artemisinin with labeling affinity toward Trypanosoma brucei brucei target proteins", by Konziase (Anal. Biochem., vol. 482, 2015, pp. 25–31, http://dx.doi.org/10.1016/j.ab.2015.04.020).
exercise is being run alongside the Explorer Grants program. For all three streams, it is too early to tell whether they have had the kinds of effects that the academic arguments for lottery suggest, but preliminary results are expected in the next few years; more comprehensive comparisons, however, may require much longer, as the value of some research projects is only revealed long after publication, as in the case of so-called ‘sleeping beauty’ papers. How did we get from the world of 1998, which had one Lancet article on funding by lottery, to the world of 2017, with three funding streams implementing that policy? The full story will need to be uncovered by historians or sociologists of science policy, but it is helpful to track the way these arguments have related to each other in the literature. While concerns about various shortcomings of grant peer review have existed for decades, there has been little systematic evaluation of the process. Given this background, integrative interdisciplinary reviews of alternatives to peer review have had a significant role to play in shaping alternative policies. Ioannidis, published in the interdisciplinary journal Nature, presents a harsh criticism of current funding practices and surveys a range of possible measures for improvement, including funding by lottery. Guthrie, Guerin, Wu, Ismail, and Wooding present a comprehensive policy-oriented survey of the literature on alternatives to grant peer review. The report was produced by RAND Europe, a think tank often contracted by governments and large funders, with a long history of informing policy decisions. In the section on funding by lottery, the report references Ioannidis's review, as well as Graves et al.'s results and Brezis's model. This RAND Europe report was cited by a New Zealand government review of the Health Research Council's Explorer Grants program. The full citation graph for the works discussed in this paper is presented in Fig.
2.14,Without drawing too many conclusions from a very limited sample size, it seems that philosophy of science could be doing better in engaging the audiences that are relevant for increasing the allocation of resources to help promote scientific novelty.The world in 2018 contains three research funding streams that allocate funds randomly amongst qualifying proposals.Though their total funding amount is negligible when compared to the global basic R&D funding budget, the near-total dominance of funding by peer review as the gold standard of science funding since World War Two marks these policies as surprising changes to the landscape.This surprise is also partly attributable to the non-intuitive idea of funding by lottery, despite arguments being put forward to attempt such policies starting two decades ago.Looking at all the different arguments for randomised funding as a whole, we can see a common structure emerging that emphasises the faults and biases in peer review as a failure to accurately measure future research quality, in particular when evaluating novel research.Given this argument, a novel mechanism is proposed – randomisation of selection post initial filtering – that either cuts superfluous investment in useless evaluation or eliminates harmful bias in selection.While disagreements remain between versions of the argument, and enough uncertainties remain to support different specific implementations, it seems justifiable to run these policies as trial versions to learn more about the outcomes.This is now happening, and provides an opportunity for engagement for social epistemologists, for philosophers interested in contributing to good policy, and for the intersection of these two groups.
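The selection mechanism discussed throughout, an initial quality filter followed by random selection among the qualifying proposals, is simple enough to state in a few lines. A minimal illustrative sketch in Python; the threshold, budget and data structure are hypothetical and are not taken from any of the three funding streams:

import random

def lottery_funding(proposals, quality_threshold, n_fundable):
    """Filter out proposals that fail a minimum quality bar, then fund a random
    subset of the qualifiers instead of ranking them any further."""
    qualifiers = [p for p in proposals if p["peer_review_score"] >= quality_threshold]
    random.shuffle(qualifiers)          # every qualifying proposal gets an equal chance
    return qualifiers[:n_fundable]      # fund as many as the budget allows

# Hypothetical usage:
# funded = lottery_funding(submissions, quality_threshold=7.0, n_fundable=12)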
In 2013 the Health Research Council of New Zealand began a stream of funding titled ‘Explorer Grants’, and in 2017 changes were introduced to the funding mechanisms of the Volkswagen Foundation ‘Experiment!’ and the New Zealand Science for Technological Innovation challenge ‘Seed Projects’.All three funding streams aim at encouraging novel scientific ideas, and all now employ random selection by lottery as part of the grant selection process.The idea of funding science by lottery emerged independently in several corners of academia, including in philosophy of science.This paper reviews the conceptual and institutional landscape in which this policy proposal emerged, how different academic fields presented and supported arguments for the proposal, and how these have been reflected (or not) in actual policy.The paper presents an analytical synthesis of the arguments presented to date, notes how they support each other and shape policy recommendations in various ways, and where competing arguments highlight the need for further analysis or more data.In addition, it provides lessons for how philosophers of science can engage in shaping science policy, and in particular, highlights the importance of mixing complementary expertise: it takes a (conceptually diverse) village to raise (good) policy.
distribution and average inter-pore distance.A discrete model for diffusive transport through porous media has been proposed with the ambition to provide a bridge between the short length scale of individual pores and the long length scale of interest to engineering applications.The methodology has been developed subject to two constraints: limited experimental information about the pore space; and limited size of the modelled pore space.The former has been tackled by selection of local diffusivities as functions of pore sizes and total porosity alone.The need for connectivity data which can be unobtainable for micro- and meso-porous materials is thus avoided.The latter has been tackled by the introduction of boundary film coefficients that allow us to prescribe a far-field concentration by hypothetical repetition of the modelled pore space.In addition the idea has been extended to account for the adsorption of the diffusing species and study its effect on the long-term diffusivity of the system.The model has been applied to bentonite using a reported experimental pore size distribution.With a combination of film boundaries and sorption kinetics the model has been shown to give macroscopic diffusivity in very close agreement with reported experimental values for radioactive contaminant with small molecular radius.Whilst the agreement for this macroscopic parameter is promising, the performance of the model requires validation at a “meso-scale”, with measurements involving diffusion through several tens to hundreds of inter-pore distances.From the results presented for the invasion of the small contaminant with time, it has been concluded that diffusion experiments with tracer of comparable size would not be able to provide validation data.Therefore results for larger diffusing species, hypothetical colloidal molecules, have also been presented.Through these, the strong effect of the pore space connectivity on the system diffusivity has been demonstrated.Further, it has been concluded that diffusion experiments with a tracer of similar size could be used for the required advanced model validation at the meso-scale.Although this work has used a particular material to illustrate the methodology and its application, the model is not limited to bentonite.The proposed approach can be applied to any other micro- and/or meso-porous material after analysis of the pore size distribution and total porosity.Further, the model offers clear separation between effective diffusivity, dependent only on the pore space structure, and apparent diffusivity, resulting from local non-linear effects.This can be used in conjunction with experiments to investigate internal sorption by appropriate selection of non-adsorbing and adsorbing tracers.The approach is not limited to sorption effects and could be used for a range of other pore space changing mechanisms.For example, our immediate plans are to study the effect of micro-cracking due to thermo-mechanical loads on the diffusivity of the medium with application to long-term behaviour of bentonite in deep repository for nuclear wastes.Such a coupled problem is of significant interest to the oil and gas industries, particularly when hydraulic fracturing is used to enhance hydrocarbon recovery.In this application, the evolution of permeability rather than diffusivity of the medium is required, but the methodology developed in this work would not change.The most promising area for the application of our approach to such problems is the shale gas extraction.Further, 
the model can be extended to investigate possible use of different porous media for uranium separation from waste as well as for analysis of multi-component systems.Analyses with colloidal-porous media systems could be of interest to the pharmaceutical industry.An important element of future model development is the better understanding and the more realistic description of sorption strength.It is in the plans of the authors to use atomic scale modelling techniques for the analysis of sorption in order to be able to derive efficiently sorption isotherms and improve the predictive capabilities of the model.
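The transport problem described above, diffusion with locally varying diffusivity, a far-field concentration imposed through a boundary film coefficient, and sorption onto the pore walls, can be illustrated with a one-dimensional finite-difference toy problem. The sketch below in Python is only a schematic analogue of the approach, not the authors' discrete pore-space model; every parameter value, and the simple first-order sorption sink, is an assumption made for illustration:

import numpy as np

# Illustrative parameters (assumed, not taken from the paper)
n = 200                                        # number of grid cells
length = 1e-3                                  # domain length, m
dx = length / n
d_local = 1e-10 * (0.5 + np.random.rand(n))    # locally varying diffusivity, m^2/s
h_film = 1e-6                                  # boundary film coefficient, m/s
c_far = 1.0                                    # far-field concentration (arbitrary units)
k_sorb = 1e-4                                  # first-order sorption rate, 1/s

dt = 0.4 * dx**2 / d_local.max()               # time step within the explicit-scheme stability limit
c = np.zeros(n)                                # initially no contaminant in the domain

def step(c):
    d_face = 0.5 * (d_local[:-1] + d_local[1:])    # diffusivity at internal cell faces
    flux = -d_face * np.diff(c) / dx               # internal diffusive fluxes (positive in +x)
    flux_in = h_film * (c_far - c[0])              # film boundary: exchange with far-field reservoir
    div = np.empty(n)
    div[0] = (flux_in - flux[0]) / dx
    div[1:-1] = (flux[:-1] - flux[1:]) / dx
    div[-1] = flux[-1] / dx                        # zero-flux condition at the far boundary
    return c + dt * (div - k_sorb * c)             # diffusion plus sorption sink

for _ in range(100000):                            # march forward in time
    c = step(c)

In the full model the same ingredients act on the discrete representation of the pore space, with local diffusivities assigned from the pore size distribution and total porosity, and with sorption kinetics in place of the single rate constant used here.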
A meso-scale model for diffusion of foreign species through porous media is proposed. The model considers diffusion as a continuous process operating on a discrete geometrical structure dictated by the pore size distribution. Local diffusivities, and hence pore space connectivity, are dependent on the size of the diffusing species and the sorption of those species onto the pore walls. The bulk diffusivity of the medium has been analysed to consider the effects of pore structure alone and in combination with sorption. The chosen medium is bentonite, which is being considered for use as a barrier to radionuclide transport in future deep geological repositories for nuclear waste. Results for transient diffusion of a U(VI) complex through bentonite are presented and very good agreement with experiments is demonstrated. Results for diffusion of larger chemical complexes are also presented to illustrate the effect of reduced pore space connectivity on steady-state and transient transport parameters. Diffusion of larger complexes can be used for experimental validation of the model. The proposed methodology can be used for any micro- and meso-porous material with a known distribution of pore sizes. It can be extended to other pore-space-changing mechanisms, in addition to sorption, to derive mechanism-based evolution laws for the transport parameters of porous media.
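To make the discrete methodology described above concrete, the following sketch sets up a toy pore network and extracts an effective diffusivity. This is not the authors' implementation: the regular square lattice, the lognormal pore-size distribution, and the rule assigning each bond a local diffusivity limited by the narrower pore are illustrative assumptions, as are all parameter values.

```python
import numpy as np

# Toy pore-network sketch (illustrative assumptions, not the authors' model).
# Pores sit on a regular N x N lattice; each pore is given a radius drawn
# from a lognormal distribution, and each bond between neighbouring pores a
# local diffusivity limited by the narrower of the two pores. Concentration
# is fixed on the left and right boundary columns and the steady-state flux
# gives an effective diffusivity for the network.

rng = np.random.default_rng(0)
N = 30                                   # pores per side (assumed)
radii = rng.lognormal(mean=0.0, sigma=0.5, size=(N, N))  # arbitrary units
D0 = 1.0                                 # reference diffusivity (arbitrary)

def node(i, j):
    return i * N + j

def bond_diffusivity(r1, r2):
    # Assumed local rule: transport limited by the narrower pore.
    return D0 * min(r1, r2)

A = np.zeros((N * N, N * N))
b = np.zeros(N * N)
c_left, c_right = 1.0, 0.0               # imposed far-field concentrations

for i in range(N):
    for j in range(N):
        k = node(i, j)
        if j == 0:                        # Dirichlet boundary (left)
            A[k, k], b[k] = 1.0, c_left
            continue
        if j == N - 1:                    # Dirichlet boundary (right)
            A[k, k], b[k] = 1.0, c_right
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < N and 0 <= jj < N:
                g = bond_diffusivity(radii[i, j], radii[ii, jj])
                A[k, k] += g              # discrete steady-state balance
                A[k, node(ii, jj)] -= g

c = np.linalg.solve(A, b)

# Net flux through the bonds feeding the right boundary column.
flux = sum(bond_diffusivity(radii[i, N - 2], radii[i, N - 1]) *
           (c[node(i, N - 2)] - c[node(i, N - 1)]) for i in range(N))
D_eff = flux * (N - 1) / (N * (c_left - c_right))
print(f"effective diffusivity ~ {D_eff:.3f} x D0")
```

Sorption kinetics or boundary film coefficients could be layered onto the same linear system, but are omitted here for brevity.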
for the observation of novel-syllable generalisation is that the part-word items do not occur in the speech, though for the moved-syllable generalisation task the part-word items did occur, and it is therefore harder to reject them.This is a possibility, but does not affect the overall result that generalisations depending on non-adjacencies are driving the results in the novel-syllable generalisation task.In either case, there was no significant effect of the type of part-word, indicating that a part-word containing a syllable pair that occurred during training was no harder to reject than a part-word containing no syllable pairs that occurred during training, making this an unlikely cause for the distinction between the two generalisation conditions.The current study demonstrates that segmentation and generalisation are not separable behaviourally within the same time period, where such a differentiation had previously been claimed.Thus, evidence suggesting that there was a distinction between processes for word identification and grammatical processing is shown to not be supported.There remains a possibility that the tasks are solved simultaneously but in different ways, such that non-adjacencies are utilised for segmentation using statistical learning, but that operations over the structure are still symbolic, applying to abstract generalisations of the relations between elements.We suggest that, in the absence of evidence to the contrary, the same class of mechanisms – statistical learning – should be assumed to be sufficient for driving word learning as well as structural generalisation.Previous claims of the need for symbolic, algebraic processing for generalisation of sequences rely on a narrow interpretation of statistical mechanisms that permit only computation of dependencies between elements in experienced stimuli.However, statistical processing is consistent with learning to generalise, as well as learning precise co-occurrences between experienced elements in sequences.The traditions of symbolic and statistical processing in cognitive psychology, and language acquisition research, have undergone substantial convergence.For instance, Redington, Chater, and Finch demonstrated how statistical processing of clustering can support generalisations as well as learning individual correspondences in grammatical structure.Similarly, French, Addyman, and Mareschal showed how the same statistical learning mechanism could apply to both speech segmentation studies and studies of implicit learning of rule-based sequences.Our results confirm that such results also occur behaviourally.The breadth of possible statistical processes that can support speech segmentation, grammatical categorisation, and syntactic processing reduces the requirement to stipulate that language acquisition processes may be domain-specific, rather than applications of powerful general-purpose learning mechanisms.However, exactly what statistical mechanism is being applied remains a difficult issue to resolve.In the current experiment, the distinction between word identification and grammatical processing can be understood in terms of learning dependencies between experienced sequences and learning to generalise dependencies to new sequences.However, scaling this distinction up to the dependencies observed in natural language requires explaining how long-distance dependencies between hierarchical structures may be acquired.Nevertheless, the study we present here demonstrates that, from the same input and at the same 
time, participants are able to identify particular sequences as words, and generalise the structure of those sequences.Any qualitative distinctions between the processes involved in these tasks as yet remain to be demonstrated.
Language learning requires mastering multiple tasks, including segmenting speech to identify words, and learning the syntactic role of these words within sentences. A key question in language acquisition research is the extent to which these tasks are sequential or simultaneous, and consequently whether they may be driven by distinct or similar computations. We explored a classic artificial language learning paradigm, where the language structure is defined in terms of non-adjacent dependencies. We show that participants are able to use the same statistical information at the same time to segment continuous speech, both to identify words and to generalise over the structure, when the generalisations were over novel speech that the participants had not previously experienced. We suggest that, in the absence of evidence to the contrary, the most economical explanation for the effects is that speech segmentation and grammatical generalisation are dependent on similar statistical processing mechanisms.
should be a primary goal for future research in the area of cross-cultural neuropsychology.In addition, the current sample was recruited from a single geographic region, and the Hispanic participants were predominantly from a Mexican American background.One specific limitation of the five-factor model pertains to the breadth of the visuospatial and motor factors.The visuospatial factor is indicated by two test scores, CLOX1 and CLOX2, while the motor factor is indicated by four test scores, dominant and nondominant hand Grip Strength, with two trial scores per hand.As such, the model provides a narrowly focused ability estimate for these two cognitive domains, which may be undesirable in some assessment contexts.Few neuropsychological tests have been designed a priori to provide unbiased estimates of cognitive functioning across diverse groups, and few existing test batteries have been subjected to post hoc validation for this purpose .Without confirmation that a test is essentially free from ethnic, racial, or linguistic bias, it is difficult to determine the relative contributions of true cognitive differences versus systematic error variance when interpreting observed test score differences.The present study therefore makes an important contribution to the literature by providing clinicians and researchers with another tool for generating valid cognitive outcome measures across two important dimensions of diversity.Systematic review: We performed a literature search in PubMed and Scopus to identify articles that have investigated the factor structure of neuropsychological test batteries in older adults, and for studies that subjected these models to measurement invariance testing or similar methodological approaches.Despite the growing ethnic and linguistic diversity in the United States, few neuropsychological tests used in dementia assessment are able to provide equally valid estimates of cognitive functioning across diverse groups.Interpretation: The present study identified a set of 19 cognitive test variables that together provide an essentially unbiased estimate of five cognitive domains in Mexican Americans and non-Hispanic whites regardless of whether the tests were administered in Spanish or English.Future directions: The current results support the use of the five-factor model reported here in future research seeking to investigate cognitive functioning in populations containing Mexican American and Spanish-speaking older adults.
Introduction: The present study sought to investigate the measurement invariance of commonly used neuropsychological tests in an ethnically (Hispanic vs. non-Hispanic) and linguistically (Spanish vs. English) diverse sample. Methods: Participants were 736 middle-aged and older adults (MAge = 62.1, SD = 9.1) assessed at baseline. Measurement invariance testing was performed using multiple-group confirmatory factor analysis. Results: A five-factor model (memory, attention/executive functioning/processing speed, language, visuospatial, and motor) fit the data well (CFI = 0.979, RMSEA = 0.047), and the composite reliability of the factors ranged from .76 (visuospatial) to .97 (motor). The five-factor model was found to possess strict measurement invariance for ethnicity and language without a decrement in fit compared to a strong (scalar) invariance model (ΔCFI = .000, ΔRMSEA = .002). Discussion: These results indicate that a five-factor model is suitable for estimating cognitive functioning in Mexican Americans and non-Hispanic whites without bias by ethnicity or language.
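For readers less familiar with the composite reliability values quoted above, the standard formula computed from standardized factor loadings and residual variances is easy to reproduce. The sketch below uses hypothetical loadings for a four-indicator factor; these are not values reported in the study.

```python
def composite_reliability(loadings, residual_variances):
    """Composite reliability = (sum of loadings)^2 /
    ((sum of loadings)^2 + sum of residual variances)."""
    s = sum(loadings)
    return s * s / (s * s + sum(residual_variances))

# Hypothetical standardized loadings for a four-indicator factor
# (illustrative only; not values reported in the study).
loadings = [0.85, 0.80, 0.78, 0.72]
residuals = [1 - l ** 2 for l in loadings]  # assumes standardized indicators
print(round(composite_reliability(loadings, residuals), 2))  # ~0.87
```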
some West African and East African countries suffer losses in horticultural and cereals production, which are typically high-value-added cash crops for these countries. In recognition of soil degradation, a number of soil conservation measures are already implemented at regional scale and in many countries.12 For example, in Kenya, small-scale conservation tillage and terraces are used to improve water storage capacity and crop land productivity. In Ethiopia, degraded land areas have been enclosed from human and animal use and enhanced by additional vegetative and structural conservation measures, to permit natural rehabilitation. Furthermore, compared with the CGE study of Panagos et al., these results present a markedly different picture for the EU since, unlike their study, which only examines erosion in the EU, the current scenario design models simultaneous erosion effects throughout the globe. With its relatively milder erosion rates, the EU is now in a relatively more favourable production and trade position, which contrasts sharply with the negative EU production impacts reported in Panagos et al. Drilling down into the results, one also observes that even with an erosion shock corresponding to a single year, there are noticeable global shifts in agricultural production in China, India and Brazil. These changes are particularly prevalent in the production of rice, which decreases by almost 0.5% globally. Indeed, our study reveals that falling land productivity, particularly for rice production, is a major driver of increased water abstraction in Asia. From a trade perspective, the heterogeneous rates of erosion across the planet give rise to accelerating current trends where net agri-food exporters such as the USA, Canada, Europe and Oceanian countries continue to improve their net trade balances at the cost of net food importers such as China and South East Asian countries. These effects call for the prioritization of soil governance and conservation strategy in all countries and on the international policy agenda. In this regard, the European Commission launched the Seventh Environment Action Programme, which requires that by 2020 land is managed sustainably and soil is adequately protected. Focusing on agricultural land, the EU's Common Agricultural Policy links support directly to the need to maintain agricultural land in good condition, whilst the post-2020 CAP includes, as one of its main objectives, efficient soil management linked to actions to reduce soil erosion and increase soil organic carbon. In the USA, the Farm Bill extends soil conservation compliance requirements in order to qualify for the crop insurance subsidy. At the global scale, the FAO and its Global Soil Partnership launched in June 2018 a new programme to reduce soil degradation for greater food and nutrition security in Africa. Other countries are implementing local measures, yet a global multilateral environmental agreement on soil protection is missing. Measures aimed at reinforcing ecosystem services, ad hoc regulation of human interventions and active farmers' participation contribute to minimizing soil erosion. To this end, protection and restoration of diverse plant communities on slopes are essential, as trees and diversified vegetation increase soil resistance to rain erosivity. Other measures such as reduced tillage, buffer strips, agroforestry, plant residues and cover crops enhance soil fertility and control water runoff. As in all modelling endeavours, there are caveats to the study. Firstly, as discussed in Section 2, there is
uncertainty surrounding the soil erosion estimates from the global biophysical model and the assumption that land productivity losses occur only in severely eroded land. Secondly, the assumption of average crop productivity losses due to soil erosion is based on a literature review, but in reality these losses can vary from region to region. Further, physical and economic models typically work at different temporal and spatial scales. The need to interface RUSLE with MAGNET implies that the site-specific soil erosion data have to be adapted to the larger spatial scale of the CGE model. Finally, whilst the economic framework provides some insights into the biophysical implications of soil erosion, a fuller treatment of the off-site costs such as destruction of infrastructure, sedimentation, flooding, biodiversity and soil carbon losses, landslides, and water eutrophication, whilst requiring further research, is beyond the scope of this paper. Connected to this last point, future analysis could therefore seek to broaden the list of indicators beyond recognised metrics such as prices, production, trade and GDP, where the latter has been criticised as a misleading measure of success or failure. Indeed, in the context of soil erosion, a broader set of indicators is very much inspired by the realisation of the Sustainable Development Goals, particularly SDG 15, which targets indicators relating to land degradation and protection of ecosystems. Extending the analysis of soil erosion to encapsulate these cost concepts is likely to reveal even greater costs than the loss of crop productivity. The views expressed are purely those of the authors and may not in any circumstances be regarded as stating an official position of the European Commission.
As pressure grows to use more marginal land, abstracted water volumes are driven upwards by an estimated 48 billion cubic meters.
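For context on the reference to interfacing RUSLE with the CGE model, the standard form of the Revised Universal Soil Loss Equation can be written as follows; the specific global factor layers used in the biophysical model are not reproduced here.

```latex
% Revised Universal Soil Loss Equation (standard form):
%   A  : mean annual soil loss,   R : rainfall erosivity,
%   K  : soil erodibility,        LS: slope length and steepness,
%   C  : cover management,        P : support practices.
A = R \cdot K \cdot LS \cdot C \cdot P
```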
to inform consumers about the use of the technology and expectations in terms of safe food handling. As it is important to accommodate different consumer groups in public policy messages, how to deliver such messages to different segments of consumers is equally important, as we found in this research. One way to deliver public health messages is via a trusted source. However, as we find in this study, not all consumers perceive institutions as equally trustworthy. Thus, it is important to involve different institutions and individuals in the food chain in communication strategies relating to nanotechnologies. Doing so means that public policy messaging will be effective in reaching different consumer groups. Indeed, although our results reveal that the majority of the participants trust the government institutions the most, there is a minority group who find university scientists, NGOs, farmers, butchers, and friends and family more trustworthy compared to government institutions. Results from this study signal that communication strategies targeting different consumer groups are likely to be more effective when the information is delivered via the sources consumers regard as the most trustworthy. Such targeted approaches are expected to increase awareness and decrease ambiguity among different consumer groups about the technology. This might then lead to better informed choices and safer practices regarding the new technology and, ultimately, could influence the acceptance of the technology among consumers. Targeted approaches and tailored communication strategies could also be useful in situations where information campaigns involve changing risky food handling behaviours, such as the “don’t wash raw chicken” campaign and ‘use by’ date campaigns in the UK.15 The ‘use by’ date campaign targeted high-risk groups to reduce the risk of food poisoning from Listeria. The Food Standards Agency worked with general practitioners, pharmacies and a range of community groups across the UK, especially in areas with large populations of older adults, to raise awareness of the risks of Listeria infection, the importance of the ‘use by’ date, and safe food storage conditions. Our findings can be utilised to design similar information campaigns regarding the use of the technology, such as smart packaging, and safe food handling practices among certain consumer groups. This paper investigates UK consumers’ trust in sixteen institutions that may provide information about the use of nanotechnology in food production and packaging. It aims to identify differences in consumers’ perceived trust and distinguish the degree of consistency in their choices. By doing so, it contributes to the empirical literature in two main ways: by investigating trust in a large number of institutions, some of which were overlooked in the previous literature, and by explaining how the perception of trust in sixteen institutions varies with individuals’ characteristics and choice variability or consistency. Using a latent class modelling approach, we identified three different consumer groups, each of which was composed of two subgroups in terms of the level of variation in their choices. The first consumer group made up the majority of the sample and were more likely to be younger, female and to have attained higher than a high school education. This group perceived government institutions and scientists to be the most likely to provide trustworthy information on nanotechnology. While Class 2 also considered government institutions and scientists as highly
trustworthy, they also deemed consumer organisations as equally trustworthy.This group was found to be comprised of consumers who acquired less than a high school education and aged between 30 and 60 years.The smallest consumer group, however, was observed to be more likely to be aged over 60 years and to place least trust in government institutions.Instead, they regarded university scientists, NGOs, farmers, butchers, media, and friends and family relatively more trustworthy.Our research also identifies areas for future research.As with all empirical trust studies, the results are a product of, and limited by, the institutions included, and those excluded.The number of institutions used in the survey design was bound by the need to make the choice tasks intuitive and cognitively manageable for the general public.While more tasks offer the prospect of more efficient trust estimates, this must be balanced with the increased risk of choice inconsistency due to increased task complexity, fatigue and associated imprecision in estimation.We recommend researchers to extend this line of research to investigate the underlying reasons for inconsistent choices.Another extension of this research should investigate whether individuals perceive an institution trustworthy due to the dimensions of trust relevant to the context, such as perceived competence and transparency.Despite these limitations and the need for further research, our findings provide insight into the development of best practices and policies in risk communication and management for novel foods produced by nanotechnologies and have policy implications.
This paper investigates UK consumers’ trust in sixteen information sources, from government institutions to food handlers and media, to provide accurate information about the use of nanotechnology in food production and packaging.The findings from this study provide insights into the development of best practices and policies in risk communication and management for novel foods produced by nanotechnologies.More specifically, they highlight how targeted approaches can be used by policymakers responsible for disseminating information relating to novel technologies.
Insect stings and sensitization to Hymenoptera venom are common in the general population.1–5,Although different methods used for assessing sensitization make it difficult to compare different populations, sting frequency and sensitization rates in warm, southern countries seem to be higher than in countries with a cooler climate3,4: Studies from Sweden3 and Denmark1 showed sensitization rates of 9% and 15% respectively, while a study from Turkey4 reported that 29% of the general population are sensitized.Similarly, individuals who spend a lot of time outside, such as those who hunt and fish, are prone to repeated stings and therefore have a very high risk of sensitization and venom allergy.6–8,However, there are few data on the prevalence of sensitization and allergy in these population groups.A previous study that determined the risk of systemic sting reactions in individuals with “hitherto irrelevant sensitization” suggested that “sensitization to Hymenoptera venoms is common, but systemic sting reactions are rare".2,Clinically irrelevant sensitization has been shown to be related to high total IgE levels9 while severe sting reactions are associated to high tryptase levels.10–13,However, discrimination between asymptomatic and potentially clinically relevant sensitization is still not possible without a history of previous reactions.In addition, there is no known parameter that allows the forecast of the severity and risk in regard to future allergic reactions to Hymenoptera venoms based on sensitization profiles.Immunotherapy is a highly effective treatment to prevent severe systemic reactions, but time-consuming and expensive.14–16,Therefore, it is only performed in patients with previous systemic allergic reactions, even though this treatment could also prevent systemic allergic reactions in individuals sensitized and prone to react.Optimized in vitro assessments of the risk for severe allergic reactions would be highly appreciated and useful for doctors as well as patients.Specific IgE to different single allergens, such as rApi m 1, rApi m 2, rApi m 3, rApi m 5 and rApi m 10, rVes v 1 and rVes v 5, have increasingly gained diagnostic importance in addition to the common honey bee venom and wasp venom extracts.17,Recombinant allergens show diagnostic advantages in sensitivity and specificity, especially concerning the problem of cross-reactivity.17–19,However, to date, the sensitization profile to different recombinant allergens cannot be used to forecast the risk of a sensitized individual.In contrast, different sensitization profiles in peanut or peach allergy are associated with different forms of allergic reactions.Studies in southern Europe showed that patients with peach allergy showing high sIgE-levels to Pru p 3 are more likely to show systemic reactions.20,21,For peanut allergy, elevated sIgE to Ara h 1, 2 and 3 have been found to increase the risk for severe allergic reactions.22,23,These findings permit a better risk assessment for patients suffering from peanut or peach allergy by their medical doctors.Analogous findings would be invaluable for Hymenoptera venom allergy.In our study, we aimed to investigate the prevalence of bee and wasp sensitization in a high-risk population for insect stings, to analyze the sensitization profile and to investigate potential correlations between symptoms and sensitization to different recombinant allergens.1.Conception and design of study: AZ, TB.2.Acquisition of data: AZ, BS, JW.3.Analysis and interpretation of data: AZ, BS, BE, KE, 
UD, TB. 4. Drafting manuscript: AZ, BS, JW. 5. Critically revising manuscript for important intellectual content: BE, KE, UD, KB, TB. 6. Final approval of the version to be published: AZ, BS, JW, BE, KE, UD, KB, TB. This cross-sectional study was performed at the annual winter meetings of three different hunting associations in December 2016 and at an annually held international exhibition for hunting and fishing in January 2017, all located in the Greater Munich Area, Southern Germany. The study was approved by the local ethics committee of the Medical Faculty of the Technical University of Munich, and written informed consent was obtained from all participants prior to study inclusion. All participants had to be individuals 18 years or older who actively hunted or fished. The exclusion criterion was the inability to understand the study information and/or the paper-based study questionnaire due to language barriers or other circumstances. Subjects included in the study first had to fill out a paper-based questionnaire before blood samples were obtained and frozen for later in vitro tests. Completion of the paper-based questionnaire was a prerequisite for the blood withdrawal. Apart from general data and profession/hobby, the paper-based questionnaire included questions on known allergies, the number of previous insect stings by Hymenoptera, and any experienced local or generalized reactions after Hymenoptera stings. Only the total number of stings by any Hymenoptera was counted, not distinguishing between species, to minimize memory bias. In this European study, reactions to insect stings were assessed with questions on local reactions and questions according to the European Grading of Anaphylactic Symptoms According to Severity of Symptoms,24,25 allowing a classification from anaphylaxis grade 1 to grade 4, with symptoms recorded as local reactions (erythema, pain, edema) or as dermal, abdominal, respiratory, or cardiovascular systemic symptoms. We used the ImmunoCAP system according to the instructions of the manufacturer in order to determine total and specific IgE. Allergen-specific IgE positivity was determined using both a cutoff of ≥0.35 kU/l, which is typically used in clinical allergy, and a cutoff of ≥0.1 kU/l, which is applied by the manufacturer of the ImmunoCAP assay and which has been proven to provide a higher sensitivity.26 Total IgE levels were classified into 3 groups. Serum tryptase was measured with an ImmunoCAP Tryptase assay, for which 11.4 μg/l has been adopted as the upper reference level in previous studies.27 Blood samples were obtained and stored frozen until tested. After thawing, all sera were tested for tryptase, total IgE and specific IgE to honey bee venom and the corresponding recombinant major allergens rApi m 1, rApi m 2,
Methods: In this cross-sectional study, paper-based, self-filled questionnaires about previous insect stings and sting reactions were obtained from individuals who hunt and fish in Bavaria, Germany. Blood samples were taken and analyzed for the levels of tryptase, total IgE, and IgE to honey bee (i1) and wasp (i3) venom, the recombinant allergens rApi m 1, rApi m 2, rApi m 3, rApi m 5, rApi m 10, rVes v 1, rVes v 5, and the CCD marker molecule MUXF3.
that measuring sIgE for the combination of all recombinant allergens offers a comparable or even higher sensitivity than exclusively measuring sIgE for the whole venom extracts.This is reinforced by the fact that 3.5% of our participants showed a sensitization above 0.35 kU/l for at least 1 recombinant allergen, but not to the whole venom extracts.The aforementioned method should be considered by all medical professionals, especially in the evaluation of high-risk patients with suspected Hymenoptera venom allergy.Previous studies have already established increased sensitivity to recombinant allergens, especially for rVes v 5 and rVes v 1.35–37,Consequently, spiking whole wasp venom extracts with recombinant allergens was introduced and used here for sIgE detection with a substantially higher sensitivity.38,However, in our study, the sensitivity of spiked whole wasp venom extract was still lower than the sensitivity of Ves v 1 and Ves v 5 taken together.Michel et al39 reported that sIgE testing to recombinant allergens of Hymenoptera venom provides better sensitivity in patients with mastocytosis or with otherwise elevated tryptase levels.The correlation analysis showed that sIgE to almost all recombinant antigens intercorrelate.However, consistent with previous findings,40 sIgE to rVes v 5 did not correlate with sIgE to any honey bee venom antigens and therefore seems to be quite specific for true wasp venom sensitization.Intercorrelations of the sIgE to recombinant allergens of honey bee and wasp venom were stronger, respectively, in individuals with a history of systemic sting reactions, while the intercorrelation between sIgE to allergens of the distinct species did not increase.These findings reinforce the hypothesis that component resolved diagnostics is a valid tool to detect true sensitization and reduce the problem of cross-reactivity.19,41,However, in contrast to previous reports,42 we did not find a considerable correlation between sensitization to MUXF3 with the sensitization to bee venom.Respectively, 14.6% or 37.5% of the cases with a history of anaphylaxis did not show a sensitization to whole venom extracts or recombinant allergens.It is tempting to speculate, that these potentially “hidden sensitizations” could be detected with new recombinant allergens that are still to be developed.The difference of 14.6% and 37.5% depending on the selected threshold level of ≥0.1 kU/l or ≥0.35 kU/l proves the higher sensitivity of the lower threshold which allows the detection of more clinically relevant sensitizations.This advantage was already described by Fischer et al26 in a recent publication on type 1 sensitization to alpha-gal and we therefore implemented the 0.1 kU/l threshold in our study.Due to our high risk population, the results are not fully generalizable.However, the results allow us to conclude that Hymenoptera sensitization is frequent in individuals who hunt and fish and maybe as well in other high-risk groups for insect stings.Interestingly, sensitization to recombinant allergens is common as well, but we were unable to find a correlation between reaction severity and sensitization to any of the hitherto available recombinant allergens unlike Pru p 3 for peach allergy or Ara h 1–3 for peanut allergy.Therefore, to evaluate potential Hymenoptera allergies and distinguish these from vasovagal reactions after insect stings, a meticulous assessment of the number of previous stings as well as the exact clinical symptoms is still crucial.However, our results reinforce the 
advantages of CRD in terms of sensitivity for detecting Hymenoptera venom sensitization and of discriminatory power for distinguishing true double sensitization from cross-reactivity. The study was approved by the local ethics committee of the Medical Faculty of the Technical University of Munich under the reference number 405/15s. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. K. Brockow has received consultation fees from Phadia – Thermo Fisher. T. Biedermann has received consultation fees from ALK Abelló, Sanofi Regeneron, Novartis, Mylan, and Phadia – Thermo Fisher. All other authors declare that they have no competing interests. This study was funded by the Department of Dermatology and Allergy of the Technical University of Munich and in part financially supported by ALK-Abello Arzneimittel GmbH.
Background: Hymenoptera venom sensitization in highly exposed individuals frequently requires risk assessment for future severe sting reactions. In this study, we determined the prevalence of Hymenoptera venom sensitization in individuals who hunt and fish and analyzed possible correlations between the severity of sting reactions and the IgE sensitization profile. Odds ratios (ORs) for sensitization and anaphylaxis and Pearson's correlations for the different allergens were calculated. Results: Of 257 participants, 50.2% showed a sensitization to honey bee venom (i1), and 58.4% showed sensitization to wasp venom (i3). Anaphylaxis was reported in 18.7%, and a local sting reaction was reported in 18.3%. The highest sensitization rates were found for whole venom extracts; sensitization to any of the available recombinant allergens exceeded sIgE levels to honey bee venom (i1) in 28.5% and to wasp venom (i3) in 52.9% of participants. Participants with a history of more than 5 stings showed a higher risk for anaphylaxis. Conclusions: Sensitization to Hymenoptera venoms and their recombinant allergens is present in the majority of individuals who hunt and fish. Sensitization to distinct recombinant allergens does not necessarily affect the severity of sting reactions, including anaphylaxis. A meticulous medical history of the number of previous stings as well as systemic reactions remains essential.
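The odds ratios reported for sensitization and anaphylaxis follow the standard 2 × 2 table calculation; a minimal sketch with the Woolf confidence interval is shown below, using purely hypothetical counts rather than the study data.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and 95% CI (Woolf method) for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts for anaphylaxis vs. >5 previous stings (not study data).
print(odds_ratio(30, 90, 18, 119))
```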
hepatitis. Concurrently with investigating the activation and inhibitory potential of the NK cells in the varying HBV clinical phases, we chose to investigate the developmental phenotype of these cells using the specific differentiation-associated markers CD16, CD57, and NKG2C. During analysis of these differentiation-associated receptors, significant differences were observed only when comparing CD57 in the two HBeAg-positive phases, with a significant reduction of CD57 expression seen in HBV patients in the IA clinical phase relative to the IT phase. Additional analysis for all phenotypic markers was also performed on both the CD56bright and CD56dim NK cell subsets. An increase in activation receptor expression was observed in CD56bright NK cells of IC patients, specifically NKp30 and NKG2D, with respect to the earlier HBeAg-positive IT and IA phases. Collectively these results suggest an increased inhibitory potential and reduced activation potential in the NK cells of ENEG patients compared to both HBeAg-positive phases, and a reduction of differentiated CD57+ NK cells during the transition from the IT phase to the IA phase. Previous modular transcriptome analysis of the whole blood of HBV patients identified NK cell/cytotoxicity activities as a distinctive marker in the elevated ALT phases, IA and ENEG, when compared to the IT phase. To build on this study and to obtain a more detailed insight into the heterogeneity among chronic HBV patients, we examined, in detail, NK cells throughout the course of the natural history of chronic HBV infection, specifically their potential causal role in the fluctuations of liver damage markers and HBV replication observed in these clinical phases. We demonstrate that the overall composition, phenotype, and cytolytic activity of the peripheral NK cell compartment remain relatively constant across all clinical phases, with the exception of a few specific markers; CD56bright NK cells of chronic HBV patients differ in their ability to produce IFN-γ between the clinical phases pre- and post-HBeAg seroconversion. The IA and ENEG clinical phases are characterized by elevated levels of ALT and liver damage. We hypothesized that this could partially be attributed to increased direct cytolytic activity and killing of infected hepatocytes by NK cells, but were surprised to find no measurable differences in the expression of the cytotoxic mediators perforin, granzyme, and TRAIL across any clinical phases. In line with this, overall similarities were observed by flow cytometry in the expression of NK cell activation and inhibitory receptors when comparing the inflammatory and non-inflammatory phases. Furthermore, the lack of differences observed in the total percentage of NK cells in the lymphocyte compartment, as well as in subset composition, suggests no increase in the proportion of NK cells or a phenotypic skewing toward the more cytolytic CD56dim subset. However, the significant changes observed in the proportion of NK cells expressing NKp46 and KIRs suggest an altered activation potential of the NK cell compartment during specific clinical phases, notably the ENEG phase. NK cells from patients in the terminal phase of the natural history of HBV infection, ENEG, expressed decreased levels of the activating NCR NKp46 and enhanced levels of KIRs, which are inhibitory receptors and markers of NK cell differentiation, potentially resulting in a decreased activation potential for NK cells during this phase. Additionally, NKp46 has been shown to trigger NK cell activation and cytotoxicity
upon ligation, but is expressed on mature NK cells irrespective of their activation in humans.Thus, the functional consequences of these observations are complex in nature, and could not be determined in the present study, since the characteristic NK cell parameters were stable during the natural history of chronic HBV.Although the NK cell compartment composition and expression of cytolytic mediators did not differ between clinical phases, one key distinction was the ability of CD56bright NK cells to produce IFN-γ upon cytokine stimulation.IFN-γ has been shown to be a trigger for the process of non-cytolytic HBV clearance and recruitment of inflammatory immune cells in both the innate and adaptive immune response to HBV infection.We demonstrate a differential capacity to produce IFN-γ in NK cells of chronic HBV patients pre- and post-HBeAg seroconversion, specifically between the IT/IA and IC phases.This differential production of IFN-γ observed could be the result of enhanced immune pressure due to high viral and antigen load present during the HBeAg-positive clinical phases, relieved after seroconversion and the subsequent viral suppression.However, we cannot say if this is a direct result of the high levels of viral particles and antigens present, or an indirect effect due to the coinciding immune activity in these phases.Collectively, by conducting a detailed analysis of the phenotype and function of NK cells using clinically well-defined patient cohorts, we were able to characterize distinctive NK cell compartmental alterations during the progression of the chronic HBV natural history.Our data, obtained using peripheral blood, provide no evidence for higher NK cell activities during the IA and ENEG phase, thereby limiting the likelihood that NK cells are responsible for observed liver damage during these specific phases as reflected by fluctuating serum ALT levels.However, our findings show subtle changes indicative of a shift in NK cell phenotype and function between the IA phase under heavy viral or immune pressure, and the IC phase, that coincides with the process of HBeAg seroconversion.These observations shed light on the differences in the NK cell compartment during the natural history of chronic HBV, which may help better understand the extensive heterogeneity of immune responses observed in chronic HBV patients.The authors of this manuscript have no conflicts of interest to declare.
Results: The overall composition, phenotype, and cytolytic activity of the NK cells remained constant across all clinical phases, with the exception of a few specific markers (KIRs, NKp46). CD56bright NK cells of chronic HBV patients differed in their ability to produce IFN-γ between the clinical phases pre- and post-HBeAg seroconversion. Conclusion: This depicts a shift in NK cell characteristics between the immune active, under heavy viral or immune pressure, and inactive carrier phases, that coincides with HBeAg seroconversion. Although these changes in NK cells do not appear to be completely responsible for differences in liver damage characteristic of specific clinical phases, they could provide a step toward understanding immune dysregulation in chronic HBV infection.
The data in this article show the effects of phosphosugar ligands and other variables on the phosphorylation state of the catalytic serine in several enzymes in the α-D-phosphohexomutase superfamily. Data were obtained by ESI-MS. Data on enzyme inhibition by several ligands are also presented. Leuconostoc mesenteroides glucose 6-phosphate dehydrogenase, and all bis- and monophosphorylated sugars except xylose 1-phosphate, were obtained from Sigma-Aldrich. X1P was kindly synthesized by Dr. Thomas Mawhinney. Expression and purification of Pseudomonas aeruginosa PMM/PGM, Bacillus anthracis phosphoglucosamine mutase, Salmonella typhimurium PGM, Francisella tularensis PNGM, and human phosphoglucomutase 1 were performed as described previously. Purified proteins were dialyzed into 50 mM MOPS, pH 7.4, concentrated, and stored at −80 °C until use. Ligands tested for effects on phosphorylation included two bisphospho-sugars and various monophosphosugars, which have been reported, in different instances, to either phosphorylate or dephosphorylate these enzymes. The compounds were prepared as aqueous stock solutions at 1–200 mM, and mixed with protein to determine their effect on the phosphorylation level of the active site serine. For mass spectrometry, enzymes at 40–120 µM were incubated with a 6.25-fold molar excess of compound for 18 h at 4 °C. Samples were flash frozen and stored at −80 °C until analysis. Analysis of intact proteins by mass spectrometry was done as described previously with a NanoLC-Nanospray QTOF and C8 column chromatography. Expected and observed molecular masses of the proteins are found in Table 1. Duplicate spectra of two identical samples showed phosphorylation levels within 2% of each other, indicating good reproducibility. No degradation of the protein samples was observed under any of the conditions tested. The percentage phosphorylation was calculated by normalizing the sum of the dephosphorylated and phosphorylated peak heights to 1.0. As the proteins characterized herein are known to be phosphorylated on the conserved active site serine, and ESI-MS data confirmed a single phosphorylation site, no additional attempts were made to localize the site of phosphorylation. The exception to this was StPGM, which showed two phosphorylation sites via ESI-MS. Enzymatic activity of PaPMM was quantified by measuring the formation of glucose 6-phosphate in a coupled assay with G6PDH. The conversion of NAD to NADH was monitored by UV–vis spectrophotometry on a CARY 100 spectrophotometer at 25 °C, as previously described. Time courses of enzyme activity in the presence of glucosamine 1-phosphate and glucosamine 6-phosphate were conducted using 0.14 µM enzyme with 0.5 µM of glucose 1,6-bisphosphate and 135 µM of the substrate, glucose 1-phosphate. A broad assessment of the effects of phosphosugar ligands on the phosphorylation of the catalytic serine was conducted on PaPMM, one of the best-characterized members of the superfamily. Phospho- and dephospho-enzyme are easily distinguished in spectra. Data on the effects of ligands are shown in Table 2. The phosphorylation level can be increased to ~95% through incubation with G16P, a known activator for the superfamily. An alternate bisphosphosugar, fructose 1,6-bisphosphate, also increases the phosphorylation level of PaPMM, to ~85%. Several other sequence-diverse proteins in the superfamily were examined for changes in phosphorylation: BaPNGM, FtPNGM, StPGM, and hPGM1. For all proteins, both G16P and F16P increase the phosphorylation level of the active serine. For two of the
proteins, FtPNGM and StPGM, F16P appears to be somewhat more effective than G16P under the conditions tested, with phosphorylation levels after incubation ranging from 90 to 100%. As noted in Section 2.4, ESI-MS of StPGM indicated two phosphorylation sites. To determine if one of these was the catalytic phosphoserine, the protein was subjected to proteolysis and phosphopeptides were identified. A single phosphopeptide was identified by these studies, corresponding to residues 134–156, which includes the active site serine. Only one region of the protein was not covered by this analysis, which could contain the second site of phosphorylation as it includes two serines and one threonine. Incubations of PaPMM with G1P and G6P and the related sugars GlcN1P and GlcN6P were performed. Under the conditions tested, most of the monophosphosugars were associated with a reduction in the level of phosphorylation, by up to 50%. Incubation with GlcN1P, however, substantially reduces the phosphorylation level of PMM/PGM, by ~90%, consistent with previous observations. GlcN1P was also found to reduce phosphorylation of hPGM1, but had no effect on StPGM or the PNGM proteins. Several concentrations of GlcN1P were tested at various time points in incubations with PaPMM. A time course of dephosphorylation in the presence of increasing molar equivalents of GlcN1P is shown in the corresponding figure. The effect of temperature on the longevity of phosphorylation of PMM/PGM was also assessed. Samples were collected after incubation of the protein at 4 °C and 37 °C for various lengths of time, and after prolonged storage at −80 °C. The phosphorylation level was unchanged at 4 °C for at least two days and at −80 °C for 21 months. At 37 °C, a reduction in % phosphorylation was observed within 8 h, with complete loss over three days. To assess interactions between PaPMM and GlcN1P or GlcN6P, these two phosphosugars were tested as inhibitors of the PGM activity. GlcN6P shows no effect on time courses of enzyme activity at the concentrations tested, while GlcN1P shows increasing reductions in activity at higher concentrations. As a control, the substrate analog X1P, which has the same stereochemistry as glucose but lacks the O6 hydroxyl necessary for phosphoryl transfer, was also assessed as an inhibitor. The effects of X1P are consistent with those of a competitive inhibitor, with a Ki of 8.6 ± 0.3 μM. GlcN1P shows evidence of noncompetitive inhibition, with a Ki of 307 ± 29 μM. These data provide information on ligands and solution conditions that affect the phosphorylation state and lifetime of the active site serine of enzymes in the α-D-phosphohexomutase superfamily, and may assist with preparation of homogeneous protein samples
Most enzymes in the α-D-phosphohexomutase superfamily catalyze the reversible conversion of 1- to 6-phosphosugars.They play important roles in carbohydrate and sugar nucleotide metabolism, and participate in the biosynthesis of polysaccharides, glycolipids, and other exoproducts.Mutations in genes encoding these enzymes are associated with inherited metabolic diseases in humans, including glycogen storage disease and congenital disorders of glycosylation.Enzymes in the superfamily share a highly conserved active site serine that participates in the multi-step phosphoryl transfer reaction.Here we provide data on the effects of various phosphosugar ligands on the phosphorylation of this serine, as monitored by electrospray ionization mass spectrometry (ESI-MS) data on the intact proteins.
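The inhibition results summarized above (competitive inhibition by X1P, noncompetitive inhibition by GlcN1P) correspond to the textbook Michaelis–Menten rate laws. The sketch below uses the Ki values quoted in the text, but Vmax and the Km for glucose 1-phosphate are assumed, illustrative values; it is not the assay analysis used in the study.

```python
def v_competitive(S, I, Vmax, Km, Ki):
    """Michaelis-Menten rate in the presence of a competitive inhibitor."""
    return Vmax * S / (Km * (1 + I / Ki) + S)

def v_noncompetitive(S, I, Vmax, Km, Ki):
    """Michaelis-Menten rate in the presence of a pure noncompetitive inhibitor."""
    return (Vmax / (1 + I / Ki)) * S / (Km + S)

# Comparison at the assay substrate concentration (135 uM G1P); Vmax and Km
# are assumed placeholders, while the Ki values are those quoted in the text.
S, I = 135.0, 50.0          # uM substrate, uM inhibitor (illustrative)
Vmax, Km = 1.0, 20.0        # assumed (arbitrary units, uM)
print("X1P, competitive, Ki = 8.6 uM:      ", v_competitive(S, I, Vmax, Km, 8.6))
print("GlcN1P, noncompetitive, Ki = 307 uM:", v_noncompetitive(S, I, Vmax, Km, 307.0))
```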
operation of the growth mechanism on different substrates.Moreover, the strain status in nanowires is very important for their optical properties and device applications.Although strain can be relaxed through the surface, there is still no detailed study on strain for nanowires, especially in three dimensions.Answering these questions will rely on new techniques in electron microscopy and X-ray diffraction to explore the effects of strain and defects on these materials at the atomic scale.Although passivation has been studied to improve the efficiency of nanowire devices, it is necessary to develop enhanced methods to achieve electrical and dangling bond passivation, and improve light extraction using materials with proper refractive indexes.Moreover, for future nanowire device applications, such as high-performance lasers, it will be necessary to combine both MBE and MOCVD techniques to take advantage of the high growth rate for thick cladding layers of lasers using MOCVD and the high crystal quality of the active layer produced by MBE growth.The current nanowire growth technology only guarantees single color emission from a given nanowire array.Meanwhile, SAG and multi-step growth have been applied to achieve efficient and smart light mixing to produce multi-colored lights .A focus for future research will be achieving multiple wavelengths from a single growth of nanowires.Currently, most fabrication methods require complicated processes, such as filling material deposition, etching, and top contact deposition.A simpler fabrication technique for LEDs, lasers, etc., will improve the throughput for industry while ensuring high current injection efficiency with good electrical contact.To summarize, we have reviewed recent advances in III-nitride nanowire research, including different nanowire growth techniques, growth conditions and mechanisms, the adoption of various substrates, and methods of nanowire characterization.Catalyst-free growth techniques for GaN nanowires dominate the current growth technology, utilizing the MBE technique under nitrogen-rich conditions.To provide a broader understanding of the field, we have detailed various examples of MBE and MOCVD growth of nanowires.Additionally, the growth mechanisms of these materials were discussed, from nucleation to vertical growth, both in terms of kinetic and thermodynamic effects.We have further demonstrated that III-nitride nanowires can be used to construct various device platforms when grown on unconventional substrates, and have demonstrated their application in LEDs, photodetectors, lasers, solar cells, and photoelectrodes.However, more research is still necessary to improve fundamental understanding and promote the device application of these novel materials.With further development in advanced growth and characterization, nanowire arrays will find practical applications in high-power optoelectronics, displays, energy conversion and green technologies, single-photon emitters, quantum computing, and high-speed electronics.Such nanoscale devices will be essential for realizing ultra-low profile displays on both flexible and rigid substrates.Moving forward, there is potential for employing nanowires in multiple cross-disciplinary applications, such as visible-light communications, bio-sensing, and “wireless” solar water-splitting devices, as well as photodetection and sensing in high-temperature, harsh environments, by leveraging the chemical stability of high-quality nitride materials, the large specific surface areas of 
nanowires, and the lift-off ready nature of nanowire structures for substrate reuse.
Group-III nitrides and their alloys feature direct bandgaps covering a broad range of the electromagnetic spectrum, making them a promising material system for various applications, such as solid state lighting, chemical/biological sensing, water splitting, medical diagnostics, and communications.In recent years, the growth of strain and defect-free group-III nitride vertical nanowires has exploded as an area of research.These nanowires, grown on various unconventional substrates, such as silicon and different metals, demonstrate potential advantages over their planar counterparts, including wavelength tunability to the near infrared and reduced efficiency droop.The low-profile and low power consumption of such nanowires also make them viable candidates for emerging applications, such as the Internet of things and artificial intelligence.Herein, we present a comprehensive review on the recent achievements made in the field of III-nitride nanowires.We compare and discuss the growth conditions and mechanisms involved in fabricating these structures via metalorganic chemical vapor deposition and molecular beam epitaxy.How the unique optical, electrical, and thermal properties of these nanowires are correlated with their growth conditions on various unconventional substrates is discussed, along with their respective applications, including light-emitting diodes, lasers, photodetectors, and photoelectrodes.Finally, we detail the remaining obstacles and challenges to fully exploit the potential of III-nitride nanowires for such practical applications.
sphericity correction) and a Task × Time interaction (F = 5.4, p < 0.0001, sphericity corrected), confirming the larger response in mTBI patients during the TD task. The larger response was not accompanied by higher distractibility, based on eye movement recordings. In particular, we did not observe significant differences in tracking accuracy after distracter presentation between mTBI patients and control subjects. Normal performance on the oculomotor task suggests that chronic mTBI patients used compensatory mechanisms and/or showed plasticity in response to injury in order to maintain performance in demanding tasks. It is important to note that the present findings relating BOLD response magnitudes in MT+/LO to mTBI status are not inconsistent with our recently published report. In the first study the analysis was voxel-wise, not regional, and therefore much more conservative due to the need for a multiple-comparison correction. Accordingly, the MT+/LO response did not show a significant difference between mTBI and healthy controls. In the current paper, the MT+/LO ROI was defined from an independent set of resting-state scans, allowing a more sensitive regional analysis. We then determined whether the task-evoked BOLD activity in the MT+/LO region was correlated with mTBI symptoms. Analysis of BOLD magnitudes in mTBI patients across all 3 tasks from the MT+/LO region revealed a significant positive correlation with mTBI symptoms, and with PTSD symptoms. There was no correlation of BOLD magnitudes in mTBI patients with depression scores. The correlation of BOLD magnitudes from the MT+/LO region with mTBI symptoms remained significant after regressing out the PCL_C total and CES-D scores, supporting the specificity of the association with mTBI symptoms. However, the correlation of PTSD symptoms with BOLD magnitudes in MT+/LO was not significant after regressing out HISQ and CES-D scores. Patients with ‘Complex mTBI’, who have positive radiological findings and/or aPTA longer than 24 h, are indicated in Fig.
3B by green diamonds. Complex mTBI patients did not have higher HISQ 1-20 scores or higher BOLD magnitudes in MT+/LO than the other mTBI patients. Interestingly, mTBI patients with reported light sensitivity on the HISQ 1-20 questionnaire had higher scores on the HISQ 1-20 questionnaire (p = 0.007), higher BOLD magnitudes in the MT+/LO region, and higher PTSD scores than mTBI patients without reported light sensitivity. These results suggest that hyper-activation of the MT+/LO region may underlie some of the visual-related symptoms of mTBI patients. Values from the CES-D scale were also higher in mTBI patients with light sensitivity, but did not reach significance. A comparison of patients with headache and without headache did not indicate significant differences in BOLD magnitudes from the MT+/LO ROI. In summary, BOLD signals in MT+/LO showed significant differences between mTBI patients and controls, and were positively correlated with mTBI and PTSD symptoms, particularly light sensitivity. The large overlap of the abnormal ROI with white matter and the smaller FC in mTBI patients between anterior and posterior parts of the brain suggest that white matter was damaged in our mTBI patients, consistent with earlier DTI studies. However, we previously reported that a TBSS analysis of axial diffusivity, radial diffusivity and mean diffusivity did not reveal any significant differences between mTBI patients and controls. In order to investigate whether white matter abnormalities were related to the abnormalities of the task-evoked BOLD signal in the MT+/LO region, we conducted a voxel-wise analysis in the mTBI patients to correlate BOLD magnitudes with FA values. Because recent studies have reported abnormal FA values in both gray and white matter, the analysis was not restricted to white matter voxels. Importantly, we are again not looking for mean differences in FA between patients and controls, but for regions in which the FA values correlate with the magnitude of the BOLD response in MT+/LO during visual tracking. A significant correlation was found in an area near the left optic radiation/radiation of the corpus callosum/forceps major (Fig. 4A). A scatterplot is shown in Fig.
4B. The correlation was positive, indicating that higher FA values near the left OR were associated with higher BOLD magnitudes in the left MT+/LO ROI. One interpretation of this result is that higher FA values near the left optic radiation resulted in hyperactivity of MT+/LO and higher sensitivity to light. Correspondingly, mTBI patients with light sensitivity demonstrated higher FA values in the ROI near the left OR. However, correlations of FA values in the ROI near the left OR with the HISQ 1-20, PCL_C, and CES-D were not significant, although there was a trend for a positive correlation of FA values and HISQ 1-20 scores. The conjunction of high FA values near the left optic radiation, abnormal BOLD magnitudes in MT+/LO, and light sensitivity indicates that some mTBI symptoms may be related to abnormalities in visual cortex. In this paper we report the results of a multimodal imaging study involving behavioral assessment, evoked and resting-state BOLD, and DTI in chronic mTBI subjects. We found that larger task-evoked BOLD activity in the MT+/LO region correlated with mTBI and PTSD symptoms, especially light sensitivity. Moreover, higher FA values near the left optic radiation were associated with both light sensitivity and higher BOLD activity in the MT+/LO region, the same region whose activity was associated with mTBI symptoms. These converging results may identify structural and physiological correlates of important symptoms following mTBI. We suggest that some PTSD and mTBI symptoms are the result of plasticity following damage to central white matter and reduced top-down control, and/or vulnerability factors that were present before the mTBI trauma, as suggested by recent studies. Unfortunately, there is no generally accepted theory on the origin of symptoms after mTBI. The most widely accepted theory is that diffuse axonal injury causes white matter damage and disconnection
We report on the results of a multimodal imaging study involving behavioral assessments, evoked and resting-state BOLD fMRI, and DTI in chronic mTBI subjects.We found that larger task-evoked BOLD activity in the MT+/LO region in extra-striate visual cortex correlated with mTBI and PTSD symptoms, especially light sensitivity.Moreover, higher FA values near the left optic radiation (OR) were associated with both light sensitivity and higher BOLD activity in the MT+/LO region.We conclude that mTBI symptoms and light sensitivity may be related to excessive responsiveness of visual cortex to sensory stimuli.This abnormal sensitivity may be related to chronic remodeling of white matter visual pathways that were acutely injured.
the conservative force quadrature FI as a force-deflection curve FTS.However, this assumption does not hold for viscoelastic surfaces which have a delay between applied force and deformation, and finite relaxation time to equilibrium.In such cases it is necessary to formulate the tip–surface interaction in terms of a model that includes the dynamics of the surface.The interaction depends not only on tip position and tip velocity, but also on the position of the surface when it meets the tip.Because the force quadratures are resolved in a rotated frame where the tip motion has zero phase, the quantity 2πAFQ is simply force times velocity integrated over one fast oscillation cycle.A key point is that the amplitude A is a slowly changing quantity, so that it can be taken to be constant in the closed-cycle integral.This integral is easily seen to be the work performed by the tip-surface force during the closed cycle.For a conservative tip–surface interaction this work would be zero.Therefore 2πAFQ corresponds to energy dissipated as a result of the tip–surface interaction, during one single oscillation cycle.One can easily make a plot of this dissipated energy as a function of amplitude.Both quasi-static and dynamic methods use the AFM cantilever as a linear transducer of tip-surface force.This linearity is most easily seen when dynamic force measurement is described in the frequency domain.A good calibration of the dynamic linear response function of the cantilever is possible in a narrow frequency band covering a high-Q resonance, where dynamic measurement offers greater force sensitivity than quasi-static measurement.In contrast to the quasi-static method, dynamic force measurement has the ability to probe the viscous nature of the tip–surface interaction.When dynamic measurement is made with a properly tuned multifrequency drive force, a multifrequency lockin technique allows for extracting many Fourier coefficients of the response.The parallel nature of the multifrequency lockin measurement results in a great deal of information during a short time, enabling reconstruction of the tip-surface force at every pixel of a scanned image.The tuned multifrequency method is valid even when the tip–surface interaction is a nonlinear function of the cantilever deflection, resulting in intermodulation of the multiple drive tones.High-Q resonance concentrates the nonlinear response to a narrow frequency band, offering an intuitive way of examining the narrow-band intermodulation response in terms of dynamic force quadrature curves.The general measurement paradigm based on the analysis of intermodulation spectra has much to offer the future development of AFM.Recently, the method of dynamic force quadratures was extended to lateral forces using a torsional resonance of the cantilever.Due to the high frequency of the torsional mode, one can examine friction at high velocity of order cm/s, with very high spatial resolution of order 10 nm.An improved method of electrostatic force microscopy based on intermodulation has already been introduced.This open-loop alternative to the well-known Kelvin Probe AFM offers an entirely new way to study how surface potential changes as a DC voltage is applied to the tip.These and other recent developments with intermodulation measurement and analysis techniques promise a bright future for the development of AFM and its application to the quantitative analysis of surfaces and interfaces.
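Written out explicitly, in notation chosen here rather than taken from the original text, the relation between the dissipative force quadrature and the work done by the tip–surface force over one fast oscillation cycle reads:

```latex
E_{\mathrm{dis}} \;=\; 2\pi A\, F_{Q} \;=\; \oint F_{\mathrm{ts}}\!\left(z,\dot{z}\right)\,\dot{z}\,\mathrm{d}t ,
```

where z(t) ≈ A cos(ωt) is the fast tip motion and the integral runs over one oscillation period. For a purely conservative tip–surface force the closed-cycle integral vanishes, so a nonzero FQ directly measures the energy dissipated per cycle; plotting E_dis against the slowly varying amplitude A gives the dissipation curve mentioned above (sign conventions differ between treatments).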
We discuss the physical origin and measurement of force between an atomic force microscope tip and a soft material surface.Quasi-static and dynamic measurements are contrasted and similarities are revealed by analyzing the dynamics in the frequency domain.Various dynamic methods using single and multiple excitation frequencies are described.Tuned multifrequency lockin detection with one reference oscillation gives a great deal of information from which one can reconstruct the tip–surface interaction.Intermodulation in a weakly perturbed high Q resonance enables the measurement of a new type of dynamic force curve, offering a physically intuitive way to visualize both elastic and viscous forces.
γδ T cells seemingly make both innate-like and adaptive contributions to immunity, with increasingly appreciated relevance to clinical scenarios such as cancer surveillance.Prototypic innate-like γδ T cell subtypes include mouse dendritic epidermal T cells (DETCs), which are skin-restricted, feature a canonical T cell receptor repertoire, and mediate responses to dysregulated target cells in the absence of foreign adjuvants or antigens.Indeed, DETC-deficient mice show increased susceptibility to skin carcinogenesis.In humans, a limited TCR repertoire is likewise expressed by a major subset of Vγ9Vδ2 T cells, which are preferentially enriched in peripheral blood, display an effector phenotype, and show potent cytotoxicity and cytokine production.Given that they respond en masse to microbial phosphoantigens, the Vγ9Vδ2 subset likely provides an early line of defense against certain microbial infections, such as those involving eubacterial and mycobacterial species that produce the highly potent P-Ag 4-hydroxy-3-methyl-but-2-enyl pyrophosphate.Conversely, adaptive paradigms seem most able to explain conspicuous clonal expansions and effector differentiation of subsets of human Vδ2neg T cells and Vγ9negVδ2 T cells, including after exposure to viral infection.Few direct ligands of the γδ TCRs underpinning innate-like or adaptive responses are known.Adaptive processes highlight powerful clonotypic focusing even within specific V region subsets, strongly suggesting that somatically recombined CDR3 regions are involved.Moreover, a diverse range of ligands has been proposed for such populations, including those few supported by evidence of direct TCR-ligand interaction, many of which favor roles for CDR3 residues.At the same time, molecules closely related to the B7 family of lymphocyte co-regulators have emerged as critical players in γδ T cell selection, activation, and possibly tissue-associated functions.The first of these to be identified was Skint1, a hitherto uncharacterized BTNL molecule crucial for thymic selection of Vγ5+ DETC and expressed by keratinocytes.Subsequently, expression of the human BTN3A1 molecule on target cells was established as critical for P-Ag-mediated activation of human peripheral blood Vγ9Vδ2+ T cells.More recently, mouse Btnl1 emerged as critical for the extrathymic selection of the signature Vγ7+ intestinal intraepithelial lymphocyte (IEL) population.Btnl1 and Btnl6 molecules are both expressed by differentiated enterocytes, wherein they form a co-complex that can specifically regulate mature Vγ7+ IEL in vitro.Likewise, human BTNL3 and BTNL8, which are both enriched in expression in gut epithelium, are major regulators of signature human intestinal Vγ4+ T cells and are capable of inducing TCR downregulation specifically in this subset.The potential pathophysiologic significance of the Vγ4+ T cell subset was highlighted recently by a report that disruption of the Vγ4+-BTNL3.8 axis is pathognomonic for celiac disease.More generally, TCR γδ IELs have been increasingly implicated in the regulation of tissue maintenance, including protection from infection, inflammation, and internal dysregulation.Thus, deficiencies in signature, tissue-resident γδ T cell compartments have been causally linked to cancer and tissue inflammation.Soon after their discovery, when γδ T cells were first found to express a limited number of V regions, the existence of a range of host-encoded ligands that might mediate subset-specific γδ T cell selection and/or activation was hypothesized.Clearly, the
observations outlined earlier highlight BTN and BTNL molecules as strong candidates for being direct, subset-specific γδ TCR ligands.Nevertheless, the only study reporting direct binding of a TCR to a BTN or BTNL molecule has been strongly disputed, with its claim of P-Ag presentation by the BTN3A1 V domain being ascribed to electron density arising from crystallization components.Indeed, other data demonstrate that BTN3A1 can directly bind P-Ag in its C-terminal B30.2 domain.Altogether, compelling evidence that any BTN, BTNL, or Btnl molecule acts as a direct ligand for the γδ TCR is lacking, leaving uncertainty about how these molecules may achieve their profound biological effects.Indeed, the possibility has remained that BTN and BTNL molecules may act indirectly, for example, as chaperones or inducers for direct TCR ligands that are as yet unidentified.Here we provide unequivocal evidence for direct binding of a γδ TCR to a BTNL protein.We show that a human Vγ4 TCR binds the BTNL3 IgV domain via germline-encoded regions, somewhat analogous to superantigen binding to an αβ TCR.In contrast, binding of a clonally restricted ligand to the same Vγ4 TCR was critically influenced by the CDR3 regions of the γδ TCR, consistent with its adaptive biology.Thus, we highlight two distinct and complementary modalities for ligand interaction: one involving a BTNL molecule in Vγ region-specific regulation of tissue-restricted γδ T cell subsets and the other involving highly specific, clonally restricted ligand recognition, underpinning adaptive γδ T cell responses.Moreover, the two binding modes may extend to BTN3A1-mediated regulation of the human blood Vγ9Vδ2 subset by P-Ags, suggesting broad significance of the BTNL modality that we outline.Previously, we demonstrated that exposure of Jurkat cells transduced with Vγ4+ γδ TCRs to 293T cells expressing BTNL3.8 heterodimers led to CD69 upregulation and TCR downregulation, consistent with TCR triggering.Moreover, soluble Vγ4 TCRs were found to specifically stain the surface of 293T.L3L8 target cells, but not control cells transduced with empty vector, suggesting that BTNL3.8 heterodimers either were Vγ4 TCR ligands or induced the display of as-yet-unidentified Vγ4 TCR ligands.Consistent with either possibility, mutagenesis of the BTNL3.8 heterodimer showed that Vγ4-mediated TCR triggering depended on the BTNL3 IgV domain.To address the hypothesis that BTNL heterodimers directly bound the TCR, we generated recombinant BTNL3 and BTNL8 IgV domains and tested interaction with a range of soluble γδ TCRs using surface plasmon resonance.We overexpressed both BTNL IgV domains separately in E. coli and then renatured them by dilution refolding, with yields broadly similar to those of other B7-like IgV domains, such as Skint1.Of note, BTNL3
Butyrophilin (BTN) and butyrophilin-like (BTNL/Btnl) heteromers are major regulators of human and mouse γδ T cell subsets, but considerable contention surrounds whether they represent direct γδ T cell receptor (TCR) ligands.Thus, the γδ TCR can employ two discrete binding modalities: a non-clonotypic, superantigen-like interaction mediating subset-specific regulation by BTNL/BTN molecules and CDR3-dependent, antibody-like interactions mediating adaptive γδ T cell biology.Butyrophilin (BTN) and butyrophilin-like (BTNL) molecules powerfully influence selection and activation of specific γδ lymphocyte subsets, but whether they directly bind the γδ TCR has remained contentious.
of Zol-treated CRA123 cells to stimulate Vγ9Vδ2 cells, it could not be excluded that this was because this construct reached the cell surface inefficiently, thereby implicating the YF motif in the stringent regulation of BTN3A1 cell surface expression, as was previously considered.Although BTN3A1 can be sufficient to support P-Ag stimulation of Vγ9Vδ2 T cells, this is greatly increased by co-expression of BTN3A2, which regulates the trafficking and cell surface expression of BTN3A1 via heteromerization.We therefore investigated whether the CFG face mutations of BTN3A1 could be complemented by WT BTN3A2, and vice versa.Thus, we generated the same CFG motif mutants of BTN3A2 and tested the expression of each combination in CRA123 cells.Again, the SSLEQ and YEMAL mutants displayed cell surface expression similar to WT BTN3A1+BTN3A2, while the DF mutants showed decreased expression, particularly when both BTN3A1 and BTN3A2 carried this mutation.However, when co-expressed with BTN3A2, the CFG face mutations in BTN3A1 negligibly affected Zol-induced CD107a upregulation by polyclonal Vγ9Vδ2 T cells.By contrast, CFG face mutations in BTN3A2 could not be rescued by co-expression of WT BTN3A1, impairing CD69 upregulation almost as much as when BTN3A1 and BTN3A2 were both mutated in the CFG faces.Moreover, similar impacts were observed when the responders were JRT3 cells expressing a WT Vγ9Vδ2 TCR.Altogether, these data show that three systems of BTN-mediated γδ T cell regulation—human Vγ4 and BTNL3.8, murine Vγ7 and Btnl1.6, and human Vγ9 and BTN3A—receive critical contributions from HV4 of the relevant TCR Vγ chain and from specific, orthologous IgV-CFG face residues of the relevant BTNs.Over the past decade, it has become clear that BTN and BTNL/Btnl members of the B7 superfamily play critical roles in γδ T cell selection and activation in mice and humans, spanning both peripheral blood and tissue-associated subsets.Nonetheless, although several studies have shed light on the mechanism underpinning this profound biology, key aspects have remained largely unresolved, in particular whether there are direct interactions of the relevant TCRs with BTN, BTNL, or Btnl proteins.Addressing this, the current study provides unequivocal evidence that the human BTNL3.8 heterodimer interacts directly with Vγ4 γδ TCRs, specifically via the BTNL3 chain.This builds on previous work demonstrating that cells expressing BTNL3.8 complexes could induce TCR-mediated stimulation of human Vγ4 gut T cells, while the counterpart murine molecules, Btnl1.6, induced TCR triggering of mouse gut Vγ7 T cells.Previously, the only evidence of BTN or BTNL directly engaging the TCR was provided by Vavassori et al., who reported direct binding of a Vγ9Vδ2 TCR to the BTN3A1 ectodomain, which was also reported to present P-Ag.However, both findings have been convincingly challenged, and our laboratory has similarly failed to reproduce Vγ9Vδ2 TCR-BTN3A1 binding.In contrast to this, the current study, which incorporates evidence from a combination of SPR, ITC, and mutagenesis, not only documents direct binding of the BTNL3 IgV domain to Vγ4 but also demonstrates an interaction modality resembling that of superantigens in its complete dependence on germline-encoded Vγ4 HV4 and its substantial reliance on germline-encoded Vγ4 CDR2.Although our study focused predominantly on the LES clonotype as a model Vγ4 TCR, the binding modality we outline fully explains both the specificity of BTNL3 binding to Vγ4 TCRs, but not to Vγ2 or Vγ3 TCRs, and 
our previous findings that BTNL3.8 dimers trigger TCR downregulation of essentially any TCRs that are Vγ4+, irrespective of CDR3γ or TCR Vδ usage.In this respect, it is challenging to understand the claim that a Vγ4+ TCR identified in a celiac disease gut failed to respond to BTNL3-expressing cells.Indeed, by using a cellular assay of TCR and CD3 downregulation, we show here that manipulating the length and sequence of the Vδ1-CDR3 region had negligible, if any, effects on the efficiency of BTNL3.8 recognition, even when long CDR3 regions were incorporated.Despite this, some modest differences in CD69 upregulation were observed.The failure of Mayassi et al. to detect a BTNL3-mediated response of a particular Vγ4+ TCR might reflect their use of a less sensitive assay system.This notwithstanding, we cannot formally exclude that rare CDR3δ regions might indirectly affect the interaction, for example, via effects on CDR2 loop conformation.Indeed, we note that different TCRs can transduce quantitatively different signals in response to anti-CD3 agonist antibodies, which must also reflect indirect effects.Considering that BTNL3.8 and Btnl1.6 heteromers are expressed specifically by intestinal epithelial cells of humans and mice, respectively, extrapolation of the findings presented here to Btnl6-mediated interactions with Vγ7 would readily explain how BTNL/Btnl proteins can act as tissue-specific, non-clonal selecting elements for signature γδ T cell compartments defined by discrete Vγ chains.Our demonstration in the current study that mouse Vγ7 multimers specifically stain Btnl1.6-expressing target cells, combined with mutagenesis studies, is consistent with the evolutionary conservation of the mode of action to BTNL3.8.By focusing on the LES Vγ4 TCR that we have previously shown to recognize EPCR, we were able to compare TCR recognition of BTNL3 with that of a clonally restricted antigenic ligand.The private LES clonotype expanded substantially in an individual after CMV infection, adopted an effector phenotype, and appeared to align closely with the adaptive biology that has recently emerged for some human γδ T cell subsets, including Vδ2neg and Vγ9negVδ2+ T cells, which can both demonstrate highly focused clonotypic expansion and differentiation after antigenic challenge.Our study provides unequivocal data that BTNL3 engages the TCR by a qualitatively different modality to that by which clonally restricted ligands engage the TCR.Whereas the former modality focused predominantly on germline-encoded regions, the latter modality was
We demonstrate that the BTNL3 IgV domain binds directly and specifically to a human Vγ4+ TCR, “LES”, with an affinity (∼15–25 μM) comparable to many αβ TCR-peptide major histocompatibility complex interactions.We show that BTNL3 directly binds to human Vγ4+ TCRs via a superantigen-like binding mode that is focused on germline-encoded TCR regions.
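For orientation only (this calculation is not from the paper; the chosen Kd and concentrations are illustrative assumptions), a simple 1:1 binding affinity in the reported ∼15–25 μM range implies the following equilibrium occupancies as a function of free ligand concentration:

```python
# Generic 1:1 binding isotherm: fraction bound = [L] / (Kd + [L]).
# The Kd and ligand concentrations below are illustrative assumptions.
def fraction_bound(ligand_uM: float, kd_uM: float = 20.0) -> float:
    return ligand_uM / (kd_uM + ligand_uM)

for conc in (1.0, 20.0, 200.0):   # hypothetical free ligand concentrations (uM)
    print(f"[L] = {conc:6.1f} uM -> fraction bound = {fraction_bound(conc):.2f}")
```

At [L] = Kd the receptor is half occupied, which is one reason low-affinity interactions of this kind generally rely on high local densities of ligand and receptor at cell–cell interfaces.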
cloned into the self-inactivating lentiviral vector pCSIGPW after removal of the IRES-GFP and CMVp-PuroR cassettes.OE-PCR was likewise used to mutate BTN3A1 and BTN3A2, which were subsequently cloned into pCSIGPW.Lentiviral particles were produced in wild-type 293T cells by co-transfection with lentiviral plasmids encoding target proteins, HIV-1 gag-pol pCR/V1 and VSV-G env pHIT/G using PEI.Medium was replaced after 16 h and collected 48 h post-transfection, filtered through 0.45 μm nylon mesh, and used to transduce target cell lines.JRT3/J76 cells were transduced by spinoculation at 1,000 g, 20°C for 30 min.5 × 10^5 293T cells/well were plated in a 12-well plate a day prior to transduction.The following day, supernatants from the packaging cell lines were mixed 1:1 and 1.5 mL was used to transduce plated 293T cells.Culture medium was supplemented with antibiotics for selection 24 h post-transduction.Transductants were bulk-sorted on uniform GFP expression.0.5 × 10^5 JRT3/J76 transductants or PBMC-derived polyclonal Vγ9Vδ2 T cells were mixed in 96-well plates with 1.5 × 10^5 293T or MODE-K cells, followed by co-culture for 5 h. 293T transiently expressing BTNL3 and BTNL8 were used in blocking experiments.48 h post-transfection, 293T cells were harvested and pre-incubated for 30 min at 4°C with α-FLAG, α-HA or IgG control antibodies.JRT3 were subsequently added and the cells were co-cultured for 3 h at 37°C in the presence of the antibodies.For blocking experiments of murine Btnl molecules, MODE-K stably expressing Btnl1+Btnl6 were pre-incubated for 60 min at 37°C with α-FLAG, α-HA or IgG control antibodies in 96-well plates.J76-mo5 were added to the wells and cells were co-cultured for 5 h in the presence of the antibodies.293T-CRA123 cells were transiently transfected with BTN3 constructs.Medium was replaced 24 h post-transfection with Complete DMEM supplemented with Zol.Cells were maintained for 16 h, washed twice, and co-cultured with JRT3 transductants or PBMC-derived polyclonal Vγ9Vδ2 T cell lines as described above.Flow cytometry was performed using the following antibodies from BioLegend, unless otherwise stated.Antibodies to the following human molecules were used: CD69-AF647, CD69-PE, CD3-BV421, γδTCR-PeCy7, CD45-PacificBlue, TCRVδ2-FITC.Antibodies to the following murine molecules were used: TCRδ-PerCPe710, TCRδ-APC, Vγ7-AF647.Other antibodies were as follows: DYKDDDDK-PE, HA-AF647, 6xHis-APC.Data were acquired on BD Canto II or Fortessa cytometers.sTCR staining was performed as in Melandri et al.Flow cytometry data were analysed in FlowJo and Prism.Structural figures were generated in PyMOL.For SPR and ITC, data were analysed in BIAevaluation and Origin 2015.There are no additional data or code to report.
Mutations in germline-encoded Vγ4 CDR2 and HV4 loops, but not in somatically recombined CDR3 loops, drastically diminished binding and T cell responsiveness to BTNL3-BTNL8-expressing cells.How these findings might broadly apply to γδ T cell regulation is also examined.Willcox et al.
development.However, RPGREx1-19 and RPGRORF15 isoforms have distinct developmental expression profiles in the murine retina.RPGREx1-19 is expressed early in development, declining as photoreceptors mature and RPGRORF15 expression increases, suggesting a specific function, although there may be redundant mechanisms ensuring correct photoreceptor development.Zebrafish with an RPGR knockdown fail to develop OS and show systemic ciliary abnormalities, supporting the view that RPGR is required for normal retinal development in lower vertebrates.Further, the OS of XLPRA2 dogs are misaligned and fragmented prior to maturation, although this might be secondary to the degeneration process.The role of RPGR in human retinal development is therefore unclear, but both vision and retinal structure appear to develop normally in patients with RPGR mutations.In contrast to its non-essential role during retinal development, RPGR has an essential function in the maintenance of mature photoreceptors.A common approach to understanding the function of a protein is to characterise its interactions.Several RPGR-containing protein interactions and complexes have been proposed.The emerging picture suggests that following its synthesis in the IS, RPGR is retained at the CC by binding to the RPGR interacting protein 1, which was identified by yeast two-hybrid screening.RPGRIP1 has a coiled coil domain and three C2-like motifs that are found in many transition zone or CC proteins, either targeting these proteins to cell membranes or facilitating their interactions.The RPGRIP1 C-terminus RPGR interaction domain forms both homodimers and elongated filaments via interactions involving its coiled-coil and C-terminal domains.RPGRIP1 is most strongly expressed in the CC of photoreceptors but is also present at the centrioles and basal bodies/transition zone of cultured cells.RPGRIP1 is essential for the localisation of RPGR to the CC and has one major retina-specific isoform, RPGRIP1α1, which has been proposed to have a scaffolding function associated with a proposed “ciliary gate” and entry to the transition zone and fibres of primary cilia or photoreceptor CC.The transition zone contains Y-shaped fibres linking the axonemal microtubule doublets of the CC with the overlying plasma membrane, representing part of the proposed ciliary gate that restricts protein entry and exit to the OS.The localisation of RPGRIP1 to the CC is in turn dependent on another ciliary protein, SPATA7, in which mutations result in rhodopsin mislocalisation to the plasma membrane, IS and outer nuclear layer.SPATA7 mutations cause the severe early-onset retinopathy Leber congenital amaurosis and juvenile RP.Mutations in RPGRIP1 also cause LCA as well as cone-rod dystrophy.A recently generated complete RPGRIP1 KO mouse produces ‘naked cilia’ which fail to form OS and shows mislocalisation of rod and cone opsins as well as other OS proteins, indicating a role both in disc morphogenesis and OS formation.A partial RPGRIP1 knockout mouse showed disorganised OS with elongated discs, partially mislocalised rod and cone opsins and normal CC, but a severe early-onset retinal degeneration also resembling LCA.RPGR has also been implicated in the trafficking or quality control of membrane proteins moving to/from the OS, since rod and cone opsins are mislocalised to the IS or plasma membrane in a variety of CC transport mutants and RP/LCA mouse models, including several RPGR disease models.The latter include a naturally occurring Rpgr mutant mouse, two gene 
targeted mouse models, namely Rpgr KO mice and RpgrΔEx4 mice, XLPRA1 mutant dogs and two human XLRP carriers with RPGR mutations.Transport of opsin-containing vesicles from Golgi to the OS minimally requires a rhodopsin C-terminal targeting motif, binding to a dynein motor protein subunit, vesicle docking at the base of the CC, and loading onto IFT complexes.Docking of rhodopsin carrier vesicles probably occurs at the periciliary membrane complex, a specialised apical membrane microdomain directly facing the CC.Further transport to the CC and nascent discs requires another IFT complex and the kinesin-2 motor.Defects in transport between the membrane docking and CC delivery steps should result in rhodopsin accumulation in the OS plasma membrane, as seen in several models.In contrast, mutants that are defective in vesicular transport of opsins show accumulation of vesicles near the base of the IS while mutants with absent OS show vesicle accumulation at the distal tip of the CC, neither of which were found in Rpgrip1, Spata7 or Rpgr KO mice.In addition, while opsins were mislocalised prior to the onset of apoptosis, OS disk or shuttling proteins PRPH2, ROM-1 and transducin were all correctly localised in the Spata7 KO mice, arguing for a specific opsin transport defect in these mice with presumed abrogation of the Spata7-RPGRIP1-RPGR protein complex.It has been argued that discrepancies in observing rhodopsin mislocalization in some animal models of inherited retinal degeneration may be attributed to variability in the stages of photoreceptor degeneration at the time of analysis.Indeed, due to the abundance of rhodopsin, its mislocalisation to the inner segment will inevitably occur once outer segment degeneration begins, in which case it would be a secondary consequence rather than a primary cause of disease.However, several RPGR disease models demonstrate opsin mislocalisation prior to any discernible photoreceptor degeneration.There is indirect molecular evidence linking RPGR function with vesicle trafficking, for example RPGR interactions with RAB8, whirlin and the cytoskeleton, but since proposed ciliary gate proteins such as RPGR, RPGRIP1 and CEP290 are all required for opsin localisation to the OS it is currently more plausible that this defect is secondary to defective ciliary gate functions.Recent work has gone some way towards elucidating the RPGRIP1 interaction with RPGR by showing that the RPGRIP1 interaction domain of RPGR partially overlaps with the domain interacting with PDEδ, a highly evolutionarily conserved prenyl binding protein that also binds
Mammalian photoreceptors contain specialised connecting cilia that connect the inner segment (IS) to the outer segment (OS).This review summarises the existing literature on human RPGR function and dysfunction, and suggests that RPGR plays a role in the function of the ciliary gate, which controls access of both membrane and soluble proteins to the photoreceptor outer segment.
and visual loss than the Rpgr KO mouse.A naturally occurring rd9 mouse was found to have a 32-bp duplication in ORF15, producing a much slower degeneration.Different strains of mice sharing the same RPGR mutation can express a different phenotype, highlighting the role of genetic background effects.RpgrORF15 overexpression partially rescues the Rpgr KO mouse, suggesting a loss-of-function effect of the protein.However, overexpression of a truncated murine-specific ORF15 variant led to more rapid degeneration compared to the KO alone, on both wild-type and Rpgr-null backgrounds, suggesting a gain-of-function role for this particular variant.A GOF phenotype is difficult to reconcile with clinical disease, since most female carriers remain asymptomatic.Carrier females are usually protected by X chromosome inactivation and/or cell autonomous mutational effects but the substantial rescue of the Rpgr KO mouse and XLPRA dogs by gene augmentation therapy also argues against significant GOF mutations in XLRP patients.Two naturally occurring RPGR disease models exist in dogs.Canine X-linked progressive retinal atrophy occurs in the Siberian husky and Samoyed breeds and in a mixed-breed dog.The XLPRA1 mutation allows normal photoreceptor development and function until 6 months of age followed by a slow degeneration of rods, which die by apoptosis.The XLPRA2 phenotype is severe, with abnormal retinal development leading to disorganised OS and rapid degeneration.These dogs are excellent large animal models and provide a stepping-stone towards clinical trials for novel therapies.Finally, a zebrafish knockdown model of RPGR disease has been reported to show ciliary abnormalities.Animal models have drawbacks, not the least of which is their cost.Alternative technologies to supplement these models may therefore be useful.The prospect of reprogramming terminally-differentiated somatic cells from adult tissue into pluripotent cells was demonstrated in principle by the cloning of tadpoles and sheep and has been realised in humans.These induced pluripotent stem cells can be derived from any genetic background, including patients with RPGR disease, so ‘disease-in-a-dish’ modelling is possible if mature photoreceptors can be derived from iPSCs.Major progress has been made in patterning stem cells to produce post-mitotic photoreceptors.Exogenous molecules promote such conversion by means of many published protocols.Initially these protocols encouraged two-dimensional culture, but recent understanding of the importance of the extracellular matrix in recapitulating endogenous signalling required for human retinal development has led biologists to replace 2D modelling with 3D protocols, with improved results.Floating aggregate cultures facilitate organised, stratified neuroretina production with light-sensitive photoreceptors being generated.These cultures provide an excellent model of RPGR disease.Gene augmentation therapy appears a feasible, safe treatment strategy for at least some inherited retinal dystrophies.Recently, there was progress towards RPGR gene augmentation therapy when an adeno-associated virus 2/5 vector mediated the transfer of full-length human RPGRORF15 into photoreceptors, preventing degeneration in both canine RPGR disease models and resulting in increased numbers of photoreceptors, preserved structure and absence of rhodopsin and L/M cone opsin mislocalization.Reduced Muller cell reactivity in treated eyes indicated that the harmful retinal remodelling found in this disease was also reduced.However,
with RPE65-LCA patients, gene therapy could not prevent retinal degeneration progressing over a three year period despite substantial visual improvement at first.Similarly, in the canine model of RPGR disease, degeneration continued unless treatment was initiated prior to photoreceptor loss.Improvement in visual function therefore cannot be assumed to imply protection from degeneration, suggesting the need for a combinatorial approach in treating retinal dystrophies.Further caution comes from the finding that overexpression of RpgrEx1-19 on an Rpgr null background led to a more severe phenotype than in the Rpgr null mouse.While over-expressing wildtype RpgrORF15 is better tolerated, this may still cause problems by altering Rpgr isoform ratios.Overexpression of a genomic fragment containing the entire mouse Rpgr gene resulted in flagellar defects and male infertility, with a severity correlating with Rpgr copy number.Rpgr co-localised with acetylated α-tubulin in mouse sperm flagella.These results suggest the need for careful control of RPGR expression levels.Advances in stem cell-derived retinal differentiation have led to the possibility of photoreceptor precursor transplant into diseased eyes.Embryonic stem cell-derived retinal pigment epithelium for cell replacement in RPE dystrophies is already in clinical trials and appears to be safe.The optimum developmental stage of photoreceptor progenitor cells for transplantation has been established and the procedure has led to improvement of vision in blind mice.Photoreceptor progenitor cells derived from three-dimensional ESC cultures can also integrate into rodent retina.However, whilst cell replacement may be a viable treatment option for retinal dystrophies in the future, the extent of rod loss experienced in RPGR disease raises questions as to whether sufficient numbers of rod progenitors can integrate into the retina to make an impact on visual loss.RPGR mutations are responsible for 10–20% of all RP cases and cause severe disease for which there is no treatment.This review has sought to summarise current understanding of RPGR biology, including proposed roles in the ciliary gate that regulates protein trafficking to and from the photoreceptor OS.The results of treating animal models of RPGR disease show considerable promise and suggest that gene replacement therapy and, in the future, cell replacement therapy, could lead to improved visual function in this disorder.
Dysfunction of the connecting cilia due to mutations in ciliary proteins is a common cause of the inherited retinal dystrophy retinitis pigmentosa (RP).Mutations affecting the Retinitis Pigmentosa GTPase Regulator (RPGR) protein are one such cause, affecting 10-20% of all people with RP and the majority of those with X-linked RP.Recently, there have been important advances both in our understanding of RPGR function and towards the development of a therapy.We discuss key models used to investigate and treat RPGR disease and suggest that gene augmentation therapy offers a realistic therapeutic approach, although important questions still remain to be answered, while cell replacement therapy based on retinal progenitor cells represents a more distant prospect.
The Affymetrix mRNA profile data are provided as CEL files deposited on GEO.RNA was extracted from miR-21 over-expressing Jurkat cells and the matched control cell line.For each cell line, Input, AGO2 IP and isotype-matched IgG IP samples were analysed.Small RNAs profiled from the same cell lines are reported as collapsed reads with read counts in tab delimited unix txt format.Viral particles were obtained by co-transfection of 293T cells with the lentiviral plasmid and the PLP-1, PLP-2 and PLP-VSVG plasmids and concentrated by ultra-centrifugation.Jurkat cells were transduced at an MOI of 15 and selected with puromycin.Jurkat cells were lysed in lysis buffer.Lysates were clarified and pre-cleared by protein-G sepharose beads.An aliquot of total extract was taken out.Monoclonal anti-AGO2 and an equal amount of purified rat IgG were incubated with the pre-cleared lysate.Samples were washed with lysis buffer and wash buffer, treated with RNase-free DNase I and subjected to proteinase K digestion.After the final wash, an aliquot was taken out for western blotting.mRNAs co-immunoprecipitated with anti-AGO2 antibodies from both pRRL21 and pRRL-Ctrl Jurkat cell lines were profiled by microarray technology along with total RNAs and co-IPed RNAs with rat IgG.Thus each experimental replicate included the following six samples: Input, AGO2 IP and IgG IP from each of the two cell lines.We used the Affymetrix Human Genome HG U133 Plus 2.0 array and the Dnavision service provider to run each microarray experiment and performed it in three biological replicates.However, due to technical failure of one sample in one replicate, only two complete datasets were used for subsequent analyses.sRNAs bound to AGO2 were analysed by Illumina deep sequencing.Identical reads were counted.Data are provided as tab delimited files.Each line contains two fields: the number of reads and the read sequence.Adaptor sequences were not removed.This work was supported by the European Commission Framework Program 6 Project “Sirocco” and AIRC Grants to G.M., by Grants from Associazione Italiana Ricerca sul Cancro and the CARIPLO Foundation.
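As a small, hypothetical example of how the collapsed small-RNA files described above could be read: the file name and the assumption that the second field holds the read sequence are ours, not part of the deposited data.

```python
# Minimal sketch for reading a collapsed small-RNA file: tab-delimited text,
# one line per unique read, assumed to contain (read count, read sequence).
import csv

def load_collapsed_reads(path):
    """Return a dict mapping read sequence -> read count."""
    counts = {}
    with open(path, newline="") as handle:
        for row in csv.reader(handle, delimiter="\t"):
            if len(row) != 2:
                continue                      # skip malformed or header lines
            n_reads, sequence = row           # assumed field order
            counts[sequence] = int(n_reads)
    return counts

# Hypothetical usage:
# ago2_reads = load_collapsed_reads("AGO2_IP_collapsed_reads.txt")
# print(sum(ago2_reads.values()), "total reads,", len(ago2_reads), "unique reads")
```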
We set out to identify miR-21 targets in Jurkat cells using a high-throughput biochemical approach (10.1016/j.biochi.2014.09.021 [1]).Using a specific monoclonal antibody raised against AGO2, RISC complexes were immunopurified in Jurkat cells over-expressing miR-21 following lentiviral transduction as well as in Jurkat control cell lines.A parallel immunoprecipitation using isotype-matched rat IgG was performed as a control.AGO2-associated mRNAs were profiled by microarray (GEO: GSE37212).AGO2-bound miRNAs were profiled by RNA-seq.
Fig. 1 shows the SDS-PAGE analysis of the Taq DNA polymerase expression in pET-28b, and Fig. 2 shows the gray scale value analysis result of the recombinant Taq DNA polymerase.A considerable increase in protein expression can be seen.Fig. 3 is the SDS-PAGE analysis of the Taq DNA polymerase purification from the pET-28b recombinant, which shows the purity of the final protein product.The polymerase gene was amplified by PCR from the template plasmid, which contains the target gene in the pTTQ18 vector.The upstream and downstream DNA primers were 5′-CATATGCGGGGGATGCTGCCCCTCTT-3′ and 5′-GAATTCTCACTCCTTGGCGGAGAGCCAGTC-3′, with restriction sites of Nde I and EcoR I, respectively.The 2508 bp PCR product was inserted into the pGEM T-Easy plasmid using T/A cloning.The target DNA was verified by sequencing.Both the recombinant plasmid and the pET-28b plasmid were then digested with Nde I and EcoR I.Subsequently, the 2508 bp fragment was ligated into the pET-28b plasmid using T4 DNA ligase.The pTTQ18 and pET-28b recombinant plasmids were each transformed into competent E. coli BL21 cells for expression.Both of them were processed by the same expression procedure.1 ml of the overnight culture was transferred into 200 ml of LB broth to grow.Isopropyl-β-D-thiogalactopyranoside (IPTG) was then added to the growth medium when the OD600 of the culture was between 0.3 and 0.6.After being cultured for an additional 4–6 h, cells were harvested by centrifugation.The supernatant was discarded and 20 ml of PBS was used to re-suspend the cell pellet.50 μL of the suspension was taken out for expression identification, and then the remaining cells were washed by centrifugation.The suspension was analyzed by SDS-polyacrylamide gel electrophoresis.The remaining cell pellets were harvested and temporarily stored at −80 °C.The frozen cell pellets were re-suspended in 10 ml of buffer A for 15 minutes at room temperature.Then 10 ml buffer B was added, followed by an incubation at 75 °C for 1.0 h with tube inversion every 10 minutes.The samples were centrifuged and the supernatant was transferred into a new tube.To every 100 ml of sample, 30 g of powdered (NH4)2SO4 was added.The samples were centrifuged and the supernatant was discarded.Both the precipitated and the floating protein were harvested, and 4 ml of PBS was added to dissolve the protein.The samples were dialyzed against buffer C, with the buffer replaced three times at 4 h intervals.Glycerol was added to 33.3% in the final polymerase product, which was stored at −80 °C.20 μL of the protein was used for SDS-PAGE analysis of purity.
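As a quick illustration (not part of the original protocol), the restriction sites used for cloning can be confirmed directly from the primer sequences quoted above; a full design check would also scan the 2508 bp insert for internal Nde I/EcoR I sites.

```python
# Check that the cloning primers carry the intended restriction sites.
# Recognition sequences: Nde I = CATATG, EcoR I = GAATTC.
PRIMERS = {
    "forward": "CATATGCGGGGGATGCTGCCCCTCTT",
    "reverse": "GAATTCTCACTCCTTGGCGGAGAGCCAGTC",
}
SITES = {"NdeI": "CATATG", "EcoRI": "GAATTC"}

for site_name, site_seq in SITES.items():
    for primer_name, primer_seq in PRIMERS.items():
        pos = primer_seq.find(site_seq)
        if pos != -1:
            print(f"{site_name} site ({site_seq}) in {primer_name} primer at position {pos}")
```

Running this shows the Nde I site at the 5′ end of the forward primer and the EcoR I site at the 5′ end of the reverse primer, matching the cloning strategy described.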
The polymerase chain reaction (PCR) is widely used in many experimental settings, and Taq DNA polymerase is critical to the PCR process.In this article, the Taq DNA polymerase expression plasmid is reconstructed and the protein product is obtained by rapid purification (“Rapid purification of high-activity Taq DNA polymerase” (Pluthero, 1993 [1]), “Single-step purification of a thermostable DNA polymerase expressed in Escherichia coli” (Desai and Pfaffle, 1995 [2])).Here we present the production data from protein expression and provide the analysis results of the products from two different vectors.Meanwhile, the purification data are also provided to show the purity of the protein product.
Difficulty regulating negative emotions has been linked to the onset and maintenance of anxiety and depression.More recent studies also suggest a relationship between emotion regulation difficulties and post-traumatic stress disorder.Whilst promising, these studies are limited to retrospective designs and reliant on self-report questionnaires to assess emotion regulation skills, including strategies typically used, as well as self-reports of emotion intensity.The difficulty with these designs is that they rely on participants accurately reporting and being aware of the intensity of their emotions and how they regulate them.Since discrepancies between self-report and physiological measures of emotion intensity have been found, the assessment of emotion regulation should ideally incorporate both self-report and objective measures.This is especially important because deficits in emotion regulation can manifest as chronically elevated subjective negative affect relative to physiological activity, regardless of the level of environmental demands.It is unclear whether PTSD symptom severity is linked to objective difficulties in regulating negative emotions as opposed to perceived difficulty regulating emotion.The current study aimed to address this gap in the literature.Gross proposed a process model of emotion regulation, linking the timing of emotion regulation strategies to their effectiveness.The strategies used early in the process of generating an emotion are known as antecedent-focused and are thought to be more effective than those employed once an emotion is already underway, known as response-focused strategies.The model outlines three categories of strategies individuals may use to regulate their emotions, both positive and negative, once in any given situation.Attentional deployment is the first and refers to strategies to direct attention, such as choosing to focus on a particular part of the situation or environment.The second category is cognitive change and includes strategies, such as cognitive reappraisal, which alter the meaning of a situation to change its emotional impact.Response modulation is the third category and includes response-focused strategies, such as emotion suppression, expressive suppression or drug use, to influence physiological, experiential or behavioural reactions.PTSD has been associated with emotion regulation strategies involving response modulation such as emotion suppression and expressive suppression whereas cognitive change is generally reported to be under-utilised.More recently, Boden et al.
prospectively investigated the relationship between emotion regulation strategies at intake for residential group CBT for PTSD and PTSD symptom severity at discharge in a sample of military veterans.Expressive suppression was associated with greater PTSD symptom severity whereas cognitive reappraisal was associated with fewer PTSD symptoms.Additionally, change in the use of expressive suppression during treatment predicted PTSD symptom severity at discharge even after accounting for baseline PTSD symptom severity.The greater the decrease in an individual's use of expressive suppression during treatment, the lower the PTSD symptom severity scores at discharge.These results further highlight the tendency for those with greater PTSD symptoms to use response modulation strategies more often and cognitive change strategies less frequently.While the aforementioned studies suggest a link between self-reported difficulties in emotion regulation and PTSD in groups of individuals exposed to a range of trauma, including women who have experienced childhood sexual abuse, military veterans and other trauma-exposed populations, they all relied on self-report measures of emotion regulation incorporating a retrospective design.Only one study could be found which assessed the ability of trauma-exposed individuals to regulate negative emotions in real-time.Following the aftermath of the 9/11 terrorist attacks, Bonanno, Papa, Lalande, Westphal, and Coifman presented New York college students with unpleasant images on a computer screen.Participants were instructed to enhance or decrease their negative emotional responses to the images.Those who were better able to enhance and decrease their negative emotions showed less psychological distress by the end of the second year following these attacks.Although PTSD symptomatology was not measured, this study provides preliminary evidence that difficulty regulating negative emotions on-line can be measured using experimental tasks and is linked to the development of psychological distress following trauma.However, as in the other studies, these authors relied only on self-report ratings of the intensity of emotional experience during the experimental task and a physiological measure of emotion regulation was not adopted.A possible way forward in this important area of research would be to incorporate an objective physiological measure when assessing the ability to regulate negative emotions.One study has used concurrent assessment of physiology in addition to subjective reports of emotional experience in a healthy student population using an experimental task.Jackson, Malmstadt, Larson, and Davidson presented students with unpleasant and neutral images on a computer screen with instructions to enhance, decrease, or maintain their emotional responses whilst startle eyeblinks were measured.Instructions to decrease negative emotions led to smaller startle eyeblinks and instructions to enhance negative emotion led to larger startle eyeblinks.This was an important study in suggesting that an experimental task could be used to measure emotion regulation objectively through the assessment of its effects on physiological activity.Since measuring skin conductance responses (SCR) is a less intrusive mode of measuring physiological arousal compared to startle eyeblinks, we chose SCR as a physiological measure of emotion regulation for our trauma-exposed participants.To our knowledge, no study has yet investigated the regulation of negative emotions and the corresponding effect on
SCR in trauma-exposed participants without any training in emotion regulation being provided.We measured SCR and self-report ratings of emotion as indicators of emotion regulation ability whilst trauma-exposed participants were instructed to enhance, maintain or decrease their negative emotional responses to unpleasant images presented during a computer task.Participants were not provided with instructions or training regarding how they might regulate their negative emotions.Participants were also asked
Objectives Retrospective studies suggest a link between PTSD and difficulty regulating negative emotions.This study investigated the relationship between PTSD symptoms and the ability to regulate negative emotions in real-time using a computerised task to assess emotion regulation.The results suggest a relationship between emotion regulation ability and PTSD symptoms rather than emotion regulation and PTSD.
to record intrusions related to the computer task for the week following participation using diaries since previous research has suggested that reductions in physiological arousal whilst being shown unpleasant images in the form of traumatic films may lead to the development of trauma-related intrusive memories.Our aims were fourfold: to validate the experimental task and the use of SCR to measure the regulation of negative emotions in real-time in trauma-exposed individuals who had not been provided with any specific instructions or training, to explore whether emotion regulation was related to PTSD symptom severity, to investigate whether specific strategies were linked to greater PTSD symptom severity, and to assess the relationship between changes in arousal during attempts to regulate negative emotions and the subsequent development of intrusive memories.In relation to our first aim, it was predicted that self-reported and objective indices of emotion regulation would be greater towards unpleasant images compared to neutral images, and would be greatest following instructions to enhance, smallest following instructions to decrease, and in between following instructions to maintain initial negative emotional responses towards unpleasant images.We also hypothesised that difficulty regulating negative emotions on the computer task would be associated with greater PTSD symptom severity.Due to the novel nature of the design, we made no assumptions or hypotheses about whether emotion regulation difficulties in those with greater PTSD symptoms would be present in all or in just some conditions.Since healthy participants are able to enhance and decrease negative emotions in response to negative pictures with corresponding effects on physiology, difficulty regulating emotion in response to any of the instructions would be suggestive of emotion regulation difficulty.It was therefore hypothesised that PTSD symptom severity would be related to difficulty enhancing, decreasing or maintaining initial negative emotions in response to unpleasant images.It was further hypothesised that PTSD symptom severity would be associated with greater use of response modulation and less use of cognitive change strategies.Finally, we predicted that decreases in arousal during emotion regulation attempts would be associated with an increased frequency of intrusive memories for the unpleasant images presented during the computer task over the following week.Forty-five ambulance workers were recruited.The mean age of participants was 37 years.Forty-four of the participants were White British and one was British Asian.Sixty-two percent of participants were single, 24% were married, 11% were divorced, and one participant preferred not to specify.On average, participants had been working as ambulance workers for 8.6 years.Psychophysiological recordings were taken from the non-dominant hand.In the sample, 87% of participants were right-handed.Previous exposure to traumatic events was measured by a modified version of the trauma list in the Clinician Administered PTSD Scale.Symptoms of PTSD were measured with the Post-Traumatic Stress Diagnostic Scale (PDS).The PDS is a widely used measure of PTSD symptoms with 17 items.Possible scores range from 0 to 51.Scores of ≤10 are classified as ‘Mild’, 11–20 as ‘Moderate’, 21–35 as ‘Moderate to Severe’ and ≥36 as ‘Severe’.Symptoms of depression were measured with the Beck Depression Inventory (BDI), a reliable and valid 21-item measure of depressed
mood.Possible scores range from 0 to 63.Scores of 0–9 indicate that a person is not depressed, 10–18 indicate mild-moderate depression, 19–29 indicate moderate-severe depression and scores of 30 or above signify severe depression.Emotion intensity was measured with a Likert scale ranging from 1 to 9.After responding to an emotion regulation instruction, participants were asked to rate the strength of their emotion linked to each image on this scale.To assess the emotion regulation strategies participants employed during the task, we administered the Self-Reported Emotion Regulation Strategies Questionnaire.This questionnaire asked two open-ended questions: “What strategy or strategies did you use to suppress your negative emotions?” and “What strategy or strategies did you use to enhance your negative emotions?” Participants were required to explain in their own words what strategy or strategies they had used for each condition.Participants were permitted to write down as many strategies as they believed they had used during the computer task.For each strategy participants described, they were prompted to estimate the percentage of time they had used that strategy during the computer task.In the week following the computerised task, participants were asked to record daily how often they had experienced spontaneously occurring intrusive memories of any of the unpleasant images that were presented to them during the computer task in an intrusive memory diary.Thirty unpleasant pictures and 10 neutral pictures were selected from the International Affective Picture Set.Unpleasant images included various types of bodily injury, death and violence.Ten neutral images were selected and included common household or other objects.The 40 images randomly appeared on a computer screen for 12 s in total.The methodology was adapted from Jackson et al.Each trial followed five steps: 1) a randomly selected image appeared on the computer screen and participants were instructed to simply observe the image for 4 s.This served as the initial emotional response period; 2) at 4 s post-image onset, a digitised human voice gave participants a one-word emotion regulation instruction whilst the image remained on the computer screen.For unpleasant images, participants were randomly instructed to ‘ENHANCE’, ‘MAINTAIN’ or ‘DECREASE’ their emotional response.For neutral images, they were only ever instructed to ‘MAINTAIN’ their emotional response; 3) for the remaining 8 s of the image presentation, participants attempted to regulate their emotional responses as instructed; 4) the image then disappeared from the computer screen and a Likert scale appeared.Participants were instructed to rate the strength of their negative emotion on a scale
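The severity bands quoted above for the two questionnaires can be summarised as simple cut-off functions; the sketch below only restates the published bands and is not code used in the study.

```python
# Map questionnaire totals onto the severity bands described in the text.
def pds_severity(score: int) -> str:
    """Post-Traumatic Stress Diagnostic Scale total (0-51) -> severity band."""
    if score <= 10:
        return "Mild"
    if score <= 20:
        return "Moderate"
    if score <= 35:
        return "Moderate to Severe"
    return "Severe"

def bdi_severity(score: int) -> str:
    """Beck Depression Inventory total (0-63) -> severity band."""
    if score <= 9:
        return "Not depressed"
    if score <= 18:
        return "Mild-moderate"
    if score <= 29:
        return "Moderate-severe"
    return "Severe"

assert pds_severity(22) == "Moderate to Severe"
assert bdi_severity(8) == "Not depressed"
```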
Immediately after the computer task, participants were asked to describe the strategies they had used to regulate their negative emotions during the task and recorded spontaneous intrusions for the unpleasant images they had seen throughout the following week.Results PTSD symptoms were associated with difficulty regulating (specifically, enhancing) negative emotions, greater use of response modulation (i.e., suppression) and less use of cognitive change (i.e., reappraisal) strategies to down-regulate their negative emotions during the task.
participants were asked to decrease their negative emotions.The use of cognitive change strategies, such as cognitive reappraisal, was related to less arousal when decreasing negative emotions.This is consistent with the body of research that suggests that response modulation strategies are inefficient methods for altering physiological responding in the desired directions when attempting to suppress negative emotions, and that cognitive reappraisal is more adaptive in reducing physiological responding during attempts to decrease negative emotion.There was no relationship between time engaged in response modulation and cognitive reappraisal strategies and SCR when required to enhance emotions.However, the former finding is likely to be due to floor effects since most participants did not engage in response modulation strategies to enhance negative emotions.Whilst some studies have found that cognitive reappraisal strategies are linked to greater physiological responding when enhancing negative emotions, our failure to demonstrate this association may relate to our sample.Since ambulance workers are regularly exposed to traumatic stimuli as part of their job, they may be more adept at decreasing rather than enhancing their negative emotions and as such, physiological responses may be more evident in decrease regulation conditions in this sample.This is consistent with our overall finding that across the sample, physiological arousal during the emotion regulation period decreased following each type of instruction.However, as predicted, the rate of this decrease was smallest following instructions to enhance, greatest following instructions to decrease, and in-between following instructions to maintain initial emotional responses.Turning to intrusive memories, a greater number of intrusive memories in the week following the task was associated with lower physiological arousal when decreasing negative emotions.This relationship could not be explained by participants' initial arousal before they were instructed to regulate their emotions or current PTSD symptoms.This finding replicates that of Holmes et al.Engaging in emotion regulation strategies may disrupt encoding and hence, memory consolidation for the analogue trauma, which may decrease physiological arousal in the short-term but lead to more intrusive memories subsequently.However, since we did not assess the stability of memory for the analogue trauma this is a preliminary hypothesis at this stage.Subsequent intrusions were unrelated to increased physiological arousal when enhancing or maintaining negative emotions, which suggests that it is decreases in physiological arousal during exposure to analogue trauma, as demonstrated in previous studies, that are related to intrusive memory development.Our results demonstrate that emotion regulation can be assessed via SCR in trauma-exposed individuals with PTSD symptoms.Furthermore, the findings indicate that training in how to regulate emotions is not necessary in such experimental designs.SCR may be a more acceptable mode to objectively measure emotion regulation in this group compared to more invasive and expensive psychophysiological measures such as the assessment of startle eye blink responses, and suggests that this paradigm could be extended to clinical populations.Our results may suggest that difficulty enhancing negative emotions is linked to PTSD symptoms.Since increased PTSD symptoms were related to greater use of response modulation strategies and less use of cognitive change, the results
suggest that these symptoms may influence the types of strategies employed to regulate negative emotion.The results add to the growing body of research that points to the adverse consequences and correlates of expressive suppression, and underscores the benefits of cognitive reappraisal in effectively modulating emotion.Future research is needed to determine the direction of causality between the use of emotion regulation strategies, such as expressive suppression and the limited use of cognitive reappraisal, and the development of PTSD.This study has limitations.First, although one third of the sample had PTSD scores in the moderate to severe range, two thirds of the sample had mild levels of PTSD symptoms, which may limit the generalisability of the findings.Future studies should aim to recruit participants with clinical levels of PTSD symptoms so that findings can be considered truly representative of the disorder.Second, ambulance workers are routinely exposed to trauma as part of their job and are likely to utilize emotion regulation strategies that allow them to be effective during emergency situations at work.That is, they are required to be adept at not enhancing their negative emotions in such circumstances.The use of ambulance workers may limit the generalisability of the current findings to other trauma populations.Third, the study may lack ecological validity in terms of the generalisability of the computer task to real-life traumatic events.However, it permitted an objective measure of emotion regulation in real-time.Fourth, the diary used to record intrusive memories perhaps lacked reliability.There was no way of checking that participants completed these accurately.However, this measure has been used frequently in prior research in the field of trauma.Finally, the design was not prospective.Subsequent research could employ a prospective design to ascertain the relationship between difficulty regulating emotion and the development of PTSD symptoms.This study suggests that emotion regulation can be measured in real-time with a computerised task in trauma-exposed individuals and that physiological measures of arousal may be a more reliable marker of emotion regulation ability compared to self-report.The results suggest that difficulty regulating negative emotions may be a feature of individuals with PTSD symptoms.PTSD symptoms were associated with greater use of response modulation strategies and less use of cognitive change strategies to regulate negative emotions.More intrusions developed in participants who had greater reductions in arousal when attempting to decrease negative emotions.
Method: Trauma-exposed ambulance workers (N = 45) completed self-report measures of trauma exposure, PTSD symptoms and depression. Results: More intrusions developed in participants who had greater reductions in physiological arousal whilst decreasing their negative emotions. Conclusions: Difficulty regulating negative emotions may be a feature of trauma-exposed individuals with PTSD symptoms, which may be linked to the types of strategies they employ to regulate negative emotions.
Precious Woods have contributed to safeguarding community access to resources.Villagers have the possibility of defining how these areas and trees therein are managed and could make decisions to conserve them.They could even decide to increase the density of such trees by planting them, although this would involve waiting for them to mature and would also require that villagers have sufficient area for their agricultural production as well.Another factor of importance in evaluating the potential for conflict between timber and non-timber uses of fruit trees is the diameter at which trees start to produce fruit, as compared to their minimum cutting diameter.For these two species, it seems that the minimum diameter for fruit production differed considerably: whereas 75% of Abam fruit collection trees were above the minimum cutting diameter, only a third of Ozigo fruit collection trees had reached this diameter.Where trees below the minimum cutting diameter produce fruit, there is no immediate conflict between fruit collection and timber production.Clearly, to evaluate policy options for conserving access to fruit from timber trees for villagers, it is important to know the geographic distribution of those trees, the potential for villagers to reach them, both physically and through access rights; and the size at which they start to produce fruit as compared to the minimum cutting diameter.At a minimum, information needs to be gathered, on a species by species basis, about abundance, size classes and physical location of the trees in question, as well as about their biology."This information can provide the foundation for forest management under Karsenty and Vermeulen's proposed Concession 2.0 model, to promote both conservation and benefits from both timber and non-timber resources, to meet multiple stakeholder interests.Similarly, more encompassing and adapted approaches to forest management were documented by Ros-Tonen et al. from the Brazilian Amazon, where a partnership among local forest users, private sector and civil society facilitated the insertion of NTFPs into timber-oriented models.Though a number of logging companies in the Congo Basin have been certified as well-managed, concessions with no management plans remain numerous.It is likely that their exploitation practices negatively impact both the forest ecosystem and livelihoods, including access to food resources from timber trees by villagers.Responsible and ecologically-sensitive logging can be a source of multiple benefits: biodiversity conservation, timber and livelihood benefits for forest communities.Many more studies are needed to evaluate whether conflicts are reducing access and to understand the best ways to address them if that is the case.
This study assessed the abundance of and access to tree species (Ozigo, Dacryodes buettneri; and Abam, Gambeya lacourtiana) that yield edible fruits to villagers and timber to the logging industry in and around a logging concession in Gabon. Participatory mapping combining GPS coordinates and interviews was carried out with 5 female and 5 male collectors in each of two villages within or adjacent to the logging concession. Precommercial and harvestable (>70 cm dbh) Ozigo and Abam trees, as well as their stumps, were also quantified on 20 five-ha plots in the 2012 cutting area of the concession and on 21 five-ha plots on 10 km transects from each village. Distances to 59 Abam and 75 Ozigo from which fruits were collected ranged from 0.7 to 4.46 km from the village centres. Almost 28% of all of the collection trees were inside the logging concession boundaries but outside the village agricultural zone, 43% were inside the village agricultural zone, and 29% were outside the logging concession. Only 33% of Ozigo collection trees had reached commercial size while 75% of Abam trees had. No stumps were found on any sample plots, probably reflecting the ban on felling Ozigo which was in effect at the time, and the relatively low commercial value of Abam. Densities of precommercial Ozigo trees in the cutting area were more than double their densities around the villages (236.0 ± 20.3 and 96.6 ± 17.2 trees per 100 ha, respectively), while densities of harvestable Ozigo trees were 7 times higher in the cutting area than around villages (120.0 ± 20.2 and 17.1 ± 3.4 trees per 100 ha, respectively). This probably reflects past and current anthropogenic pressures around the villages, including logging and land clearance for agricultural fields. Densities of precommercial Abam were almost four times higher around the village (22.3 ± 5.6 and 6.0 ± 2.9) than on the cutting area. Villagers did not record a decline in availability of or access to these fruits over the past 5 years, suggesting little or no immediate conflict between timber production and access to fruits from these trees.
Linear friction welding is a solid state joining method, in which one component is subjected to reciprocating transverse motion against a stationary component, under axial compression.Four stages of the process were defined by Vairis and Frost : initial phase: when heat is generated through sliding friction; transition phase: the formation of a plasticised layer and full contact; equilibrium phase: the plasticised material is expelled as flash, with axial shortening; and deceleration and forge phase, during which oscillations stop, under the axial forging pressure.The process offers many of the metallurgical benefits of solid state friction welding, and has found commercial application in joining titanium alloy aero-engine compressor blades to disks.The LFW process, its application to Ti alloys, and the resulting properties have recently been reviewed in depth by McAndrew et al. .A number of authors have studied LFW to examine the influence of welding parameters on: the process operation; microstructure and texture in the weld zone, TMAZ and HAZ; and final mechanical properties.One of the important factors in LFW of Ti alloys is the phase transition between two different crystal structures: modified HCP α-Ti at lower temperatures, and BCC β-Ti at elevated temperatures, with the transition over a temperature range around 1000 °C in the common Ti6Al4V alloy.Several processing characteristics and final properties have been tied to the α → β → α transformations in LFW of Ti alloys .For example, a high joint strength has been attributed to a refined microstructure, largely an outcome of rapid cooling of fully β transformed material .Strong textures in the heavily deformed region adjacent to the bond-line were also attributed to the β → α transformation in that region .Microstructural observations also suggested that temperatures exceeded the β-transus in the weld zone, but were below the β-transus in the TMAZ, which is consistent with the significant softening which occurs when α transforms to β.Numerical and occasional analytical modelling has been used to study all of the key aspects of the LFW process summarised above: the process operation, evolution of microstructure and properties, and their dependence on welding parameters.The emphasis in the present work is on modelling the heat generation directly from the processing conditions and the constitutive behaviour of the material.The objectives of this are: to improve the capability of FE modelling for predicting viable operating conditions for new component geometries and alloys, reducing the extent of empirical trials; to improve the definition of the input power history at the weld interface, as an essential input to predicting the temperature history and everything that flows from it.LFW modelling in the literature is reviewed here in two steps – first, to summarise the diverse goals of previous modelling on LFW, and second, to identify the key issues that need to be addressed in relation to modelling heat generation.Some of the earliest work on LFW was by Vairis and Frost, who followed up their experimental investigations with analytical and numerical models of LFW predicting the temperature and frictional shear stress during the process.For computational efficiency, their model comprised a single deformable specimen in contact with a rigid body representing the second workpiece, with Coulomb friction between the two, varying with interface temperature.The model included work done in friction and plasticity, and model validation used 
average transverse force and temperature from a single thermocouple.Comparisons with other friction processes were handled in a similar fashion in the later work of Vairis et al. .Temperature prediction forms a standard part of all subsequent modelling, often to relate weld temperatures to the α-β phase transformation.For example, from an FE analysis of LFW of Ti alloy in two conditions, Sorina-Müller et al. concluded that the weld interface reached the α-β transformation temperature, confirmed through microstructural analysis.Ji et al. also developed a thermal model to inform their interpretation of LFW of Ti alloys with different starting microstructures.Thermal FE modelling to interpret microstructure evolution is not limited to Ti alloys – Lis et al. used a thermal model for LFW of 5000 and 6000 series Al alloys to derive relationships between weld hardness and temperature.Understanding the transition from initial sliding to the full contact phase has been addressed empirically and numerically.For example, the group of Buffa, Fratini and co-workers used experimentally measured forces and a finite element model of LFW in an iterative procedure to obtain a ‘shear coefficient’ – the ratio of the average shear stress at the interface to the material shear yield stress.The shear coefficient was variously explored as constant, temperature-dependent, or time-dependent.Schröder et al. accounted for the transition from conditioning to equilibrium phase via an effective friction coefficient, with a heat input model derived from experimental force and velocity data.In their later work, Buffa et al. used experimentally measured power directly as heat input in their thermal FE model, not only to derive the coefficient of friction for their thermomechanical models.Potet et al. used an alternative method of estimating the frictional forces and heat input, calibrating the friction coefficient to match experimental thermocouple data and temperature predictions of their thermomechanical model.In a series of papers, Li et al. have applied FE analysis to LFW, for carbon steels , Ni-based superalloys and Ti alloy .At the simplest level, these models were used to show that the interface temperature was relatively uniform, with a steep thermal gradient away from the interface.The temperature field was subsequently input for the prediction of residual stresses in LFW of Ti64 .Their thermal analysis of cooling
A separate continuous thermal model of the process (Jedrasiak et al., 2018), provided the spatial temperature field as an input to each mechanical analysis.Axial shortening of the weld geometry required particular attention, and was handled by discarding thin layers of elements at discrete intervals to match the flash expulsion rate.
approach was used by the current authors , who compared experimental power obtained in this fashion, with the power history reverse-engineered from thermocouple data.This thermal FE model of LFW, with its independent estimates of power history, is used here for validation of the new deformation model.To date, numerical modelling of LFW has been dominated by fully coupled thermomechanical finite element analysis with explicit time integration.Large deformations are commonly handled by an Arbitrary Lagrangian–Eulerian kinematic description, or other remeshing techniques.The computational effort associated with these approaches has been typically minimised by reducing the problem to two dimensions, or substituting one part with a rigid body – even so, many authors reported substantial computation times.All previous approaches to thermomechanical modelling of LFW are computationally intensive, since LFW involves both large strains and many deformation cycles.The aim of the current work is to advance a computationally efficient approach for modelling LFW, building on a methodology proposed for friction stir spot welding and ultrasonic welding .FSSW is characterised by large rotary plastic strains, while USW involves small strains per cycle, but very many deformation cycles, due to the high process frequency.The aspects of this methodology adapted to LFW are: modelling the heat generation directly from a kinematic description of the workpiece interaction and the constitutive response of the alloy; analysis of the heat generation through multiple small strain analyses as ‘snapshots’ within the large strain process; modelling the heat flow and temperature field continuously throughout the process, but only finding the heat generation from single deformation cycles, at intervals through the weld time.By decoupling the timescales of the intensive deformation model from the fast thermal model, and using small strain analysis, the resulting model is computationally efficient.Fig. 2 illustrates the underlying concept behind the small strain method.Within each cycle, the temperature field from the thermal model is imposed as an input to the deformation model, while the heat generation rate distribution from the deformation model becomes an input load in the thermal model.The main novelty lies in the deformation model, which takes periodic ‘snapshots’ of the plastic heat dissipation at an instant, over a much shorter timescale than the interval between deformation analyses, but sufficient to capture the plastic strain-rate distribution.This power distribution is applied over the longer timescale of the thermal model, updating the temperature field, and the cycle repeats.The assumption is therefore that heat flow evolves over much longer timescales than deformation.There are two main sources of computational efficiency – firstly, the deformation model simulates only a small fraction of the total process time; and secondly, the strain and mesh distortions are small, so that the demanding kinematic description and remeshing associated with large strains can be avoided.The basic procedure is shown in Fig. 
2.The small-strain finite element ‘snapshot’ approach has proven to be a reliable, computationally efficient method when modelling FSSW of Al and Mg alloys .Here, the method is applied to linear friction welding of Ti6Al4V alloy, which tests its applicability to large strain, high-temperature, cyclic plastic deformation, with an evolving geometry.As a proof-of-concept, the model is applied to the equilibrium stage of LFW, once full contact is established between the workpieces.Experimental data for this modelling work was obtained from instrumented LFW conducted at TWI Cambridge by collaborating researchers – the details are presented in a previous paper .As a test case of the new small-strain approach to modelling LFW, the results are presented for a single set of welding parameters: 50 Hz frequency, 2.7 mm oscillation amplitude, downforce of 100kN and 3 mm burn-off.The workpieces were made of two-phase α-β Ti-6Al-4V, with the geometry shown in Fig. 3.Temperature was measured with four k-type thermocouples, inserted into drilled holes and fixed using an epoxy resin, using a data sampling rate of 1000 Hz.Forces and displacements were measured in both oscillation and axial directions.As discussed earlier, these data were used to estimate the contribution from machine inertia and hence the interface force and power input .Difficulties with data sampling rate and small phase shifts between the recorded signals limited the accuracy of this estimate, but the form of the power history with time were consistent with those inferred by thermal modelling, as illustrated below.The finite element model for the thermal field is presented in , and outlined here as far as it impacts on the deformation model.The thermal model is two-dimensional, as the heat flow was practically one-dimensional in the axial direction, while the plastic deformation takes place through shearing in a thin layer at the welded interface, parallel to the oscillations.Due to the low thermal conductivity of titanium and the short welding cycle, the heat flow distance during the weld cycle is limited in extent.Hence the initial dimensions of the workpieces were limited to 10 mm, and remaining parts of the workpieces and clamping could be neglected.For the same reasons, heat losses to the air were neglected, with perfect thermal contact between the workpieces.In the initial stage of LFW, the weld interface at the edges is in intermittent contact over a distance equal to the oscillation amplitude.The extreme edge of the workpiece is exposed to the air for half of the cycle, which leads to a taper in the heat input at the edge to half the value associated with the area of full contact .Since the amplitude is less than 7% of the workpiece length, the extent of this edge effect is modest.Once the interface is in full contact
This was achieved by using multiple small strain analyses during one quarter cycle of workpiece oscillation, giving a snapshot of the average heat dissipation rate in a single complete cycle.This mechanical model for heat generation in a single cycle was then repeated at intervals throughout the equilibrium phase of welding.
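The quarter-cycle 'snapshot' idea lends itself to a compact numerical illustration. The sketch below is a minimal, self-contained example assuming a purely illustrative flow stress law and shear-layer thickness (not the calibrated Ti-6Al-4V constitutive data or FE discretisation used in the study): the relative velocity is sampled at a few instants within one quarter cycle, converted to a shear strain rate, and the resulting plastic dissipation is averaged to give a cycle-averaged areal power.

```python
import numpy as np

# Minimal sketch of the quarter-cycle "snapshot" idea: the cycle-averaged heat
# dissipation rate is estimated from a kinematic description of the oscillation
# and an illustrative strain-rate- and temperature-dependent flow stress,
# evaluated at a handful of instants within one quarter cycle. The constitutive
# constants and shear-layer thickness are assumptions for demonstration only.

freq = 50.0          # oscillation frequency [Hz]
amplitude = 2.7e-3   # oscillation amplitude [m]
shear_layer = 0.5e-3 # assumed thickness of the deforming layer [m]

def flow_stress(strain_rate, T):
    """Illustrative shear flow stress [Pa]: power-law rate sensitivity with
    thermal softening above the beta transus, floored at a small value."""
    sigma0 = 80e6 * (1.0 - (T - 1000.0) / 600.0)
    return np.maximum(2e6, sigma0) * (strain_rate / 1000.0) ** 0.15

def snapshot_power(T_interface, n_snapshots=5):
    """Cycle-averaged areal heat dissipation [W/m^2], from n small-strain
    'snapshots' taken during one quarter of the sinusoidal oscillation."""
    phases = np.linspace(0.0, 0.5 * np.pi, n_snapshots, endpoint=False)
    velocity = 2.0 * np.pi * freq * amplitude * np.cos(phases)   # relative speed
    strain_rate = velocity / shear_layer                         # simple shear
    dissipation = flow_stress(strain_rate, T_interface) * strain_rate * shear_layer
    return dissipation.mean()   # quarter-cycle average equals full-cycle average

for T in (950.0, 1000.0, 1050.0):   # interface temperatures around the beta transus
    print(f"T = {T:.0f} C -> power ~ {snapshot_power(T)/1e6:.1f} MW/m^2")
```

With these assumed values the peak strain rate is of order 1700 s−1, comparable to the lower end of the range reported for the deformation zone later in the text.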
quarter cycle were increased.The power predictions from the small-strain snapshot model are compared with the power inferred from the thermal model in Fig. 11.The power predicted by the deformation model is about 20% higher than that from the thermal model, and showed a systematic variation with a shallow minimum in the middle of the equilibrium stage.The steep temperature gradient led to uncertainty in the thermal model, but also influences the deformation model through the temperature-sensitivity of the flow stress.It is difficult therefore to resolve the discrepancy in magnitude or shape of the power variation, given the accumulated uncertainties in the models.The uncertainty could be reduced by: refining the mesh, to enable more frequent burn-off steps; an increase in the number of snapshot analyses per quarter cycle; refinement of the constitutive data; validation against a wider range of welding conditions.But overall, the agreement between the power variation using the two models suggests that the methodology provides a sound computationally efficient method for predicting the power directly from the constitutive response, to an accuracy of better than 20%.Given the cost of experimental trials, this offers potential benefits in selecting initial trial conditions, particularly for application of LFW to new alloys.To provide further insight into the material deformation, contour maps of heat generation rate and equivalent plastic strain are shown for the end of the equilibrium stage in Fig. 12.Note that the extent of deformation is predicted to be less than 0.5 mm to either side of the interface.The micrograph of a weld cross-section shows that the experimentally measured extent of the TMAZ closely matches the model prediction.An approach to visualization of the material deformation behaviour, which has proven to give valuable insight, was proposed by Colegrove et al. in their work on CFD modelling of FSW .The method is to take the plot of flow stress as a function of temperature and strain-rate, and to overlay contours showing the probability that material will experience the underlying deformation conditions.These ‘material deformation maps’ highlight the dominant temperature and strain-rate regime for the plastic regions in FS welds.This approach helps to understand material softening behaviour, and can be a practical tool for selection of parameter windows or alloys with a desirable constitutive behaviour.The present work modifies this approach, displaying the amount of heat generated in a given set of material conditions, rather than the probability that they will occur.Deformation conditions during each application of the heat generation model are interrogated increment-by-increment and element-by-element, with the heat generated in each element being sorted into “bins” defined by given ranges of flow stress and temperature.The final total in each bin is expressed as a fraction of the total heat input to the weld in that deformation step, and these are superimposed as contours on the flow stress-temperature plot.Fig. 
14 shows these maps for 3 weld times on an enlarged region of the flow stress – temperature plot.In each case the deformation is concentrated within narrow temperature and flow stress bounds, where the flow stress is practically constant, and the temperature varies by around 20 °C.The predicted strain-rates in the deformation zone lie in the range 1500–5000 s−1, which is somewhat higher but of similar order to values cited in more complex models .The analysis also indicates that welding operates above the α → β transition temperature, in the region where the flow stress is relatively low.So this transition is critical in friction welding of titanium.Above that temperature the process achieves a near steady-state, in which the heat generated balances the heat conducted away from the joint line, maintaining constant temperature and deformation conditions.Deformation at temperatures below α → β leads to a high flow stress, generating more heat than necessary to maintain equilibrium.That leads to increasing temperatures and decreasing flow stress, until the conditions from Fig. 14 are achieved.This creates a self-limiting process, where temperatures will always tend to a certain range above the α → β transus.During an idealized equilibrium stage, deformation conditions self-stabilize, and depend on the welding parameters and material characteristics.This paper successfully applied a computationally efficient small-strain thermomechanical modelling approach to the linear friction welding of Ti alloy, testing the concept on the equilibrium stage.The specific techniques developed in this approach, and the principal outcomes, are:A continuous two-dimensional thermal model of LFW was coupled to a thermomechanical model running at discrete intervals during welding – this limits the computationally intensive analysis to a small proportion of the welding time.Each run of the thermomechanical model analyses only a single quarter-cycle of the oscillation, with a number of small-strain analyses during this interval, giving a faster computation by avoiding distortion of the mesh.The plastic work-rates obtained during each quarter-cycle analysis were then time-averaged to give the power over one cycle at that time during the welding process.The expulsion of the material to flash and associated burn-off were taken into account by progressively deleting layers of elements, enabling continuous prediction of the temperature field for input to the deformation model.The power history was predicted directly from a kinematic description of the workpiece motion and the material constitutive response, and the agreement was reasonable with that inferred independently from the thermal model.The predicted width of the deformation zones was close to that seen in a micrograph of the weld cross-section.Further insight was gained via material deformation maps, which indicated that the deformation was concentrated within narrow ranges of temperature and flow stress.
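The binning step behind these material deformation maps is straightforward to reproduce. The following sketch uses randomly generated element data as a stand-in for the increment-by-increment FE output, and simply weights a two-dimensional histogram over temperature and flow stress by the heat generated in each sample; contouring the resulting fractions reproduces the style of map described above.

```python
import numpy as np

# Sketch of the 'material deformation map' post-processing: heat generated in
# every element and time increment is sorted into bins of flow stress and
# temperature, and each bin total is expressed as a fraction of the total heat
# input. The element data below are random stand-ins for the FE output.

rng = np.random.default_rng(0)
n = 5000                                          # element x increment samples
temperature = rng.normal(1020.0, 15.0, n)         # deg C, near the beta transus
flow_stress = rng.normal(60.0, 8.0, n)            # MPa
heat = rng.gamma(2.0, 0.5, n)                     # heat generated per sample [J]

T_edges = np.arange(950.0, 1101.0, 10.0)          # 10 deg C bins
S_edges = np.arange(0.0, 151.0, 10.0)             # 10 MPa bins

heat_map, _, _ = np.histogram2d(temperature, flow_stress,
                                bins=[T_edges, S_edges], weights=heat)
heat_fraction = heat_map / heat.sum()             # fraction of total weld heat per bin

# Contours of heat_fraction over the flow stress vs temperature plane give the
# 'material deformation map' described in the text.
i, j = np.unravel_index(np.argmax(heat_fraction), heat_fraction.shape)
print(f"most active bin: {T_edges[i]:.0f}-{T_edges[i+1]:.0f} C, "
      f"{S_edges[j]:.0f}-{S_edges[j+1]:.0f} MPa, "
      f"{100*heat_fraction[i, j]:.1f}% of total heat")
```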
Heat generation in linear friction welding of Ti alloy was modelled with a computationally efficient finite element analysis.The predicted distributions of plastic strain and heat generation were concentrated within narrow windows of temperature and flow stress, corresponding to a layer of material at the interface less than 1 mm thick, consistent with weld micrographs.
costs for electricity generation of a technology, one sees that it would be most economical to generate electricity from lignite. However, by integrating the external costs of GHG emissions and air pollution, wind power is the most economical electricity generation source, followed by PV, whereas the external costs of lignite make it the least economical option. The presented example is based on conservative assumptions, as the lowest LCOE for lignite is chosen, which may - with an upper value of 7.98 c/kWh - be well above the LCOE of wind power and PV. Furthermore, the emissions of lignite electricity generation are assumed to be quite low, which further illustrates the superiority of renewable energy sources compared to lignite. Unlike other studies, the external costs of wind power generation are not considered zero, as emissions over the whole technology lifecycle are taken into account and not only energy generation, which makes the comparison more comprehensive. Comparing the employment effects, wind power generates approximately half the employment of lignite. However, due to methodological specifics, the estimate of the regional employment effects of lignite is quite high, and nearby regions are also considered, although the lignite mine is only partly situated in the Aachen district. Most of the employment effects accrue to PV, owing to the larger number of regional PV enterprises, whereas specialised enterprises for wind power are located in other regions. Investing €1 million in wind, PV, or lignite allows the generation of 15.7 GWh of wind power, 14.9 GWh of PV electricity, or 21.8 GWh of lignite electricity. However, by taking the external costs into account, the same investment would yield 15.4 GWh of wind power, 13.2 GWh of PV electricity, or 10.7 GWh of lignite electricity. The regional employment effects of the investment are 1 person-year for wind, 9 for PV, and 4 person-years for lignite. In summary, from an environmental and economic perspective, the development of PV and wind power is preferable to investing in lignite electricity generation, with wind power having the lowest generation costs. From a socioeconomic perspective, PV is the preferable electricity generation technology. Regional decision makers should therefore always opt for developing PV over lignite. The decision between developing wind or lignite rests on a trade-off between environmental-economic and socioeconomic concerns, whereas, from a global view, the long-term negative effects of climate change may outweigh individual regional employment. A balanced deployment of PV and wind should, in addition, consider technical issues alongside environmental and economic aspects. From a regional employment perspective, the deployment of PV is more beneficial than wind power, although its energy generation costs are slightly higher. In any case, to achieve a total electricity supply from renewable energies, wind power remains necessary: even by exploiting the whole PV potential in the region, the regional electricity demand could not be satisfied by PV alone. In this regard, incentives are needed to foster the transformation to a low-carbon energy system in fossil fuel regions and to steer regions towards a low-carbon energy policy, as local governments might prefer to support local jobs rather than reduce emissions, a classic "tragedy of the commons" challenge. This can be solved by fully taking into account the negative impacts of fossil energy generation. Internalising the external damage costs could be done by
making emitters accountable or by rewarding operators or regions for their efforts to avoid GHG emissions and air pollution. However, regions relying on fossil fuels have to compensate for structural labour market changes by providing alternative opportunities for employees in the fossil fuel industries; in this case, job retraining is necessary. Ultimately, the number of jobs depends on the ability of regions to attract RES industries as well as on the renewable energy generation potentials, both of which support a sustainable economy in the long term.
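The investment comparison quoted above can be reproduced with a short back-of-the-envelope calculation. In the sketch below, the generation and external cost figures (in c/kWh) are illustrative values chosen only so that the outputs land near the GWh figures quoted in the text; they are assumptions for demonstration, not the study's input data.

```python
# Illustrative check of the "electricity per EUR 1 million" comparison.
# Cost figures are rough assumptions, not the study's inputs.

technologies = {
    #           generation cost, external cost (GHG + air pollution), c/kWh
    "wind":    (6.4, 0.1),
    "PV":      (6.7, 0.9),
    "lignite": (4.6, 4.8),
}

investment_eur = 1_000_000

for tech, (gen_cost, ext_cost) in technologies.items():
    # GWh obtainable for the investment: EUR / (cost per kWh) / 1e6 kWh-per-GWh
    gwh_private = investment_eur / (gen_cost / 100) / 1e6
    gwh_social = investment_eur / ((gen_cost + ext_cost) / 100) / 1e6
    print(f"{tech:8s}: {gwh_private:4.1f} GWh ignoring external costs, "
          f"{gwh_social:4.1f} GWh including them")
```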
Wind power is an important technology in the transition towards a low carbon economy.Wind power developments with a cumulative capacity of 63.1 MW which have been installed in 2017 in the Aachen region, generating 3901 GWh electricity from 2017 to 2037 lead to a regional value added of €50.8 million (or €805/kW).From an environmental economic view, the generation of wind power is the most beneficial electricity generation technology in comparison to PV and lignite.
gives the total damage costs associated with P from MWL, which would be approximately $57 million yr−1 in the UK. Clearly, this estimate assumes that all MWL-P remains within the environment and contributes to environmental damage. Significant uncertainty surrounds these assumptions, emphasising the need to better constrain the ultimate fate of MWL-P if more accurate assessments of the damage costs associated with this source of P are to be made. The consequence of incorporating MWL-P as an externality would be to lower the SELL and thereby to reduce P loads delivered to the environment from MWL, assuming that SELL targets were met. However, a proportion of any additional capital or operating costs associated with meeting a lower SELL target would be borne by water customers, which would require approval from the economic regulator in England and Wales and may well meet resistance from water customers. Finally, the SELL framework could be broadened to encompass a sustainable environmental/economic level of P release, thereby recognising MWL-P as a source of P that must be quantified and managed as part of landscape-scale controls on P delivery to the environment. The basis for such a framework already exists in the form of the Total Maximum Daily Load (TMDL) approach developed in the USA to deliver the requirements of the Clean Water Act. In the UK, initial trials of catchment-wide P permits, led by the Environment Agency in collaboration with the water industry, although currently focussed solely on STWs, provide a similar opportunity to incorporate MWL-P within landscape-scale controls on P export to the environment. Within either a TMDL or a catchment-wide P permit, MWL-P could be quantified and subsequently allocated a proportion of the TMDL, or a proportion of the catchment P permit where a catchment permitting framework was extended beyond STWs. Where a TMDL or catchment P permit was exceeded following incorporation of current levels of MWL-P, a number of options would be available to water companies. Firstly, reductions in MWL, and thereby in MWL-P, could be specifically proposed by the water company in order to meet the TMDL or catchment-wide P permit. Secondly, a water company may choose to offset MWL-P by delivering an equal reduction in P load from other sources that fall within its remit, particularly through enhancing P removal at STWs. Finally, the potential to trade the reduction in P due to MWL-P required in order to meet a TMDL or a catchment P permit could be considered, for example by water companies contributing financially towards reductions in P export from agricultural land that matched this requirement. However, incorporating MWL-P within either the TMDL or catchment P permit framework would require accurate estimates of MWL-P loads derived from mains distribution networks. Accurate quantification of the ultimate fate of MWL-P would also be required, to constrain the proportion of MWL-P that is delivered to receiving waters, as opposed to being returned to sewer or entering long-term storage within a catchment. Effective strategies to reduce phosphorus enrichment of aquatic ecosystems require accurate quantification of the absolute and relative importance of individual sources of P.
Assuming that mains water supplies will continue to be dosed with PO4, MWL-P loads must be quantified more widely and the ultimate fate of MWL-P within the environment better understood.Addressing these challenges would underpin more accurate P source apportionment models, enabling policy and investment to be effectively targeted in order to protect and restore aquatic ecosystems facing the risk of eutrophication.Perhaps more fundamentally, this information will provide insight into the way in which finite P resources are used to maintain drinking water supplies, supporting optimisation of this demand for P in the future.
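As an illustration of the kind of estimate required, the sketch below combines an assumed national leakage volume, an assumed orthophosphate dosing concentration and an assumed damage cost per kilogram of P into an order-of-magnitude MWL-P load and damage cost. All three inputs are placeholders for demonstration, not the values used in this study.

```python
# Order-of-magnitude sketch of a mains-water-leakage phosphorus (MWL-P) load
# and its associated damage cost. Every input value is an assumption for
# illustration; substitute the study's own leakage volumes, dosing
# concentrations and damage costs where available.

leakage_Ml_per_day = 3000.0       # national mains leakage [Ml/day] (assumed)
p_dose_mg_per_l = 1.0             # orthophosphate dosing as P [mg P/l] (assumed)
damage_cost_per_kg_p = 40.0       # environmental damage cost [USD/kg P] (assumed)

# 1 Ml = 1e6 l, so kg P/day = (Ml/day * 1e6 l/Ml * mg/l) / 1e6 mg/kg
p_load_kg_per_day = leakage_Ml_per_day * 1e6 * p_dose_mg_per_l / 1e6
p_load_t_per_year = p_load_kg_per_day * 365 / 1000

damage_cost_per_year = p_load_t_per_year * 1000 * damage_cost_per_kg_p

print(f"MWL-P load: ~{p_load_t_per_year:.0f} t P/yr, "
      f"damage cost: ~${damage_cost_per_year/1e6:.0f} million/yr")
```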
Effective strategies to reduce phosphorus (P)-enrichment of aquatic ecosystems require accurate quantification of the absolute and relative importance of individual sources of P. In this paper, we quantify the potential significance of a source of P that has been neglected to date.However, mains water leakage (MWL) potentially leads to a direct input of P into the environment, bypassing wastewater treatment.Our analyses suggest that MWL-P could be equivalent to up to c.24% of the P load entering the River Thames from sewage treatment works and up to c.16% of the riverine P load derived from agricultural non-point sources.We consider a range of policy responses that could reduce MWL-P loads to the environment, including incorporating the environmental damage costs associated with P in setting targets for MWL reduction, alongside inclusion of MWL-P within catchment-wide P permits.
to both L-M and Luminance contrast and most of these are sensitive to spatial structure.Although our behavioral data suggest reduced spatial frequency sensitivity in the neurons tuned to L-M contrast at threshold, once our stimulus contrast was increased to generate a reliable BOLD response, it is possible that we stimulated a population of color-luminance neurons that may have responded to both L-M and achromatic contrast in a similar manner.Likewise, it has been shown that some neurons within V1 also represent combinations of the S-cone pathway with both L-M and luminance pathways, and therefore responses from these cells may also be similar across all conditions.To summarize, we used the same stimulus parameters to measure spatial frequency sensitivity and pRF sizes of neurons driven by the luminance, L-M and S-cone isolating pathways.Effects of chromatic condition were observed for the spatial sensitivity manipulation, with S-cone isolating stimuli producing significantly lower spatial sensitivity indices than either the luminance or L-M conditions in the peripheral areas of V1.No effects of chromaticity were observed in the pRF data.We conclude that the invariance observed in pRF measurements was a result of an actual invariance in population-average receptive field sizes between these pathways.We suggest that this may be due to the prevalence of color-luminance cells as well as the presence of complex, pattern-sensitive cells in the visual cortex, which do not demonstrate a linear relationship between receptive field size and spatial sensitivity.
The spatial sensitivity of the human visual system depends on stimulus color: achromatic gratings can be resolved at relatively high spatial frequencies while sensitivity to isoluminant color contrast tends to be more low-pass.Models of early spatial vision often assume that the receptive field size of pattern-sensitive neurons is correlated with their spatial frequency sensitivity - larger receptive fields are typically associated with lower optimal spatial frequency.A strong prediction of this model is that neurons coding isoluminant chromatic patterns should have, on average, a larger receptive field size than neurons sensitive to achromatic patterns.Here, we test this assumption using functional magnetic resonance imaging (fMRI).We show that while spatial frequency sensitivity depends on chromaticity in the manner predicted by behavioral measurements, population receptive field (pRF) size measurements show no such dependency.At any given eccentricity, the mean pRF size for neuronal populations driven by luminance, opponent red/green and S-cone isolating contrast, are identical.Changes in pRF size (for example, an increase with eccentricity and visual area hierarchy) are also identical across the three chromatic conditions.These results suggest that fMRI measurements of receptive field size and spatial resolution can be decoupled under some circumstances - potentially reflecting a fundamental dissociation between these parameters at the level of neuronal populations.
a higher rate than build up due to lack of estrogen.Thus, elderly females usually over the age of 65 suffer from this disease.The reported prevalence was as high as 50% in the literature .There is limited information on higher BMI affecting insufficiency fractures with only one article discussing how a BMI over 25 was discovered in 40.4% of their patients .Yet, we feel it is an important modifiable comorbidity due to the fact that an increase in weight puts excess stress on an already defective weight bearing bone.With an aging population and an increased prevalence of osteoporosis, we also see increased usage of bisphosphonate therapy.The relationship between bisphosphonate therapy and a risk for stress fractures in the elderly are well documented in the literature.It has been shown that the increased fracture risk associated with the use of these antiresorptive meds reach upwards of 30% after 5 years of use .Our patient only recently started this therapy, shortly after the right FNSF.Thus, it is highly unlikely it played a role in the development of her FNSFs.Table 1 highlights the occurrence of risk factors from 2 case series of a total of 143 elderly patients that suffered stress fractures .Addressing issues on this table is important when prevention is considered as well as in the setting of early diagnosis.Treatment of femoral neck stress fractures is still a diagnostic challenge for surgeons.We have formulated a treatment algorithm geared toward femoral neck stress fractures in elderly patients.Additionally, it is helpful to utilize a proposed classification system for grading stress fractures with the purpose of guiding treatment as Boden et al did.16 Grade 1–2 FNSFs are typically unicortical with <50% of bone involved.Grade 3 FNSFs involve >50% of the bone and grade 4 FNSFs are bicortical.As described earlier, the first and most challenging step in treatment is identification.Most cases can be diagnosed with a thorough history, physical examination, and radiographs .In the elderly population it is important to keep a high index of suspicion due to a high-risk for progression to complete fracture.Frequently, the end goal of treatment is surgical, especially in any tension type FNSFs.The exception is the unicortical compression stress fracture that is typically a grade 1–2 FNSF where a trial of non-operative treatment may be considered.Boden et al recommend non-weight bearing in low-demand athlete with non-displaced fractures .However, in the elderly maintain a non-weight bearing status is unreasonable and difficult; therefore, we propose a more aggressive surgical recommendation for this population.Bilateral femoral neck stress fractures in the elderly are uncommon.Research is limited on the topic with vitamin D deficiency as the major determinant.This case report aims to shift attention to the importance of early recognition and more aggressive treatment, which will prevent the progression to a displaced fracture as well as initial fracture development.We suggest clinicians to go over Table 1 and keep a high-index of suspicion as signs and symptoms of stress fractures are observed.Having a high-index of suspicion will lead to prevention, non-operative treatment, or minimally invasive surgical options.No conflict of interest.No approval needed since our ethical approval was exempted by our institution since written consent was obtained.Written informed consent was obtained from the patient for publication of this case report and accompanying images.A copy of the written consent is 
available for review by the Editor-in-Chief of this journal on request.Kruten Patel – Researched and wrote report.Brian Handal – Researched and wrote the report.William Payne – Reviewed and helped make revisions in the paper.Case report not a clinical study.Kruten Patel, Brian Handal, and William K Payne.Not commissioned, externally peer reviewed.
Introduction: Early intervention in femoral neck stress fractures (FNSFs) can be self- limiting, but they have an insidious presentation.High index of suspicion for an occult fracture is necessary to avoid bilateral progression and/or operative interventions.Initially the patient presented with groin pain and radiographs demonstrated a non-displaced compression type fracture of the right femoral neck without any inciting events.Discussion: Vitamin D deficiency, poor nutrition, and osteoporosis have been associated with developing stress fractures.This presents an interesting question of whether these frequently referenced risk factors play an interrelated role.Treatment algorithms are controversial, but have been successful in preventing the progression of occult stress fractures.Yet, identification of FNSFs represents a major challenge in diagnosis for clinicians.Conclusion: This case report documents an uncommon fracture pattern in the elderly population.With an aging population, it is pertinent to avoid missed opportunities for prompt diagnosis and implementation of noninvasive methods of treatment.Therefore, paying attention to the risk factors with a high index of suspicion would be ideal.
in which a system is developed in a stepwise manner.The vehicle behaviour is modelled as a B machine and the communication models as a CSP controller.The aim in is to represent a correct model of a real physical vehicle for platooning while our approach aims to verifying the cooperation between vehicles in the automotive platoon and abstract the behaviour of a real physical vehicle.A compositional verification approach for vehicle platooning is introduced in where feedback controllers and agent decision-making are mixed.It should be noted that to the best of our knowledge, there is no directly comparable work in literature where the actual agent code used in the implementation has been verified.There is clearly much future work to tackle.An obvious one is to continue efforts to improve the efficiency of AJPF.Maintaining a safe platoon in case of recoverable latency and dissolving a platoon in the case of unrecoverable latency are two procedures that are not yet implemented in our verified agent code.Adding these two procedures to the agent grows the system space to the extent that AJPF fails to verify any property.Thus, we are investigating an agent abstraction at the level of goals, beliefs and intentions in order to use AJPF for verification of more complex agents.Since we are concerned with certification of automotive platooning in practice, we are aiming to extract a more comprehensive list of formal properties from official platoon requirement documents.Related to this, we are also in the process of porting the agent architecture on to a real vehicle and so aim to continue and extend testing of the platooning algorithms in both physical, and simulation, contexts.Finally, an important aspect of our two pronged strategy is to link the models used in Uppaal to the programs that AJPF uses.We provided an algorithm to achieve this but, in this paper we refined the Uppaal models by hand.Clearly, a next step is to fully automate this process.
The coordination of multiple autonomous vehicles into convoys or platoons is expected on our highways in the near future.However, before such platoons can be deployed, the behaviours of the vehicles in these platoons must be certified.This is non-trivial and goes beyond current certification requirements, for human-controlled vehicles, in that these vehicles can act autonomously.In this paper, we show how formal verification can contribute to the analysis of these new, and increasingly autonomous, systems.An appropriate overall representation for vehicle platooning is as a multi-agent system in which each agent captures the “autonomous decisions” carried out by each vehicle.In order to ensure that these autonomous decision-making agents in vehicle platoons never violate safety requirements, we use formal verification.However, as the formal verification technique used to verify the individual agent's code does not scale to the full system, and as the global system verification technique does not capture the essential verification of autonomous behaviour, we use a combination of the two approaches.This mixed strategy allows us to verify safety requirements not only of a model of the system, but of the actual agent code used to program the autonomous vehicles.
could be robustly assessed.The high natural inter-annual variation found for each metric presents evidence for the difficulty in using pilot site data to inform the design and effort required for robust impact assessment surveys.If the power analysis was to rely on data from only one year of our study there would be a risk of either over- or under-estimating the number of samples needed to obtain acceptable power to detect a chosen effect size.The consequence could either be a lack of power in the subsequent impact analyses or collecting samples that are potentially unnecessary.The latter has the benefits of providing redundancy in the sampling effort, as has been previously recommended, but requires additional time, effort and cost, which would need to be factored into the survey program.The difference in assemblage composition, species richness and variance found between study zones might be expected given the dominant habitat types found at the Wave Hub and Cable Route study zones, and the significant difference in composition and richness found among them.The site specific nature of ecosystem components, even over a relatively small spatial area, is effectively demonstrated using these data.The characterisations these data offer also illustrate how useful consistent data collection methods can be in allowing comparison between locations.Baseline characterisation and impact studies of benthic habitats and species are required at MREIs located in coastal locations with various physical and ecological characteristics.While the energy convertor design may differ between locations the ability to understand more general or cumulative effects caused by developments would be enhanced by multi-site data, collected using the same survey techniques.Adopting standard practices and guidelines for pre- and post-development benthic survey methods, design and analysis would help optimise costs associated with EIAs and, if adopted across multi-sites and multi-years, could ultimately lead to impact models with predictive power for species, communities or ecosystems.Given there were no prior ecological data for the zones monitored during this study, the survey design performed well in providing adequate samples to detect change in species richness but not so well for relative abundance.Caution is required when interpreting the abundance results, but overall it indicates the survey would be able to detect differences between project and reference locations that could be considered large.Whether changes of this size are biologically or ecologically significant is unknown and would be highly dependent on the resilience of the ecosystem.For example, if a reduction in species richness included the loss of a functional group within the local ecosystem, then it is likely this could lead to a significant ecological impact.As part of this study the BRUV system demonstrated its value as a tool for collecting assemblage composition, species richness and relative abundance data for epi-benthic mobile species in highly dynamic conditions, and a good candidate for use as part of marine EIAs across latitudes.The system offers a cost effective and flexible method that can provide the spatial and temporal coverage that is difficult to obtain using other methods.When used with stereo cameras, BRUV can also offer size data that could help elucidate more detailed age related effects cause by introduced or altered habitat, or converted into biomass estimates providing another metric to assess impact.Although other 
traditional methods of sampling these communities, e.g. trawling, potting, bottom lines or nets, can provide these metrics and work in similar or worse sea conditions, they can be destructive or taxa specific, increasing the cost of survey effort and/or to the ecosystem of study.With further development, BRUV systems also have the potential to help address data collection gaps surveys often suffer from, e.g. diurnal variation.With the integration of movement sensors and/or artificial intelligent algorithms to activate equipment on species presence, 24 h deployments could be possible.BRUV biodiversity data has also been found to be complimentary to environmental DNA, suggesting a combined approach using these non-invasive methods could further enhance the effectiveness of monitoring surveys.
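The survey-design point about statistical power can be made concrete with a standard a-priori power calculation. The sketch below uses statsmodels' TTestIndPower to estimate how many BRUV deployments per zone would be needed to detect a given change in mean species richness; the effect size and pilot standard deviations are hypothetical, and the final loop simply illustrates how strongly the answer depends on which year's pilot variability is used.

```python
# Minimal example of the a-priori power analysis discussed above: how many BRUV
# deployments per zone are needed to detect a given change in species richness
# with 80% power? The effect size and pilot variability are hypothetical.

from statsmodels.stats.power import TTestIndPower

pilot_sd = 2.5            # between-deployment SD of species richness (assumed)
detectable_change = 2.0   # change in mean richness to be detected (assumed)
effect_size = detectable_change / pilot_sd   # Cohen's d

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=0.05, power=0.8,
                                   alternative='two-sided')
print(f"~{n_per_group:.0f} deployments per zone per survey")

# High inter-annual variation means each year's pilot SD implies a very
# different sample size - the point made in the text.
for sd in (1.5, 2.5, 3.5):
    n = analysis.solve_power(effect_size=detectable_change / sd,
                             alpha=0.05, power=0.8, alternative='two-sided')
    print(f"pilot SD {sd:.1f} -> ~{n:.0f} deployments per zone")
```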
Detecting the effects of introduced artificial structures on the marine environment relies upon research and monitoring programs that can provide baseline data and the necessary statistical power to detect biological and/or ecological change over relevant spatial and temporal scales.Here we report on, and assess the use of, Baited Remote Underwater Video (BRUV) systems as a technique to monitor diversity, abundance and assemblage composition data to evaluate the effects of marine renewable energy infrastructure on mobile epi-benthic species.The results from our five-year study at a wave energy development facility demonstrate how annual natural variation (time) and survey design (spatial scale and power) are important factors in the ability to robustly detect change in common ecological metrics of benthic and bentho-pelagic ecosystems of the northeast Atlantic.BRUV systems demonstrate their capacity for use in temperate, high energy marine environments, but also how weather, logistical and technical issues require increased sampling effort to ensure statistical power to detect relevant change is achieved.These factors require consideration within environmental impact assessments if such survey methods are to identify and contribute towards the management of potential positive or negative effects on benthic systems.
first checking whether the absorption data for the nearest matching coordinate is negligible, i.e. less than 1. If this is the case, a search method begins whereby the search space grows by one mesh cell in all directions until the nearest cell containing absorption data is found. Once the absorption data has been collated into a list corresponding to the structure coordinates, it is written onto the TDR file. Since the coordinate list is taken from the structure post-device simulation, overwriting the carrier generation for each location is a simple process. This TDR file then describes an illuminated structure and the IV simulation can be undertaken. The key requirement is that the number and order of coordinates in the TDR and TXT files must match. In this work an efficient tool flow is described in order to integrate Lumerical, an efficient FDTD optics solver, with TCAD Sentaurus, an industry standard for electrical simulation. When combined, a complete package exists for broadband optoelectronic modelling. This allows the benefits of material processing chemistry to form structures, followed by accurate and computationally efficient FDTD modelling, and finally electrical characterisation using a comprehensive database of device physics models. In order to integrate the two packages, the ability of Sentaurus to read previously solved optical generation data is exploited. An electrical solve is undertaken with no illumination, which outputs a carrier generation file containing zero data. This file is used for two purposes: first, to form the structure in Lumerical, and second, to transfer the output data back into Sentaurus. The interface between the two packages is achieved with a combination of tool command language (Tcl) and Matlab script, both of which can be used in open-source platforms.
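The expanding-neighbourhood search described above can be sketched in a few lines. In the example below, the 3-D array of optical generation data and the cell index are illustrative stand-ins for the Lumerical export and the Sentaurus coordinate list; the window grows by one mesh cell in every direction until a cell with non-negligible absorption (here, at least 1) is found.

```python
import numpy as np

# Sketch of the expanding-neighbourhood search: if the generation value at the
# nearest matching mesh cell is negligible (< threshold), grow the search
# window by one cell in every direction until usable absorption data is found.
# The array `generation` and the index are illustrative stand-ins.

def nearest_generation(generation, idx, threshold=1.0):
    """Return a non-negligible generation value near cell index idx = (i, j, k)."""
    value = generation[idx]
    if value >= threshold:
        return value
    nx, ny, nz = generation.shape
    i, j, k = idx
    for r in range(1, max(nx, ny, nz)):          # grow the window one cell at a time
        window = generation[max(i - r, 0):i + r + 1,
                            max(j - r, 0):j + r + 1,
                            max(k - r, 0):k + r + 1]
        candidates = window[window >= threshold]
        if candidates.size:
            return candidates.max()              # or nearest-by-distance, if preferred
    return 0.0                                   # no absorption anywhere in the mesh

# toy example: a mesh that only absorbs in its centre
gen = np.zeros((21, 21, 21))
gen[10, 10, 10] = 5e20
print(nearest_generation(gen, (2, 3, 4)))        # -> 5e+20
```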
Performance predictions and optimisation strategies in current nanotechnology-based photovoltaic (PV) require simulation tools that can efficiently and accurately compute optical and electrical performance parameters of intricate 3D geometrical structures.Due to the complexity of each type of simulation it is often the case that a single package excels in either optical or electrical modelling, and the other remains a bottleneck.In this work, an efficient tool flow is described in order to combine the highly effective optical simulator Lumerical with the excellent fabrication and electrical simulation capability of Sentaurus.Interfacing between the two packages is achieved through tool command language and Matlab, offering a fast and accurate electro-optical characteristics of nano-structured PV devices.
extract in nanomicelles would improve the poor intestinal absorption of aspalathin.In addition, external preference mapping was used to establish the identity of the sensory attributes driving preference of rooibos iced tea.“Rooibos-woody” flavour seems to be the main sensory attribute driving consumer acceptance of rooibos iced tea.Furthermore, the absence of “plant-like” flavour also appears to be a deciding factor as this attribute exhibited a strong negative correlation with consumer preference.The results of this study suggest that the addition of lemon flavour can be used to improve the acceptability of green rooibos iced tea, as it has a masking effect on “plant-like” flavour.Consumer preference for rooibos iced tea with a “rooibos-woody” flavour associated with the fermented rooibos tea extract may be linked to familiarity with the product.Currently, all RTD rooibos iced teas available on the South African market are produced with this type of extract.Furthermore, fermented rooibos is the major herbal tea on the local market.The difference in colour between fermented and green rooibos extracts may also have played a role in the acceptability of these iced teas.Although the colour of the rooibos iced teas was not analysed by the trained panel, the green rooibos extract was a light yellow-orange, whereas the traditional, fermented extract was the characteristic red-brown colour associated with rooibos infusions.Commercial RTD rooibos iced teas are generally light red-brown, thus consumer preference for rooibos iced tea with a red-brown colour is likely as it may seem more familiar.The uncharacteristic colour of green rooibos iced tea may thus have influenced their preference.The use of atypical colours has been shown to induce incorrect flavour responses in solutions, and may affect their overall acceptability, acceptability of the flavour and perceived flavour intensity.The RTD rooibos iced tea with the highest total flavonoid and aspalathin content, namely G, was disliked by consumers.The high aspalathin and total flavonoid contents of the beverage lend it a better status as a functional beverage in terms of the antioxidant activity) and anti-diabetic effects) of aspalathin and green rooibos.It is a common dilemma of the food industry that an increase in the functionality of a product is paired with a compromise in its sensory properties, resulting in a decrease in consumer acceptance.A number of studies have shown that the primary condition for consumer acceptance of functional foods is the overall taste/sensory experience of the product and not its functional benefits.The question is: “Up to which point are consumers willing to accept functional foods with a worse sensory profile than the conventional product?,.Therefore, future developments for the South African iced tea market should involve the production of rooibos iced teas with a sensory profile closely resembling that of fermented rooibos beverage.We illustrated in this study that there is potential for a rooibos iced tea product containing both fermented and green rooibos solubilisate with its improved phenolic content, since consumer acceptance was as high as that of the fermented rooibos iced tea.Additional product development in terms of colour and flavour of such iced teas may also prove to be beneficial.This study does not, however, take the preferences of other markets into consideration.The Japanese have enjoyed green tea for centuries and may even exhibit a preference for green rooibos iced tea above the fermented 
counterpart.In future, it is possible that the preference of the South African market may change, due to the recent increased popularity of green tea.This would open up the market for green rooibos iced teas since consumers would then be more familiar with unique “plant-like” flavour profiles related to green tea beverages.The study showed that enhancement of the aspalathin content of rooibos RTD beverage is possible without a major negative impact on consumer preference.This can be achieved by combining aspalathin-enriched green rooibos solubilisate and fermented rooibos extract.Future work should focus on maximising the quantities of solubilisate or green rooibos extract that can be combined with fermented extract in an iced tea formulation.This will allow for the production of a beverage with an improved phenolic content and functional status, as well as the desired flavour profile.The role of colour in consumer preference of such a product could also be delineated in a future study, using a method such as conjoint analysis.
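External preference mapping, referred to above, relates consumer liking scores to the principal components of the trained-panel sensory profile. Below is a minimal sketch of the vector-model variant; the sample names, attribute means and hedonic scores are entirely hypothetical and serve only to show the mechanics of the method.

```python
# Minimal sketch of external preference mapping: consumer liking scores are
# regressed onto the principal components of trained-panel sensory data.
# All sample names, attribute values and liking scores are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

samples = ["F", "F+S", "S", "G"]                     # hypothetical formulations
attributes = ["rooibos-woody", "plant-like", "hay-like", "astringent"]
sensory = np.array([                                  # panel means per sample
    [7.2, 1.1, 1.5, 5.0],
    [5.8, 1.8, 2.9, 4.1],
    [3.9, 2.6, 4.8, 3.2],
    [2.1, 6.3, 2.0, 3.0],
])
liking = np.array([                                   # 9-point hedonic scores,
    [7, 8, 6, 7],                                     # rows: samples, cols: consumers
    [7, 7, 6, 6],
    [5, 6, 4, 5],
    [3, 4, 3, 2],
])

# Step 1: PCA of the sensory space (two dimensions are usually retained)
pca = PCA(n_components=2)
scores = pca.fit_transform(sensory)                   # sample coordinates in PC space

# Step 2: regress each consumer's liking onto the PC scores (vector model)
for c in range(liking.shape[1]):
    model = LinearRegression().fit(scores, liking[:, c])
    print(f"consumer {c + 1}: R^2 = {model.score(scores, liking[:, c]):.2f}, "
          f"direction = {model.coef_.round(2)}")

# Attributes loading on PCs aligned with most consumers' preference vectors
# (here, 'rooibos-woody') are interpreted as drivers of liking.
```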
Traditional rooibos iced teas prepared using fermented rooibos extract contain a low concentration of the unique antioxidant, aspalathin, which also has anti-diabetic properties.In order to increase the aspalathin content an array of rooibos iced tea formulations containing aspalathin-enriched green rooibos extract or the same extract solubilised in nanomicelles in the presence of ascorbic acid (solubilisate), as well as a mixture of the solubilisate and fermented extract, were formulated with or without lemon flavour.The iced tea formulations containing green rooibos extract exhibited a “plant-like” character, which was reduced with the addition of a commercial lemon flavour.The formulations containing the solubilisate had a prominent “hay-like” flavour, which was reduced by the addition of both lemon flavour and fermented rooibos extract.The formulations containing fermented rooibos extract exhibited a prominent “rooibos-woody” flavour characteristic of fermented rooibos tea.The perceived intensity of this characteristic attribute decreased with addition of commercial lemon flavour and solubilisate.All the iced teas exhibited a measure of astringency, but the astringency of iced teas containing fermented extract was the greatest and was reduced by addition of lemon flavour.In order to gauge consumer acceptance, the four lemon flavoured variants of rooibos iced tea were evaluated by a consumer panel.Consumers preferred the iced teas with fermented rooibos extract and disliked those with green rooibos extract.Preference mapping showed that the presence of the “rooibos-woody” flavour and the absence of a “plant-like” flavour most likely drive the preference of rooibos iced teas amongst South African consumers.Despite the functional benefits associated with a higher aspalathin content of green rooibos iced tea compared to the traditional fermented rooibos iced tea, consumers disliked the product, because of its overt “plant-like” note.However, a green rooibos extract solubilised in nanomicelles in combination with the fermented extract can be used to produce an iced tea with enhanced aspalathin content that is still acceptable to consumers from a taste point of view.
no sedimentation property are discarded.Granule samples from a UASB reactor, used to treat a poultry slaughterhouse waste, previously described by Del Nery et al. , were utilised to determine the sample volume indicated for measuring the granule diameter.This step consisted of adopting the methodology proposed by Del Nery et al. , who only used 10 mL of the sample for the granulometry assay.The appropriate volume was evaluated using 5, 10 and 15 mL of the whole granule sample to identify statistically which of the volumes presented the most significant response.The assay was done in triplicate.The samples went through the washing step described in the section “Preparing the granule samples for granulometric determination”.The size of the granules was identified in two steps: the images of the Petri dishes containing the granules were captured first; then the dimensions were identified by image analysis using the Image Pro-Plus software, as described in the section on “Granulometric determination assay”.The assay performed with a volume of 10 mL of sample had a lower average of the standard deviation, variance and coefficient of variation, considering all the established diameter ranges compared to the other tests with a volume of 5 and 15 mL.However, the volume of 5 mL can also be used if the amount of sludge available is small, as is the case of bench scale reactors.The granulometric determination assay takes place in two steps: capturing and image analysis.In the first, the granules that sedimented in the washing step are placed in Petri dishes, paying attention to keeping them separate.It is of the utmost importance that the granules are separated carefully from one another because the image analyser program interprets the dark objects.Therefore, if two granules are together, the program recognises them as just one object.In addition, the granules must be handled with care to avoid damage that modifies their structure.Thus, in order to separate them, a tool that does not damage the sample should be used.The one used in this study was a plastic Pasteur pipette.The Petri dishes are positioned on a conventional scanner and then the image is obtained and saved.Each image can contain up to 6 Petri dishes, thus reducing the analysis time.In the image analysis step, the initial calibration of the software is carried out, then the analysis of the captured images described in the section “Procedure for measuring the granule size” is initiated.The success of using UASB technology to treat a wide range of wastewater is strongly related to the anaerobic sludge granulation phenomenon.The characteristics of the granule in the sludge blanket are associated with numerous factors related to the nature of the wastewater and the design and operational characteristics of the reactor as pointed out by Grotenhuis et al. and Ghangrekar et al. 
.The cell grouping process, which begins when microorganisms become sensitive to environmental parameters or to the stress condition, comprises an important survival strategy because the cells protect themselves against external aggression.Adequate immobilisation of microorganisms is what will make the difference between a successful high rate treatment system and the others, regardless of the treatment system considered.Biomass immobilisation is a complete and stable metabolic arrangement that enables optimal environmental conditions for all its members, MacLeod, Guiot and Costerton .The granulation phenomenon occurs continuously in UASB reactors and the increase or reduction of the granules size occurs according to the conditions imposed on the operating units.The monitoring of granular sludge size dynamics is a powerful tool in predicting reactor stability which associated with other monitoring parameters can clarify possible causes of UASB process imbalance during different operational periods as indicated by Puñal and Chamy and Lettinga et al. .One of the important characteristics of the granules in the sludge blanket is the high sedimentation velocity, which are beneficial for operating anaerobic reactors because there is no wash-out of viable biomass, Lu et al. .Usually the granules range from 0.5 to 5 mm in diameter, which due to their sedimentation property, resist wash-out of the reactor even at high hydraulic loads Show et al. .However, some researchers pointed out smaller bands of granules: Lu et al. reported initial granular sludge consisting of well-settled black granules with about 80% showing the size from 0.4 to 4.0 mm in diameter, Del Nery et al. observed smaller bands ranging from 0.1 to 3.5 mm and Gagliano et al. reported a band of 0.5 to 3 mm as being the most frequent in these systems.Although there is increasing interest in understanding the granulation process and maintenance of the sludge blanket, there are no reports in the literature in which the methodology of granulometric determination of sludge is reported in detail, whereby a significant number of granules is analysed relatively quickly and simple.
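The capture-and-measurement workflow described above (scanning Petri dishes of separated granules and sizing the dark objects by image analysis) can be illustrated with open-source tooling. The sketch below uses scikit-image rather than the Image Pro-Plus software named in the protocol; the file name, scanner resolution and diameter bins are assumptions for illustration only.

```python
# Illustrative sketch of the capture-and-measure step using scikit-image in
# place of Image Pro-Plus. File name, scanner resolution and size bins are
# assumptions, not values from the protocol.
import numpy as np
from skimage import io, color, filters, measure

MM_PER_PX = 25.4 / 600                               # assumed 600 dpi flatbed scanner

image = io.imread("petri_dishes_scan.jpg")           # hypothetical scanned image
gray = color.rgb2gray(image)

# Granules appear as dark objects on a light background
mask = gray < filters.threshold_otsu(gray)

# Label connected objects; touching granules would be merged into one object,
# hence the need to separate them physically before scanning
labels = measure.label(mask)
props = measure.regionprops(labels)

diameters_mm = np.array([p.equivalent_diameter * MM_PER_PX for p in props])
diameters_mm = diameters_mm[diameters_mm > 0.1]      # discard specks below 0.1 mm

# Bin into diameter ranges commonly reported for UASB granules
bins = [0.1, 0.5, 1.0, 2.0, 3.0, 5.0]
counts, _ = np.histogram(diameters_mm, bins=bins)
for lo, hi, n in zip(bins[:-1], bins[1:], counts):
    print(f"{lo:.1f}-{hi:.1f} mm: {n} granules")
print(f"total granules measured: {diameters_mm.size}")
```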
Anaerobic granule sizes from various types of anaerobic biological wastewater treatments were investigated in order to understand the influence of this characteristic on the performance of the treatment system.Therefore, the aim of this protocol is to standardise the granulometry assay that can measure granule sizes accurately and quickly.In addition, the proposed methodology comprises about 1500–3000 granules in a single sample, a representative number compared to the currently applied methodologies.
Data were made available through a well-structured household questionnaire, with the unit of analysis captured at household and individual level. A questionnaire was administered to each household to elicit information on household members. The survey covers all legally recognised household members in the nine provinces of South Africa. The survey does not cover collective living quarters such as student hostels, old-age homes, hospitals, prisons and military barracks, but focuses specifically on households. The General Household Survey collects data on education, health and social development, housing, access to services and facilities, food security, and agriculture. The data in Table 1 show the socioeconomic and demographic characteristics of the household heads sampled in South Africa. The mean age was found to be 47.8 years and more than half were male. The representation in this data is typical of sub-Saharan African countries. The most common source of income is salaries, wages or commission, while just 1.1% earn income through agricultural sales. The data show that over 80% of the respondents are of the African/Black population group, while the Indian/Asian group has the smallest share. In Fig. 1, using the Foster–Greer–Thorbecke (FGT) index as well as descriptive analysis, the data show that children experienced food insufficiency more than adults in South Africa. The data reveal that over 40 percent of children live in households experiencing food insufficiency. In the same vein, the data in Fig. 2 show the disaggregation of food security status across the nine provinces of South Africa, with special focus on children and adults. The data show that, for both children and adults, Gauteng and KwaZulu-Natal experienced the highest levels of food insufficiency in South Africa: 22.72% and 20.66% of adults, and 17.58% and 25.57% of children, are food insufficient in Gauteng and KwaZulu-Natal provinces, respectively. The dataset also reveals that food insufficiency is lowest for both children and adults in Northern Cape Province. The dataset employed is the General Household Survey, 2016. The dataset was compiled based on a stratified two-stage design, and a total of 21,218 rural and urban households, containing 72,604 respondents, were interviewed. The dataset was coded in SPSS version 22, in which the descriptive analyses (mean, frequency, standard deviation) were carried out. In addition, the inferential statistics were carried out in Stata version 13, using the FGT index to classify the respondents as food secure or otherwise. The dataset is robust and representative enough to generalise about the household food sufficiency status of South Africa.
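As a small illustration of the Foster–Greer–Thorbecke classification mentioned above, the sketch below computes FGT_α = (1/N) Σ_{i: y_i < z} ((z − y_i)/z)^α for a handful of hypothetical households; the expenditure figures and the threshold z are placeholders, not values from the General Household Survey.

```python
# Minimal sketch of the Foster-Greer-Thorbecke (FGT) index used to classify
# households by food sufficiency. Expenditure values and the threshold z are
# placeholders, not figures from the General Household Survey.
import numpy as np

def fgt(y, z, alpha=0):
    """FGT_alpha = (1/N) * sum over poor i of ((z - y_i)/z)^alpha.

    alpha=0 gives the headcount ratio (share below the threshold),
    alpha=1 the poverty gap, alpha=2 the squared poverty gap.
    """
    y = np.asarray(y, dtype=float)
    shortfall = np.clip((z - y) / z, 0, None)        # zero for the non-poor
    return np.mean(np.where(y < z, shortfall ** alpha, 0.0))

# Hypothetical monthly per-capita food expenditure (Rand) for ten households
y = [220, 510, 180, 950, 400, 130, 760, 340, 610, 290]
z = 350                                              # assumed food poverty line

print("headcount (FGT0):", round(fgt(y, z, alpha=0), 3))
print("poverty gap (FGT1):", round(fgt(y, z, alpha=1), 3))
print("severity (FGT2):", round(fgt(y, z, alpha=2), 3))
```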
Food insecurity or insufficiency is triggered, among other factors, by structural inequalities. Food insecurity remains a persistent problem in South Africa. The country has a tradition of evidence-based decision making, rooted in the findings of generalised national household surveys. Conversely, the deeper insights offered by the heterogeneity of sub-national analysis remain a largely unexploited means of understanding the contextual experience of food insecurity or insufficiency in South Africa. The data present the food insufficiency status of households, with special focus on adults and children. The data also reveal adult and child food insufficiency status across the provinces of South Africa. The dataset contains socioeconomic and demographic characteristics as well as the living conditions and food security status of the households.
to mitigate the higher temporal workloads that some people felt in the push condition.One approach would be a user-interface component that indicates when the next update is going to occur, as investigated by Neate et al.The stimulus used in this study was related to the Autumnwatch program and was created by a major international broadcaster as a genuine and varied dual-screen experience.While the participants were people with an interest in natural history, who enjoyed watching nature programmes and were used to use the mobile or tablet while watching TV, we hypothesise that the outcomes are generalisable to broader audiences interested in other factual programs with similar characteristics to the Autumnwatch program, and to the companion content format we used.Nevertheless, further research is required to confirm these recommendations for other genres of TV programs.An important limitation of this study is that we only monitored visual attention.Auditory attention is an important part of the experience, but due to the fact the companion content employed was purely visual in nature, and the methodological challenges in objectively determining the location of auditory attention in this situation, we only tracked visual attention.While the use of video analysis instead of eye-tracking increased the ecological validity of the study, it introduced another limitation because the analysis of the videos to code the locus of participants’ visual attention was only performed by a single person and is therefore potentially error-prone.The single person, laboratory setting, whilst good for monitoring reactions in a controlled environment, is different to a real world environment where there may be more than one person watching at once, as well as numerous distractions.This study explored two methods for presenting companion content to television viewers: in one the viewer receives updates automatically, at a time determined by a production team; in the other, viewers had access to all companion content throughout the show, and made their own decisions about when to view each segment.While there was no clear overall preference for push or pull, there was a polarisation: most users strongly preferred one mechanism or the other.People preferred the pull mode due to the control it provided, and the push mode because it was perceived as a more fluid and engaging experience that required less effort in terms of decision making.In the push condition, it was demonstrated that automatic updates had a strong effect on viewers’ attention, pulling it to the tablet almost immediately and keeping it there for several seconds.These outcomes are an important step in building an empirical understanding of how to produce and present companion content.They indicate that there is an appetite for the high-quality companion content, curated by the production team, and that it has a place alongside self-directed browsing.
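Since cognitive load in this study was measured with the NASA TLX (of which "temporal demand", echoed in the temporal workload finding above, is one subscale), the sketch below illustrates how the standard weighted TLX score is assembled from six subscale ratings and fifteen pairwise comparisons. The ratings and the comparison rule are hypothetical, not data from the experiment.

```python
# Sketch of the standard weighted NASA-TLX workload score: six subscale
# ratings (0-100) are combined using weights from 15 pairwise comparisons.
# The ratings and comparison outcomes below are hypothetical.
from collections import Counter
from itertools import combinations

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

# Hypothetical 0-100 ratings for one participant in the push condition
ratings = {"mental": 55, "physical": 10, "temporal": 70,
           "performance": 40, "effort": 50, "frustration": 35}

# Each of the 15 pairwise "which contributed more to workload?" choices adds 1
# to the chosen subscale's weight, so the weights always sum to 15
def choose_winner(a, b):
    # placeholder choice rule: pick the subscale the participant rated higher
    return a if ratings[a] >= ratings[b] else b

weights = Counter(choose_winner(a, b) for a, b in combinations(SUBSCALES, 2))

overall = sum(ratings[s] * weights.get(s, 0) for s in SUBSCALES) / 15
print("subscale weights:", dict(weights))
print("overall weighted TLX:", round(overall, 1))
```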
The use of mobile devices during television viewing is now commonplace, and broadcasters are increasingly supplying programme-related ‘companion content’.To produce an optimal user experience, it is important to determine how the delivery of companion content affects and is perceived by the viewer — without this we risk distracting the viewer, leading to frustration and disengagement.We present a controlled study investigating how attention, cognitive load (as measured by the NASA TLX), and users’ preferences are affected by the provision of two different content delivery modes: pushed and pulled.We find that delivery mode affected the temporal distribution of gaze to the tablet, with a consistent viewing pattern for pushed updates, which attracted attention within a few seconds, and a more diverse set of viewing patterns when updates were pulled.Cognitive load was similar in both conditions, and there was no consensus as to which mode was preferred, but users showed strong, polarised individual preferences.The advantages of each delivery mode are presented as a set of recommendations for the delivery of companion content.
Akt-signaling pathway by targeting UBE2T.In addition, an Akt inhibitor, LY294002, was employed to block the Akt-signaling pathway, with DMSO serving as a control.The sphere and colony formation abilities of LCSCs treated with miR-1305 mimic and LY294002 exhibited no significant differences compared to LCSCs treated with miR-1305 mimic and DMSO.The sphere and colony formations of LCSCs treated with LY294002 in the presence of UBE2T were noted to significantly decrease, with an irregular shape and weak refractive index compared to those treated with DMSO in the presence of UBE2T.Similarly, the results of the CCK-8 assay showed that the proliferative ability of LCSCs treated with miR-1305 mimic and LY294002 was not significantly different compared to LCSCs treated with miR-1305 mimic and DMSO, but that of UBE2T-treated LCSCs was found to significantly decrease by LY294002 treatment.The nude mice injected with the cells that overexpressed miR-1305 or UBE2T were treated with LY294002 or DMSO, so as to explore the role of the Akt-signaling pathway in the regulation of miR-1305 and UBE2T in tumor formation and growth.As expected, there were no significant changes in tumor volume and weight in mice injected with miR-1305 mimic-transfected cells when treated with LY294002 compared to those treated with DMSO; but, the tumor volume and weight significantly decreased by LY294002 treatment in the mice injected with UBE2T-transfected cells.The degree of necrosis and infiltration of tumor in the mice injected with cells that overexpressed miR-1305 and LY294002 was not significantly different compared to those injected with cells that overexpressed miR-1305 and DMSO.The treatment of LY294002 resulted in lighter nuclear staining and reduced nuclear division of tumor cells, with extensive necrosis in the tumor tissues and proliferated fibrous tissues around the necrotic area to different degrees in the mice injected with UBE2T-transfected cells.These findings revealed that the miR-1305/UBE2T/Akt axis was involved in regulating LCSC self-renewal and tumorigenic potential as well as tumor growth in vivo.Liver cancer is the fifth most commonly diagnosed tumor and the second most frequent cause of cancer-related deaths in men.Among primary liver cancers, HCC represents the major histological subtype, accounting for 70%–85% of cases of primary liver cancer.20,Similar to normal stem cells, CSCs are assumed to possess the abilities of self-renewal and production of differentiated cells.21,Even though CSC theory has gained much attention over the last few years, and suppression of CSCs is generally believed to be highly effective in impeding tumor progression, the regulatory mechanisms of CSCs remain unclear.22,The current study identified that miR-1305 was a novel miRNA that could specifically suppress the LCSCs’ stemness and HCC tumorigenesis by directly inhibiting UBE2T.Furthermore, we also demonstrated that the restoration of miR-1305 inhibited overall HCC tumorigenesis through the Akt-signaling pathway by binding to UBE2T.The result indicated that miR-1305 plays an important role in human HCC pathology and treatment and could potentially open new and viable modes of treatment in the future.In comparison with previous studies on miRNAs and genes in LCSCs,23,24 the current study illustrated different mechanisms.For instance, Liu et al.23 found that MHCC-97H cells in the absence of miR-155 exhibit a decreased number of CD90+CD133+ cells, decreased Oct4 expression, and a weakened sphere formation 
ability.Meanwhile, overexpression of miR-150 profoundly repressed cell proliferation and led to partial dispersion of spheres in CD133+ HCC cells.24,Herein, we observed a new mechanism by which miR-1305 exerts its inhibitory function on the stemness of LCSCs, by directly suppressing the expression of UBE2T at a post-transcriptional level.In addition, a previous study showed that UBE2T was a target gene of miR-543 and also could accelerate HCC growth by the mediation of p53, but data for the interaction between the p53 and UBE2T proteins were not shown in this study.25,The focus of the above study was changes of apoptosis-related factor p53 in HCC, while we focused on the changes of activation-related signal Akt in HCC cells.However, both studies revealed that UBE2T was as an oncogene in HCC.Furthermore, the current study not only added miR-1305 to the current list of specific miRNAs that suppress the stemness of LCSCs but also indicated toward the existence of a direct correlation between miR-1305 and UBE2T.By suppression of UBE2T and its downstream Akt-signaling pathway, miR-1305 impaired LCSCs’ stemness and overall HCC tumor growth.One central finding in the current study was that UBE2T presented with a higher expression in HCC cell lines and LCSCs than in the normal hepatic epithelial cell line.A previous study emphasized the important roles of the ubiquitin-proteasome pathway, which is a complex protein degradation system and plays in a wide range of biological processes, such as cell cycle control, signaling transduction, and tumorigenesis.26,Recently, enhanced expression of UBE2T has been noted in several tumors, such as breast and prostate cancers, and it serves as an attractive therapeutic target.9,10,In addition, the results provided by Hu et al.17 verified that UBE2T elevated the colony formation and proliferation of nasopharyngeal carcinoma cells both in vitro and in vivo, which is in line with our findings that the silencing of UBE2T could diminish the LCSCs’ sphere and colony formation and proliferation abilities.The current study also demonstrated that miR-130 acts as a tumor suppressor in HCC by attenuating the Akt-signaling pathway.The Akt-signaling pathway is well documented and known to be involved in cell proliferation and apoptosis, and thus it affects the progression of different kinds of tumors.27,A recent report found that HCC patients presenting with high levels of CXCR2 and CXCL5 showed an activated state of the Akt/GSK3β-signaling pathway.28,More specifically, an activated Akt/GSK3β-signaling pathway was
MicroRNAs (miRNAs)are involved in the maintenance of the cancer stem cell (CSC)phenotype by binding to genes and proteins that modulate cell proliferation and/or cell apoptosis.In our study, we aimed to investigate the role of miR-1305 in the proliferation and self-renewal of liver CSCs (LCSCs)via the ubiquitin-conjugating enzyme E2T (UBE2T)-mediated Akt-signaling pathway.In addition, miR-1305 disrupted the activation of the Akt-signaling pathway by targeting UBE2T, and, ultimately, it repressed the sphere formation, colony formation, and proliferation, as well as tumorigenicity of LCSCs.In summary, miR-1305 targeted UBE2T to inhibit the Akt-signaling pathway, thereby suppressing the self-renewal and tumorigenicity of LCSCs.Those findings may provide an enhanced understanding of miR-1305 as a therapeutic target to limit the progression of LCSCs.
of obvious, denied and poorly communicated burden associated with the care of older adults. This study showed that despite the presence of this burden associated with caregiving, the commitment to preserve life makes the caregivers persist in the caring process. Their belief that caring for an older adult is an investment serves as a motivation to continue despite all odds. The authors hope that the findings will be useful for policymakers to formulate strategies to help caregivers of older adults mitigate their problems. These may include providing prevention programmes to reduce chronic illness, designing programmes to improve the quality of life of older adults, and providing relevant informational, emotional and social support to caregivers of older adults. The study further recommends that the provision of effective, structured educational programmes for caregivers would be beneficial to them. In addition, establishing support groups would play an important role in assisting caregivers to overcome their daily challenges. Ethical approval for the study was obtained from the Human Research Ethics Committee, Institute of Public Health, Obafemi Awolowo University, Ile-Ife and the Ethics and Research Committee, Obafemi Awolowo University Teaching Hospitals Complex, Ile-Ife. Permission was also obtained from the authority of the institution, and in addition, informed consent was obtained from the caregivers before the commencement of the study. We ensured that we respected the sanctity of life of the older adults and gave them the right to autonomy. For the older adults who were cognitively intact, the content and purpose of the study were explained to them and we sought their consent to interview their caregivers. We interviewed only caregivers of those elderly persons who consented that their caregivers should be interviewed. Participants were guaranteed anonymity and confidentiality of the information provided. This research was supported by the Consortium for Advanced Research Training in Africa (CARTA). CARTA is jointly led by the African Population and Health Research Center and the University of the Witwatersrand and funded by the Wellcome Trust, the Department for International Development under the Development Partnerships in Higher Education, the Carnegie Corporation of New York, the Ford Foundation, Google.Org, Sida and the MacArthur Foundation Grant No: 10-95915-000-INP.
Background: Caregivers of the elderly with chronic illnesses are exposed to the burden associated with their caregiving activities.This study described the lived experience of caregivers of older adults in Nigeria.Methods: A qualitative design guided by interpretive phenomenology informed the design of the research, whereby 15 in-depth interviews were conducted with caregivers of older adults with chronic illnesses.The interview sessions were audiotaped and transcribed verbatim and analysed using constant comparison analysis method.Results: Fifteen caregivers, from different parts of Osun State, Nigeria, took part in the in-depth interviews.The caregivers were aged between 19 and 70 years, ten were women, five of them had secondary education, seven were self-employed and six were in a spousal relationship.The study uncovered four interrelated themes with explanatory subthemes—commitment to preservation of life (managing challenges associated with daily routine, problem with mobility, bathing and grooming, feeding, and problem with hygiene) (ii) denial (refusal to accept that burden exists), other things suffer (disruption of family process, suffering from poor health and social isolation), (iv) reciprocity of care (pride in caregiving, caregiving as a necessity and not by choice, and law of karma).Conclusion: This study provides insight into the burden of care of older adults with chronic illness.Caregivers’ commitment to preserving life makes them provide assistance whose performance even run contrary to their own wellbeing.Intervention programme should be designed to support the caregivers thereby improving their wellbeing.
suggested that the thickness of the oxide layer around the nanospheres could account for the size of crack-initiating defects. Now, if we solve for strength σ in K_IC = σ√(πa), with a = 30 nm and K_IC = 0.9 MPa m^1/2 as is typical of silicon, one obtains σ ≈ 3 GPa, which is below the strength values measured here for specimens without large flaws, i.e. specimens S1 to S10. This leads to the conclusion that the layer of oxide formed during etching did not govern the strength of the Si particles tested here, since measured strength values correspond to critical flaws even smaller than the oxide thickness. It is, on the other hand, possible that the oxide layer could have healed small cracks along the Si particle surfaces; however, the presence of such cracks along the surface of individual Si crystals is unlikely as they would be healed by Si diffusion along the Al/Si interface or through the aluminium phase during the Si particle coarsening heat treatment. We thus deem the influence of larger flaws on the strength of the Si particles measured here to be representative of the behaviour of Si particles within the coarsened Al-12.6%Si alloy from which they were extracted. We also studied the possibility that Si particle surface defects might have been produced by the etching procedure used in this work. To this end, we extracted particles from the alloy also by electroetching with sodium chloride and with nitric acid as electrolytes. We found no indication of a difference in the observable particle defects with etching procedure, giving confidence that the observed defects are originally present in the silicon particles within the alloy. A microscopic three-point bending test is developed to measure the flexural strength of hard reinforcing particles of high aspect ratio. Although focused ion beam milling is used in sample preparation, the particle surface probed in tension is pristine, in the sense that it is not affected by focused ion beam milling or by redeposition. This is achieved on one hand through the specific specimen preparation procedure and on the other hand through the use of a tapered bend beam cross section, which focuses the stress at the centre of the bottom surface of the bending beam while reducing it at the edges. Bespoke finite element modelling is used to assess the peak stress at fracture. Misalignment can lead to overestimation of calculated strength values by at most 10%, the most critical source of error being deviations of the indenter placement point from the beam span centre. This level of error is of the same order as the uncertainty resulting from errors in sample dimension measurement. Alternatively, simple beam theory can be used to interpret the data; this leads to an overestimate of the resulting strengths by about 10%, largely as a consequence of the rather short span of the specimens. Results on coarsened eutectic silicon particles extracted from Al-12.6 wt.% Si show that: (i) coarsened Si particles can be very strong, with a characteristic strength around 9 GPa when particles with an effective surface area near 7 μm² are tested in bending; and (ii) that, when such particles contain microstructural flaws, such as pin-holes and particularly trench-like interfaces along their facets, their strength is strongly diminished. Given the high particle strength values that are recorded in the absence of such flaws, their elimination from Si particles in Al–Si alloys should be a potent pathway to strongly improved strength and ductility in 3xx series aluminium casting alloys.
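The flaw-size argument above can be checked numerically. A minimal sketch, assuming the standard fracture-mechanics relation K_IC = σ√(πa) quoted in the text:

```python
# Worked check of the flaw-size estimate quoted above: solving
# K_IC = sigma * sqrt(pi * a) for the strength sigma.
import math

K_IC = 0.9e6          # fracture toughness of silicon, Pa*m^0.5 (0.9 MPa m^1/2)
a = 30e-9             # assumed flaw (oxide-layer) depth, m

sigma = K_IC / math.sqrt(math.pi * a)
print(f"strength governed by a 30 nm flaw: {sigma / 1e9:.1f} GPa")    # ~2.9 GPa

# Inverting the same relation for a measured strength of ~9 GPa gives the
# critical flaw size implied by the strongest particles tested here
sigma_meas = 9e9
a_crit = (K_IC / sigma_meas) ** 2 / math.pi
print(f"implied critical flaw size at 9 GPa: {a_crit * 1e9:.1f} nm")  # ~3 nm
```

The second print confirms the statement above that the measured strengths correspond to critical flaws smaller than the oxide thickness.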
A microscopic three-point bending test that measures the strength of faceted particles of high aspect ratio is developed and used to probe individual coarsened plate-like silicon particles extracted from the eutectic Al-12.6%Si alloy.Focused ion beam milling is used in sample preparation; however, the tapered beam cross-section and multistep preparation procedure used here ensure that the particle surface area subject to tension in mechanical testing is free of ion beam damage.Results show that coarsened silicon particles in aluminium can reach strength values on the order of 9 GPa when they are free of visible surface defects; such high strength values are comparable to what has been reported for electronic-grade silicon specimens of the same size.By contrast, tests on eutectic silicon particles that feature visible surface defects, such as pinholes or boundary grooves, result in much lower particle strength values.Reducing the incidence of surface defects on the silicon particles would thus represent a potent pathway to improved strength and ductility in 3xx series aluminium casting alloys.
The “Supersymmetric Artificial Neural Network” in deep learning espouses the importance of considering biological constraints in the aim of further generalizing backward propagation. Looking at the progression of ‘solution geometries’, going from SO(n) representation to SU(n) representation has guaranteed richer and richer representations in the weight space of the artificial neural network, and hence better and better hypotheses were generatable. The Supersymmetric Artificial Neural Network explores a natural step forward, namely SU(m|n) representation. These supersymmetric biological brain representations can be represented by supercharge-compatible special unitary notation SU(m|n), parameterized by θ and θ̄, which are supersymmetric directions, unlike the θ seen in the typical non-supersymmetric deep learning model. Notably, supersymmetric values can encode or represent more information than the typical deep learning model, in terms of “partner potential” signals, for example.
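As a purely illustrative aid, and not an implementation of the SU(m|n) construction itself, the sketch below contrasts a real-weighted layer with a complex-weighted layer, to convey how enlarging the symmetry group of the weight parameterization enlarges the representable hypothesis space. The layer sizes, the modulus nonlinearity and the random initialisation are arbitrary choices.

```python
# Toy illustration only: a real-valued layer versus a complex-valued layer,
# meant to convey how a larger symmetry group enlarges the weight space.
# It does not implement the SU(m|n) superalgebra structure itself.
import numpy as np

rng = np.random.default_rng(0)

def real_layer(x, n_out):
    """Ordinary dense layer with real weights (SO(n)-style weight space)."""
    W = rng.standard_normal((n_out, x.shape[0]))
    return np.tanh(W @ x)

def complex_layer(x, n_out):
    """Dense layer with complex weights (SU(n)-style weight space): each
    connection carries two real parameters (magnitude and phase)."""
    W = (rng.standard_normal((n_out, x.shape[0]))
         + 1j * rng.standard_normal((n_out, x.shape[0])))
    z = W @ x.astype(complex)
    return np.tanh(np.abs(z))            # modulus nonlinearity, a common choice

x = rng.standard_normal(4)
print("real-layer output:   ", real_layer(x, 3).round(3))
print("complex-layer output:", complex_layer(x, 3).round(3))
# A hypothetical "supersymmetric" layer would further add the partner
# directions (theta, theta-bar) to this parameterization, per the text above.
```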
Generalizing backward propagation, using formal methods from supersymmetry.
space might contain as many as 1060 compounds .This is where FBDD comes to enrich hit compounds and produce more potent lead compounds .The two commonly used approaches for the optimization of fragment hits into lead-like compounds are Fragment growing and Fragment linking.The first is the addition of functional groups to the active fragment core in order to optimize interactions with the binding site, while the second is a less commonly used method, which links fragments that bind in adjacent sites of a target protein to turn low affinity fragments into high affinity leads."Successful examples of hit optimization are well documented, such as the discovery of Beta-site amyloid precursor protein cleaving enzyme 1 by Amgen towards inhibitors against Alzheimer's disease .Another example is the fragment based discovery of inhibitors against the phosphatidylinositol-3 kinases which are involved in cancer, rheumatoid arthritis, cardiovascular disease and respiratory disease.Computational methods have provided a powerful toolbox for target identification, discovery and optimization of drug candidate molecules.Information technologies coupled to statistics and chemoinformatic tools shed light to disease mechanisms and phenotypes revealing potential drug targets to be further validated by high throughput screening technologies.Consecutively, multiple methods allow for the prediction and characterization of binding sites, studying the dynamic nature of drug targets, identifying new active molecular entities and their optimization.Nowadays, large databases of readily commercially available compounds and ligand chemical space exploration offer drug discovery scientists with enormous data to handle.Different methods, based on readily available information on the biological system under study are evolving to assist the manipulation and handling of this data.Moreover, integration of ‘-omics’ technologies and databases may facilitate the identification of novel drug targets or the design of network-based multi-target drugs.Structure and ligand based methods are the most commonly used in the drug discovery field, however, emerging combinatorial techniques such as proteochemometrics are emerging.All the computational methods mentioned in this review, either towards target identification, either towards novel ligand discovery continue to evolve and their synergy is what we envisage that will facilitate cost-effective and reliable outcomes in an era of big data demands.
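As an illustration of the kind of chemoinformatic filtering applied to large compound libraries in fragment-based work, the sketch below applies the widely used "rule of three" fragment-likeness heuristic with RDKit. The rule of three and the example SMILES are not taken from this review; they are included only to make the screening workflow concrete.

```python
# Sketch of a simple fragment-likeness screen ("rule of three": MW <= 300,
# cLogP <= 3, H-bond donors/acceptors <= 3, rotatable bonds <= 3) applied to
# a small set of hypothetical SMILES. Requires RDKit.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

library = {
    "frag1": "c1ccccc1C(=O)N",                   # benzamide-like fragment
    "frag2": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",       # ibuprofen, lead-sized
    "frag3": "c1ccc2[nH]ccc2c1",                 # indole
}

def rule_of_three(mol):
    return (Descriptors.MolWt(mol) <= 300
            and Descriptors.MolLogP(mol) <= 3
            and Lipinski.NumHDonors(mol) <= 3
            and Lipinski.NumHAcceptors(mol) <= 3
            and Descriptors.NumRotatableBonds(mol) <= 3)

for name, smiles in library.items():
    mol = Chem.MolFromSmiles(smiles)
    verdict = "fragment-like" if rule_of_three(mol) else "outside rule-of-three"
    print(name, verdict)
```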
In the big data era, voluminous datasets are routinely acquired, stored and analyzed with the aim to inform biomedical discoveries and validate hypotheses.No doubt, data volume and diversity have dramatically increased by the advent of new technologies and open data initiatives.Big data are used across the whole drug discovery pipeline from target identification and mechanism of action to identification of novel leads and drug candidates.Such methods are depicted and discussed, with the aim to provide a general view of computational tools and databases available.We feel that big data leveraging needs to be cost-effective and focus on personalized medicine.For this, we propose the interplay of information technologies and (chemo)informatic tools on the basis of their synergy.
Aedes aegypti mosquitoes are the primary vector of dengue viruses.These are a great concern for public health burden in over one hundred countries, mainly located in tropical and sub-tropical regions.There are more than two billion people living in at risk areas and tens of millions of apparent infections are estimated to occur each year.This is a particularly serious threat since there are no known drug treatments available and the first vaccine has only recently been licensed for use but is only recommended in areas with a high dengue burden and only in those over nine years of age.There are also a range of other insect-vectored pathogens that pose serious threats to public health including Zika, malaria and filariasis.Current methods do not appear sufficient to deal with these problems suggesting a necessity to investigate alternate methods of control.In recent years there have been rapid advances in tools available to molecular biologists.This has made the idea of genetic control methods a very real prospect.As such, a number of different genetic control strategies have been proposed that could either supplement or replace the methods currently in place.One category of genetic control, would be introduced by releasing into the wild, insects carrying modified genes rendering them refractory to one or more pathogens of medical importance, such as one or more dengue viruses.These modified genes would be combined with a gene drive mechanism, causing them to be inherited by the progeny of released mosquitoes at a super-Mendelian rate, meaning they could spread towards fixation in a population over the course of a number of generations.One such class of gene drive system that is currently under development is known as engineered underdominance.Underdominance, in a single locus setting, refers to the situation whereby a heterozygous individual is less fit than homozygous individuals.Such single-locus underdominance represents a gene drive system in its own right and has been engineered in Drosophila melanogaster and also considered theoretically.In the example considered here, a similar effect is achieved via the introduction of two independently inherited transgenic constructs each carrying a lethal genetic element and a suppressor for the lethal at the other locus.In essence this can be thought of as two killer-rescue systems split across two transgenic constructs.Linked to each of the transgenic constructs are cargo genes, here assumed to be genes rendering individuals refractory to dengue, for example those developed by Franz and colleagues.In this type of system, individuals survive only if they carry no transgenic constructs or if they carry at least one copy of both.This creates a selection pressure for individuals to carry both transgenic constructs and thus potentially allows the refractory genes to be spread throughout a population.If these genes spread to fixation within a population, we would expect that over time infections with the targeted dengue strain would be eliminated, or at least significantly reduced, in the affected area.Two of the most important classifiers of gene drive systems are related to their persistence and invasiveness.Persistence can be thought of in terms of two distinct categories; self-limiting systems where transgenes naturally fade away over time or self-sustaining systems where transgenes persist indefinitely in the absence of a genetic change and may even increase in prevalence over time.Previous theoretical studies of engineered underdominance systems 
have demonstrated that they sit on the borderline between persistence classifications.In particular, such systems have a threshold for the introduction of insects above which transgenes will spread and below this the transgenes will be eliminated.Mathematically this can be thought of as three equilibrium points; two stable equilibria typically representing fixation of wild-type or introduced alleles and an unstable equilibrium that determines which allele heads toward fixation.The exact size of this introduction threshold in terms of allele frequencies is dependent on the fitness of transgenic insects relative to wild-type individuals.In terms of invasiveness, a gene drive may either be defined as ‘global’ where it would be expected to spread into every insect in every linked population or ‘local’ whereby there would only be spread within a target population.The invasiveness of gene drive systems has been a key consideration in terms of their regulation as the Cartagena Protocol prohibits the release of any system capable of spreading across international borders, unless a prior agreement has been reached.Invasiveness is also a key ecological consideration since precise interventions will potentially allow effects to a target pest species with little or no impact on other populations.As such, it could be considered desirable for a gene drive to spread within a target population but not beyond.Previous theoretical work has shown that engineered underdominance systems are extremely unlikely to spread to fixation in any non-target populations as the introduction threshold is too large to be reached through migration alone.This same work went on to show that typical migration rates may only result in ∼ 3.2% of the non-target population being made up of transgenic mosquitoes.Thus, engineered underdominance appears to be an exciting prospect in that these systems are feasible to engineer and seem able to satisfy a number of key regulatory and ecological issues associated with the release of transgenic insects.Previous theoretical work on engineered underdominance systems has focused on the case whereby transgenic individuals of both sexes are released into the wild population; carry lethals that affect both sexes; and display full suppression of one or two copies of a given lethal depending upon the number of copies of the relevant lethal suppressor possessed.Whilst these assumptions are reasonable, there are a
In such a system, two genetic constructs are introduced, each possessing a lethal element and a suppressor of the lethal at the other locus. The results are also relevant to Ae. aegypti as vectors of Zika, yellow fever and chikungunya viruses, and to the control of a number of other insect species and thereby of insect-vectored pathogens.
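The threshold behaviour described above for engineered underdominance can be illustrated with the simpler single-locus underdominant case, which shares the same qualitative structure: two stable boundary equilibria separated by an unstable internal one. Below is a minimal sketch with illustrative fitness values; these are not parameters taken from the studies cited.

```python
# Minimal sketch (illustrative fitness values, not from the study): the
# single-locus underdominance recursion, showing the introduction-frequency
# threshold above which the transgene spreads and below which it is lost.
def next_freq(p, w_TT=0.95, w_TW=0.5, w_WW=1.0):
    """One discrete, random-mating generation for transgene frequency p."""
    q = 1.0 - p
    mean_w = p * p * w_TT + 2 * p * q * w_TW + q * q * w_WW
    return (p * p * w_TT + p * q * w_TW) / mean_w

def simulate(p0, generations=40):
    p = p0
    for _ in range(generations):
        p = next_freq(p)
    return p

# Unstable internal equilibrium p* = (w_WW - w_TW) / (w_TT + w_WW - 2*w_TW)
threshold = (1.0 - 0.5) / (0.95 + 1.0 - 2 * 0.5)     # ~0.526 for these values
print(f"unstable equilibrium: {threshold:.3f}")
for p0 in (0.40, 0.50, 0.55, 0.70):
    print(f"release frequency {p0:.2f} -> after 40 generations: {simulate(p0):.3f}")
```

Releases starting above the unstable equilibrium drive the transgene towards fixation, while those below it are eliminated, mirroring the two-locus behaviour described in the text.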
these cases do however display a diminishing return to the addition of extra releases.The extent of this diminishing return varies depending on the strategy considered and is most clearly seen in male-only releases.However, in spite of this diminishing return, it is likely that a strategy lying close to the borderline between success and failure could be improved either through engineering effort to reduce fitness costs; increasing the number of releases made; introducing insects at a greater release ratio; or some combination of these measures.These measures could also be used to provide a buffer against any uncertainty in the wild population size or measurements of fitness costs.As with any mathematical model, this work is based on a number of simplifying assumptions that are common within this type of modelling work.Since the validity of these assumptions has been discussed previously they are not considered any further here.There are however a number of areas in which future work would be useful in order to better understand where engineered underdominance gene drive systems will be successful and where they will fail.The genetic systems modelled here assume two distinct forms of lethal suppression.It is quite possible that a laboratory engineered system may not necessarily fit precisely into the categories of strong or weak lethal suppression, instead falling somewhere between these two classifications.As such, future work will be necessary in order to ascertain exactly how this affects the criteria for success of a given system.However, we would anticipate that release thresholds for such an intermediate system would lie somewhere between the two cases discussed here.In this work a population genetics model based on discrete generations has been considered.For a species such as Ae.aegypti this assumption may be a reasonable approximation where populations are synchronised by climate conditions, but is unlikely to hold for wild populations which are thought to reproduce continuously, at least in many areas.To build upon this work it would be useful to reformulate this model as differential equations that would enable the consideration of population dynamics and timing of lethality, in a similar manner to that considered for other genetic control systems.It is feasible here that the timing of transgene lethality and/or density-dependent competition during the larval phase could alter the transgene introgression thresholds, however we would not expect to see large differences.It is also feasible that overlapping generations could allow a single, male-only release with weakly suppressed lethals to produce lasting transgene introgression if the released transgenic individuals survive long enough to mate with the first transgenic offspring.Huang et al. 
have considered age and spatial structuring of mosquito populations in the context of two-locus engineered underdominance gene drive.This work could be extended to consider a number of the alternate release strategies and genetic systems studied here.This would enable us to examine the effects of different configurations of spatial release and how these may differ for the various systems considered within this study.The model presented here could also be adapted into a two-deme version.This would allow the investigation of a number of factors important from a regulatory point of view.In particular it would allow for the identification of threshold migration rates that result in different outcomes.Here we would expect that high migration rates could result in one of two outcomes.They could potentially cause transgenes to spread into a neighbouring population, or alternatively the migration of wild-type individuals into the targeted population may reduce the transgene frequency below the necessary threshold thereby preventing successful transgene introgression.As shown by Marshall and Hay, for lower migration rates we would expect transgenes to spread within the target population and only reach extremely small frequencies in neighbouring populations.Within this work it has been assumed that the cargo genes are permanently linked to the transgenic constructs and that no resistance evolves within the population.The likelihood of these components becoming unlinked is currently unknown and may only be established in long term experiments, although the effects of this could be modelled in a manner similar to that of Marshall.As discussed above, results here clearly demonstrate the thresholds that must be satisfied for successful transgene introgression and the timescale upon which this acts.This provides a clear indication that engineered underdominance gene drive systems could potentially be used to replace wild-populations with refractory individuals, thus reducing/eliminating the pathogen.However, the incidence of a pathogen cannot be directly related to transgene introgression levels without a number of additional modelling assumptions.Due to the fairly general nature of modelling assumptions given here, we anticipate that results would be applicable to a range of insect species and pathogens.As discussed here, Ae.aegypti mosquitoes are the primary vectors of dengue.They are also vectors of yellow fever, chikungunya and Zika viruses.We thus anticipate that this model would be relevant for assessing the control of those viruses and other pathogens also.In addition to this, it is likely that results here would also be applicable to other mosquito-borne diseases such as malaria and filariasis, and other genetic pest management scenarios.This study indicates that feasible release strategies and genetic systems are likely to allow successful transgene introgression across a range of fitness regimes.In particular, this work helps to define the relationship between specific genetic designs and constraints on appropriate release strategies.Further modelling work would likely be able to refine this further.
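The two-deme extension suggested above can likewise be sketched by adding symmetric migration to the same illustrative recursion. The migration rates and fitness values below are arbitrary and serve only to show how outcomes can differ on either side of a threshold migration rate.

```python
# Illustrative two-deme extension of the underdominance recursion: symmetric
# migration at rate m between a target deme (released at p_target) and a
# wild-type neighbouring deme. Parameters are illustrative, not fitted.
def next_freq(p, w_TT=0.95, w_TW=0.5, w_WW=1.0):
    q = 1.0 - p
    mean_w = p * p * w_TT + 2 * p * q * w_TW + q * q * w_WW
    return (p * p * w_TT + p * q * w_TW) / mean_w

def two_deme(p_target, p_neighbour, m, generations=100):
    for _ in range(generations):
        # migration first, then selection within each deme
        pt = (1 - m) * p_target + m * p_neighbour
        pn = (1 - m) * p_neighbour + m * p_target
        p_target, p_neighbour = next_freq(pt), next_freq(pn)
    return p_target, p_neighbour

for m in (0.01, 0.05, 0.30):
    pt, pn = two_deme(p_target=0.7, p_neighbour=0.0, m=m)
    print(f"m = {m:.2f}: target frequency {pt:.2f}, neighbour frequency {pn:.2f}")
```

The printed frequencies indicate whether introgression persists in the target deme and how much of the transgene leaks into the neighbouring deme for each migration rate.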
Engineered underdominance is one of a number of different gene drive strategies that have been proposed for the genetic control of insect vectors of disease.We anticipate that results presented here will inform the future design of engineered underdominance gene drive systems as well as providing a point of reference regarding release strategies for those looking to test such a system.Our discussion is framed in the context of genetic control of insect vectors of disease.One of several serious threats in this context are Aedes aegypti mosquitoes as they are the primary vectors of dengue viruses.However, results are also applicable to Ae.
for NGO distributors not providing training was that some perceived government veterinarians to be easily accessible to farmers in case of need.Our survey instrument collected information on whether individuals had received any training from sector vets.Hence our next set of results now additionally control for such training having been received.When sector vets offer training, they often do so to groups of individuals in a village.In our survey of sector vets, the majority of them reported that either everybody in a village or every farmer in a village is invited to attend a training.Only 7% of vets stated that they select participants based on need and only 11% said they decide to help a household or give them advice when they are visiting them for other reasons.As expected, there is strong evidence for households themselves demanding the type of group training session from vets described above.All but one of the 30 sector vets stated that farmers approached them requesting training and the type of training provided by the sector vets is very closely in line with the type of training demanded by the farmers.This raises the concern that the indicator for having received training from the sector vet is endogenous and likely biased upwards as better or more needy farmers may have demanded such services in the first place.The results are in Table 6.As in the previous section we focus again on on the main outcomes, now also including the number of assets owned.We note that across most outcomes, in these specifications where we also control for training from sector vets, the indicator for training received with the Girinka cow remains positive, significant and of similar magnitude to the earlier results.Two additional robust findings emerge.First, the provision of training from sector vets has little significant impact on these outcomes of interest.Even if such training is sought out endogenously, it appears to be not much correlated with later milk production, earnings and asset accumulation.This might tentatively suggest that the returns to training are especially high when provided at the same time as livestock asset transfers, but that training provided subsequently has far lower returns.This in turn might be because such training is only sought, and only provided with some delay, when outcomes are deteriorating with regards to livestock production.An alternative explanation for the low returns to training received from sector vets is that the quality of training that sector vets are able to provide is just much lower than that provided with Girinka cows.Indeed, 72% of sector vets interviewed indicated that it was in their job description to provide “advice” rather than training, suggesting that the nature of sector vet training may have been less formal.Sector vets also identified their training on a particular topic to be composed on average of fewer sessions, and of shorter length relative to NGO offered training.Second, the results in Table 6 also show the provision of packages of medicines and other inputs at the same time as asset transfers has little significant impact on the outcomes of interest, indeed a number of the coefficients have negative point estimates.This suggests that easing capital constraints slightly at the same time as livestock asset transfers is far less effective than such transfers being bundled with the provision of training.15,Both these results are informative for the design of future livestock asset transfer programs: training should be provided, there 
should not necessarily be a reliance on existing public sector vets as a source of training to farmers, and the provision of other inputs such as medicines, appears less effective in this setting than the provision of training per se.The Girinka One Cow policy is an ambitious and extensive asset transfer program, with over 130,000 livestock distributed to the rural poor in since 2006.The program provides a first opportunity to study the impacts of combining training with livestock asset transfers, relative to only providing livestock assets.We are able to do so because we note that the Girinka program was jointly implemented by government agencies and NGOs.The role of NGOs lay predominantly in the the distribution of cows.Given NGOs varied in their capacity to provide training alongside cows, we observe some beneficiary households only receiving cow transfers and others receiving cows with complementary training.As farmers themselves do not self-select to receive training, but rather the provision of training is driven by supply/capacity constraints faced by NGOs, the assignment of training is plausibly exogenous to other factors that drive outcomes related to milk production, livestock productivity, household earnings and assets, as measured up to six years after the initial livestock asset transfer."Our results show that even in a setting where linkages between farmers and markets remain weak – that might attenuate the returns to training all else equal say because farmers are unable to sell at high prices to urban consumers, the provision of training with asset transfers still has permanent and economically significant impacts on household's milk production, livestock productivity, earnings, and asset accumulation.This training is found to be far more effective for these outcomes of interest than the availability of subsequent training from local government vets, or the provisions of small amounts of capital inputs provided with livestock asset transfers.The rate of return to the provision of training is high and likely larger than for other investments available to beneficiary households: even a conservative estimate suggests training costs would be recovered and the program break even if the training allows households to
We present evidence from Rwanda's Girinka ('One Cow per Poor Family') program that has distributed more than 130,000 livestock asset transfers in the form of cows to the rural poor since 2006.Supply side constraints on the program resulted in some beneficiaries receiving complementary training with the cow transfer, and other households not receiving such training with their cow.We exploit these differences to estimate the additional impact of receiving complementary training with the cow transfer, on household's economic outcomes up to six years after having received the livestock asset transfer.Our results show that even in a setting such as rural Rwanda where linkages between farmers and produce markets are weak, the provision of training with asset transfers has permanent and economically significant impacts on milk production, milk yields from livestock, household earnings, and asset accumulation.The results have important implications for the design of 'ultra-poor' livestock asset transfer programs being trialled globally as a means to allow the rural poor to better their economic lives.
was done to compare all included variables between mothers included and excluded in the current analysis.IPV exposure: The Intimate Partner Violence Questionnaire is a 12-item inventory adapted from the WHO multicountry study and the Women’s Health Study in Zimbabwe and assessed recent exposure to emotional, physical, and sexual abuse.Mothers were asked about exposure to partner behavior and frequency of occurrence.Mothers completed the IPVQ at the 28–32 week antenatal visit and at 10 weeks, 6, 12, 18 and 24 months postpartum.Partner behavior indicating emotional IPV included having been insulted or made to feel bad, having been humiliated in front of others, intentionally scared or intimidated or threatened with physical harm.Physical IPV included being slapped, pushed, shoved, hit with an object, beaten or choked.Sexual IPV exposure was classified based on having been forced to have sex, afraid not to have sex or forced to do something sexual which was degrading or humiliating.Using questionnaire responses mothers were grouped into four categories of exposure: no IPV where all past year behaviours were “never” experienced; isolated or low IPV was designated where any past year behaviours were experienced as “once” and none more frequently than once; moderate where past year behavior was experienced “a few times”; and high where “many times” was indicated.This was done at each of the six time points to investigate changing exposure patterns during the 2 year period of follow up.Scoring guidelines were devised for the purposes of this study, and were based on prior work in South Africa.Maternal Maltreatment in Childhood: The Childhood Trauma Questionnaire is a 28- item inventory assessing three domains of childhood abuse, and two domains of childhood neglect, occurring at or before the age of 12 years.Each item is scored on a frequency scale from 1 to 5, such that each subscale is scored on a spectrum from 5 to 25.Dichotomous variables were included in the present analysis, as previously described, such that above threshold for each domain was defined as: physical neglect; physical abuse; emotional neglect; emotional abuse; and sexual abuse.Mothers completed the CTQ antenatally at 28–32 weeks’ gestation.Sociodemographic variables were collected from an adapted questionnaire used in the South African Stress and Health study.Maternal age, income , education, employment and partnership status were self-reported antenatally at 28–32 weeks’ gestation.The DCHS was approved by the Faculty of Health Sciences, Human Research Ethics Committee, University of Cape Town and by the Western Cape Provincial Health Research committee.Mothers provided informed consent in their preferred language: English, Afrikaans or isiXhosa and were given R100 for travel reimbursement to reach study sites.Study staff were trained on the content of questionnaires and ethical conduct of violence research, including confidentiality and safety issues.Interviews were conducted privately, data were de-identified and only accessible by study staff to ensure confidentiality.Staff were trained to recognise signs of mental health issues as well as circumstances endangering mothers or children, including Department of Health mandatory reporting requirements for endangerment.Where identified, staff were trained to refer participants to appropriate care or social services in the Paarl area specialising in the issue identified.Further, all women involved in the study, independent of identified mental or physical health issues, receive 
information regarding social and support service providers in the area.Latent class growth analysis was used to derive latent classes indicating severity patterns of emotional, physical and sexual recent IPV victimisation separately and across the six time points included.LCGA is a type of growth mixture modeling in which there is no within-class variability modelled.Categorical IPV data was used to estimate the latent classes.Compared to latent class analysis, LCGA of categorical data allows for a more parsimonious model.In the Mplus implementation, continuous Gaussian variables are assumed to underpin each categorical class-indicator, so the item thresholds are also modelled more efficiently.LCGA was used to create longitudinal classes separately of IPV sub-types.A second analysis was done to investigate patterns across all sub-types of IPV.This analysis builds upon the association models discussed above – by utilising latent class analysis to determine combined latent classes for all IPV sub-types to investigate differential profiles considering all IPV sub-types.LCA was used as an analytic approach to allow within class variation to enable different patterns between groups of key variables to emerge within each class; thus allowing cross sectional and longitudinal heterogeneity to be captured.Both LCGA and LCA group individuals into classes based on profiles of indicator variables, allowing identification of heterogenous groups of homogenous person-centred patterns.LCGA and LCA analyses were completed in MPlus 8.0.Optimal number of classes were determined based on multiple statistical criteria, including Akaike Information Criterion and sample size adjusted Bayesian Information Criterion as well as sufficient class size.To ensure a meaningful class size and allow clinically relevant interpretation, we excluded any models where the smallest class was fewer than 30 women, similar to other studies.To investigate associations of maternal maltreatment in childhood with LCGA and LCA class membership for IPV, we utilised the bias-adjusted 3-step approach, which takes into account inaccuracy of class assignment.Using this 3-step approach, multinomial logistic regression analyses were performed; sociodemographic variables were included in the LCGA and LCA models as covariates.All analyses were restricted to mothers who contributed data for at least three of the six time points to enable investigation of changes over time for all included women .A sensitivity analysis was done to ensure there were no meaningful differences for key variables for mothers included compared to mothers excluded in the current analysis.No meaningful differences were found for variables
Maternal IPV (emotional, physical and sexual) was measured at six timepoints from pregnancy to two years postpartum (n = 832); sociodemographic variables and maternal maltreatment in childhood were measured antenatally at 28–32 weeks’ gestation. Associations between maternal maltreatment in childhood and IPV latent class membership (to identify patterns of maternal IPV exposure) were estimated using multinomial and logistic regression.
included in the present analysis.The study sample was characterised by low levels of education, and a minority of mothers were employed, or were married/in a stable relationship.The majority were born in the study area, and earned >R1,000 per month.Median age was 26.2 years.Levels of childhood exposure to maltreatment were high, with over a third of the sample reporting at least one form of maltreatment and 5% reporting all five types.Polyvictimisation based on IPV class membership was prevalent; 44% of women were grouped into high or moderate IPV classes for at least one sub-type with 6% grouped into high or moderate classes for all three IPV sub-types, Table 2.IPV severity prevalence rates across time points and by sub-type are presented in Supplemental Table 2.Model fit statistics for models with a varying number of classes are presented in Table 3.A 3-class model was chosen for emotional IPV, based on lowest ssaBIC and AIC while retaining a meaningful class size; entropy was 0.732.Women in the first class were characterised by no or isolated/low emotional IPV over time with increased probability of emotional IPV exposure during pregnancy compared to postpartum.Women in the second class were characterised by moderate emotional IPV over time; those in the third class were characterised by high IPV across time points, Fig. 1.The no/low emotional IPV class was treated as the reference class for multinomial regression.Exposure to childhood abuse of all types was associated with membership in the high emotional IPV exposure class , Table 4.Childhood experience of physical abuse and physical neglect was associated with membership in the moderate emotional IPV class, Table 4.A 3-class model was chosen for physical IPV, based on lowest ssaBIC and AIC as well as meaningful class size; entropy was 0.643, Table 3.Women in the first class were characterised by low/isolated or no IPV with slightly increased probabilities of exposure during pregnancy, compared to postpartum; women in the second class were characterised by moderate physical IPV over time; and women in the third class were characterised by high IPV over time, Fig. 
2. Exposure to each of the child maltreatment domains was significantly associated with membership in the high physical IPV class compared to the no/low physical IPV class. Exposure to child maltreatment increased this risk by between two- and five-fold. In our sample, prevalence of recent IPV during pregnancy was very high, with 24%, 18% and 6% of mothers reporting exposure to emotional, physical and sexual IPV, respectively. This may be due to a number of factors. Few studies in South Africa have reported pregnancy rates of IPV, specifically by sub-type, though rates from our sample are within the ranges reported from other African countries, such as Nigeria. Further, of the two other South African studies reported, one was in Durban, which has lower overall rates of violence than the Western Cape, where the current study is located. The other was restricted to a sample of Black African participants, whereas reported rates were slightly higher in the mixed-ancestry community included in our study sample. Culture and gender norms within focal communities may in part explain our high prevalence of IPV. Zembe and colleagues investigated social risk factors, relationship power inequity and IPV in a sample in the Western Cape, finding high acceptability of violence in intimate relationships as well as gender norms supporting aggressive masculinity and subservient femininity, particularly in historically marginalised groups such as those included in the present study. Distinctive patterns of longitudinal IPV exposure emerged when investigating separate IPV sub-type profiles, classifying women into high, moderate and low exposure groups over time for emotional and physical IPV and a high/moderate and no/low exposure group for sexual IPV. As expected, the largest classes for each IPV sub-type were characterised by no/low IPV exposure. However, a significant proportion of women were in high or moderate classes and, importantly, in low exposure groups there was an increase in IPV exposure during pregnancy across all IPV sub-types. Childhood physical, emotional and sexual abuse, as well as physical neglect, were all significantly associated with membership in the high IPV exposure classes compared to no or low exposure classes. There was a large degree of longitudinal stability within classes, which indicates a consistent and repeated pattern of IPV exposure, based on IPV sub-type and intensity of exposure, from pregnancy through two years postpartum. Further, a large proportion of women were grouped into high or moderate classes for both physical and emotional IPV. These women exhibit sustained risk and were exposed to prolonged high or moderate intensity IPV. Though this may be a pattern already established and unrelated to pregnancy, it appears to be present during, and persists following, pregnancy. Given that negative child health outcomes have been linked to IPV exposure in utero as well as during early life, this pattern of sustained exposure may represent an important risk not only to the mental and physical well-being of these mothers, but also to that of their children. Women grouped into high or moderate exposure classes over time are a key group to identify early through screening conducted during routine antenatal care. The majority of women in South Africa access antenatal care and would benefit from targeted screening and referral, particularly those exposed to ongoing high or moderate IPV. Our study identified a second risk group, which emerged across emotional and physical IPV sub-types. The
largest class of women for both emotional and physical IPV exhibited a pattern of increased probability of exposure during pregnancy relative to the postpartum time-points.This is consistent with some previous
In latent class analysis separating by IPV sub-type, two latent classes of no/low and moderate sexual IPV and three classes of low, moderate, and high emotional and physical IPV (separately) were detected. In combined latent class analysis, including all IPV sub-types together, a low, moderate and high exposure class emerged, as well as a high antenatal/decreasing postnatal class.
longitudinal studies, which found increased IPV exposure during pregnancy compared to postpartum.However, there is not a consensus as other studies have cited a reduction in IPV prevalence during pregnancy, compared to pre-pregnancy prevalence.This may link to risk associated with increased financial strain during pregnancy, conflict arising from unwanted pregnancy, substance use or traditional gender norms.Higher prevalence during pregnancy may be linked to relationship status as pregnant women are more likely to be partnered.The majority of women in the current study exhibited higher probabilities of IPV exposure during pregnancy; as mentioned, given that the vast majority of women in South Africa attend antenatal care, this provides a critical opportunity to screen for IPV in those with sustained risk over time as well as in women who have increased risk during pregnancy.A second focus of this study was to inform understanding of the cycle of abuse.In our LMIC sample of women, childhood sexual, physical and emotional abuse, and physical neglect were found to be associated with membership in high IPV exposure classes.The strength of these associations was high, with two to five-fold increased odds depending on type of childhood maltreatment, for women exposed.Our study confirms and extends previous findings linking sexual maltreatment or physical childhood maltreatment and IPV in adulthood.However, previous research has not found increased risk for those exposed to emotional abuse.Notably, we did not find particular evidence of specificity in terms of transmission of risk from type of childhood exposure to type of adult IPV.Membership of high IPV classes was more robustly associated with early adversity than moderate class membership, particularly for physical IPV and sexual IPV.However, moderate emotional IPV exposure was significantly associated with most types of childhood adversity.This is potentially significant, as emotional IPV is often overlooked both in research and in public health settings, even though emotional IPV may have serious effects on maternal mental health and is often more prevalent than physical IPV.Further, these findings may indicate an important relationship between intensity of IPV exposure and maternal maltreatment in childhood as the strengths of association increased with membership in higher intensity classes across all sub-types of IPV.Further, given associations between early life adversity and adult exposure to violence, children of women in the high or moderate exposure groups may already be at increased risk for victimisation in adulthood as well as negative health outcomes during childhood.To further investigate the cycle of abuse across the lifespan, we explored overall patterns for combined IPV sub-types.This approach allowed investigation of differential patterns across IPV sub-types, important given that these are known to co-occur.Expected patterns emerged for three classes, these patterns were similar to those that emerged when investigating IPV sub-types separately, as described above.However, an interesting additional class emerged with a very high probability of exposure to IPV during pregnancy, which decreased over time.There were signals for this in the LCGA analysis by separate IPV sub-types, where the largest class of women had slightly elevated levels of IPV antenatally.However, in the combined analysis, antenatal probabilities were extremely high for moderate or high levels of IPV, which decreased to 20%, 30% and 5% respectively by 2 
years postpartum.Importantly, this group was 10% of the study sample and so represents a large proportion of pregnant women in the population.This decrease may be due to mothers exiting violent relationships after birth of their child.Alternatively this pattern could represent a high risk for IPV during pregnancy associated with conflict related to financial strain or an unexpected pregnancy which dissipates with time.For these women, though the risk decreases postnatally, there are significant health risks for their child linked to in utero exposure.Importantly, screening during antenatal care provides a critical opportunity to intervene.In the combined IPV class analysis, similar to the analysis by IPV sub-type, all domains of maternal childhood maltreatment were significantly associated with membership in the high versus low class.However, in looking at the childhood maltreatment patterns predicting membership of moderate vs low combined IPV class, only childhood physical abuse reached significance in our cohort.Where analysis split IPV subtypes for the mothers, all childhood maltreatment groups was associated with moderate emotional IPV class membership and only the no childhood maltreatment group was associated with moderate physical IPV class membership.This suggests a differential relationship between maternal childhood maltreatment and later emotional IPV, an association which may be lost when IPV sub-types are combined.In combined IPV models, emotional neglect in childhood was associated with membership in the decreasing IPV class, other childhood maltreatment variables were not.This group shows very high probabilities during pregnancy, a critical time for both maternal and child health.Further work is needed to better understand the increased risk for IPV during the antenatal period as well as factors which support the decrease in risk postnatally, patterns that may be due to other personal, community or family-level factors and which may provide useful targets for intervention strategies.Importantly, poly-victimisation was prevalent in the current study; 38% of women were exposed to two or more types of maltreatment in childhood and 27% were exposed to two or more types of IPV.Further, the combined IPV sub-type LCA analysis, underscores the prevalence of polyvictimisation across IPV sub-type, including the frequency and intensity of exposures, which were largely stable across sub-types.There is a dearth of evidence from LMIC settings, where unique cultural and social factors may impact associations differently than in high-income country settings.This may be particularly relevant in environments such as South Africa, with high rates of
There has been little examination of profiles of IPV and early life adversity in LMIC contexts. Moderate and high classes for all IPV sub-types and the combined analysis showed stable intensity profiles. Maternal childhood sexual abuse, physical abuse and neglect, and emotional abuse predicted membership in high IPV classes, across all domains of IPV (aORs between 1.99 and 5.86). Maternal maltreatment in childhood was associated with increased probability of experiencing high or moderate intensity IPV during and around pregnancy; emotional neglect was associated with the decreasing IPV class in the combined model.
child maltreatment as well as IPV exist.High levels of violence in South Africa generally may facilitate a culture where IPV is normalised, thereby increasing the risk of cumulative or long-term exposure as women remain in problematic relationships.The relatively low levels of education and economic power of women in South Africa may further exacerbate the vulnerability of mothers to IPV.Given the negative physical and mental health outcomes associated with IPV exposure, for both mothers and children, it is essential that we understand the specific processes via which early adversity confers risk, and consider potential intervention targets and optimal time points to intervene.Guidelines on care of women exposed to domestic violence in South Africa and the Western Cape are available, however, existing guidelines are not universally adopted and the implementation is often lacking in continuity and poorly coordinated.More effective procedures are in place for sexual abuse than for physical or emotional IPV which may be more difficult to recognise.There is much work that is required to ensure that women in need are identified and supported in accessing care or resources.Previous studies have found that healthcare providers may resist identifying and managing IPV as a health issue, especially given its complexity and the need for long-term support.Low levels of IPV identification have been noted, even though women routinely present with physical and mental symptoms indicative of IPV exposure, including injury, anxiety and depression.One study found that only 10% of women presenting at primary care facilities while suffering from IPV were identified as such.IPV is a complex issue, requiring a multi-sectorial approach to address risk factors and provide support to women affected.Nonetheless, the majority of pregnant women in South Africa present to primary health facilities for antenatal care, offering a critical opportunity for identification and referral.The present study offers insight into patterns of IPV exposure during and following pregnancy as well as key risk factors, which may be possible to adapt for screening purposes in primary healthcare facilities.In low resource settings, it may be necessary to triage resources to identify the most critical cases.The longitudinal profiles described in the current research indicate that many women are at sustained risk of exposure during and following pregnancy, with a key group emerging that is at increased antenatal risk.This highlights the need for a universal screening programme at the primary healthcare level, with a priority focus given to pregnant mothers.Although inclusion criteria for the DCHS were broad to ensure generalisability, recruitment was done during antenatal care visits; mothers who did not present for antenatal care were therefore not captured, which may have resulted in the highest risk mothers being underrepresented in the sample.There may be some bias present due to the timing of assessments, which had shorter intervals between the 1st/2nd and 2nd/3rd study visits compared to subsequent 6-monthly visits.This may have affected the patterns of IPV reported across time.In addition, 205 mothers who were enrolled in the study were not included in the analysis due to incomplete data, although there was no evidence that these excluded mothers differed from the wider sample on key risk factors.Finally, due to collinearity of types of childhood maltreatment, each was included in a separate model, we were therefore unable to 
determine additive risk for a particular type of childhood maltreatment.Despite these limitations, the current study is among the first to investigate associations between sub-types of maternal maltreatment in childhood and adult IPV exposure in a LMIC.A key strength includes longitudinal assessment of IPV exposure.Our findings corroborate previous research and build upon it by investigating sub-types of IPV during pregnancy including investigating intensity of exposure.Two key groups emerged in our research – one at sustained risk of high or moderate IPV during pregnancy and postpartum; the second exhibited increased probability of IPV exposure during pregnancy compared to postpartum.The majority of women in South Africa access antenatal care and both of these groups would benefit from targeted screening and referral, particularly during this sensitive period for both maternal and child health.Collaborations for the analysis of data are welcome; the DCHS has a large and active group of investigators and postgraduate students and many have successfully partnered with students or researchers from other institutions.Researchers who are interested in datasets or collaborations can find more information on our website .The study was funded by the Bill and Melinda Gates Foundation .Additional support for HJZ and DJS by the MRC of South Africa.Additional aspects of the work reported here are supported by the South African NRF and MRC, by an Academy of Medical Sciences Newton Advanced Fellowship, funded by the UK Government’s Newton Fund and by the US Brain and Behaviour Foundation Independent Investigator grant.WB is supported by the SAMRC National Health Scholars programme.AF is supported by a personal fellowship from the UK MRC and works in a Unit that receives core funding from UK MRC .
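Returning to the exposure-scoring rule described in the methods above (no IPV if every past-year behaviour was "never"; isolated/low if nothing occurred more than once; moderate if any behaviour was experienced "a few times"; high if any was experienced "many times"), one reading of that rule can be sketched in a few lines of Python. The numeric response codes are an assumption introduced for illustration only.

```python
# Minimal sketch of the past-year IPV categorisation rule described above.
# Response codes are an illustrative assumption: 0 = never, 1 = once,
# 2 = a few times, 3 = many times (one code per questionnaire behaviour item).

def categorise_ipv(item_responses):
    """Map a list of past-year frequency codes to an exposure category."""
    if all(r == 0 for r in item_responses):
        return "no IPV"
    if max(item_responses) == 1:
        return "isolated/low"      # something happened, but never more than once
    if max(item_responses) == 2:
        return "moderate"          # at least one behaviour 'a few times'
    return "high"                  # at least one behaviour 'many times'

# Example: one behaviour experienced a few times, the rest never
print(categorise_ipv([0, 0, 2, 0]))  # -> "moderate"
```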
IPV is particularly problematic during the perinatal and early postnatal period, where it is linked with negative maternal and child health outcomes. We aimed to characterize longitudinal IPV and to investigate maternal maltreatment in childhood as a predictor of IPV exposure during pregnancy and postnatally in a low-resource setting. We observed high levels of maternal maltreatment during childhood (34%) and IPV during pregnancy (33%). Intervening early to disrupt this cycle of abuse is critical for two generations.
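The association step described above used the bias-adjusted 3-step approach in Mplus, which corrects for inaccuracy of class assignment. As a rough, non-equivalent sketch of the underlying regression, a plain multinomial logistic model of assigned class membership on a childhood-maltreatment indicator plus covariates could look as follows; the column names and data are hypothetical, and classification uncertainty is ignored here, unlike in the 3-step approach.

```python
# Rough sketch only: a plain multinomial logistic regression of assigned IPV
# class on childhood maltreatment and covariates. The study itself used the
# bias-adjusted 3-step approach in Mplus, which additionally corrects for
# inaccuracy of class assignment; this sketch does not.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis dataframe: one row per mother
# ipv_class: 0 = no/low (reference), 1 = moderate, 2 = high
df = pd.DataFrame({
    "ipv_class":      np.random.choice([0, 1, 2], size=500, p=[0.7, 0.2, 0.1]),
    "child_physical": np.random.binomial(1, 0.3, size=500),   # CTQ above threshold
    "age":            np.random.normal(26, 5, size=500),
    "employed":       np.random.binomial(1, 0.25, size=500),
})

X = sm.add_constant(df[["child_physical", "age", "employed"]])
model = sm.MNLogit(df["ipv_class"], X).fit(disp=False)

# Exponentiated coefficients approximate odds ratios vs the reference class
print(np.exp(model.params))
```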
results of this preliminary investigation indicate that our dual modality technique could be an effective tool for assessing the effects of antiangiogenic drugs on zebrafish larvae.In the future we plan to assess the impact of higher I3M concentrations on both the anatomy and vasculature of developing wild type zebrafish, as well as investigate the effects of other drugs such as statins, which may be involved in vascular stability and intracerebral hemorrhages .The PAR images made it possible to identify anatomical structures including the myotomes, notochord, and yolk; while the PA images made it possible to visualize the head and trunk vasculature with single cell resolution.While this represents the first time that simultaneous images of this nature have been acquired using only OR-PAM, there are some inherent limitations to our technique.First, as with all UHF studies, acoustic attenuation limits the depth at which high SNR signals can be acquired .This restricts the described dual modality technique to whole-body imaging of larval fish, or to regions less than 100 μm deep at higher frequencies in mature fish.Another limitation of the current system is the scanning time.A 160 × 160 point raster scan currently takes approximately 30 min; however, we plan to reduce this scan time by incorporating a laser with a higher PRF.In summary, we have demonstrated a simultaneous label-free technique for imaging the anatomy and vasculature of transgenic and mutant casper zebrafish larvae in vivo with single cell resolution.We believe that our technique will find applications in studying gold nanoparticle aggregation in tissue, and the progression of cancer metastasis.MCK and MJM have financial interests in Echofos Medical Inc., which, however, did not support this work.The remaining authors declare no competing financial interests.
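As a back-of-the-envelope illustration of why a higher pulse-repetition-frequency (PRF) laser shortens acquisition: the quoted 160 × 160 raster in roughly 30 min corresponds to about 14 points per second. The sketch below assumes, purely for illustration, that each pixel needs a fixed number of pulses and that stage-movement overhead is negligible, so scan time scales inversely with PRF; the example PRF and averaging values are hypothetical.

```python
# Rough scan-time estimate. Assumes acquisition is limited only by the laser
# PRF (a fixed number of pulses per pixel) and ignores stage-movement overhead.
points = 160 * 160                      # raster positions quoted above
current_time_s = 30 * 60                # ~30 min reported
print(points / current_time_s)          # ~14 points/s effective rate

def scan_time_minutes(prf_hz, pulses_per_point=1, n_points=points):
    """Idealised acquisition time if throughput is set purely by the PRF."""
    return n_points * pulses_per_point / prf_hz / 60

# e.g. a hypothetical 4 kHz PRF with 16 averaged pulses per pixel
print(scan_time_minutes(prf_hz=4000, pulses_per_point=16))  # ~1.7 min
```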
With their optically transparent appearance, zebrafish larvae are readily imaged with optical-resolution photoacoustic (PA) microscopy (OR-PAM). Previous OR-PAM studies have mapped endogenous chromophores (e.g. melanin and hemoglobin) within larvae; however, anatomical features cannot be imaged with OR-PAM alone due to insufficient optical absorption. We have previously reported on the photoacoustic radiometry (PAR) technique, which can be used simultaneously with OR-PAM to generate images dependent upon the optical attenuation properties of a sample. Here we demonstrate application of the duplex PAR/PA technique for label-free imaging of the anatomy and vasculature of zebrafish larvae in vivo at 200 and 400 MHz ultrasound detection frequencies. We then use the technique to assess the effects of anti-angiogenic drugs on the development of the larval vasculature. Our results demonstrate the effectiveness of simultaneous PAR/PA for acquiring anatomical images of optically transparent samples in vivo, and its potential applications in assessing drug efficacy and embryonic development.
with a trehalose wash and drying as described previously. Nuclei were selected using Cellomics software from blue channel images to create a mask, which was used for identification of fluorescently labeled EdU in the green channel for quantification of actively proliferating cells over the 24-hr EdU exposure. Data were fit to a sigmoidal dose-response model using GraphPad Prism to calculate IC50 values. J.S.D. and D.V.S. conceived the project. J.S.D., D.S.C., and D.V.S. supervised the studies. G.J.N., S.K.M., B.C.P., and J.F.P. designed experiments. G.J.N. performed experiments and analysed data. G.J.N. and J.S.D. wrote the manuscript. All authors critically revised the manuscript.
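The text states that dose-response data were fit to a sigmoidal model in GraphPad Prism to obtain IC50 values. A comparable fit in Python, shown here as a sketch with invented concentrations and responses rather than the study's measurements, is a four-parameter logistic curve:

```python
# Sketch of a four-parameter logistic (sigmoidal) dose-response fit to obtain
# an IC50, analogous in spirit to the GraphPad Prism analysis described above.
# The concentrations and responses below are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30])   # µM, hypothetical
resp = np.array([98, 95, 90, 75, 50, 25, 10, 5])         # % proliferating cells

params, _ = curve_fit(four_pl, conc, resp, p0=[0, 100, 1.0, 1.0])
bottom, top, ic50, hill = params
print(f"IC50 ≈ {ic50:.2f} µM")
```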
A 3D cell culture chip was used for high-throughput screening of a human neural progenitor cell line. The differential toxicity of 24 compounds was determined on undifferentiated and differentiating NPCs. Five compounds led to significant differences in IC50 values between undifferentiated and differentiating cultures. This platform has potential use in phenotypic screening to elucidate molecular toxicology on human stem cells.
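The nuclear-mask approach described above (select nuclei from the blue channel, then count EdU-positive nuclei in the green channel) was carried out with Cellomics software. A generic open-source sketch of the same idea, using scikit-image and illustrative thresholding choices that are not the study's settings, might look like this:

```python
# Generic sketch of the nuclei-mask / EdU-counting idea described above, using
# scikit-image rather than the Cellomics software actually used in the study.
# Thresholding choices are illustrative assumptions, not the study's settings.
import numpy as np
from skimage import filters, measure, morphology

def count_proliferating(blue, green):
    """blue, green: 2-D grayscale channel images of the same field."""
    # 1) Build a nuclear mask from the blue (nuclear stain) channel
    nuc_mask = blue > filters.threshold_otsu(blue)
    nuc_mask = morphology.remove_small_objects(nuc_mask, min_size=50)
    labels = measure.label(nuc_mask)

    # 2) Call a nucleus EdU-positive if its mean green intensity exceeds a cutoff
    green_cutoff = filters.threshold_otsu(green)
    props = measure.regionprops(labels, intensity_image=green)
    edu_positive = sum(p.mean_intensity > green_cutoff for p in props)
    return edu_positive, labels.max()   # (EdU+ nuclei, total nuclei)

# Example with random arrays, purely to show the call signature
rng = np.random.default_rng(0)
blue = rng.random((256, 256)); green = rng.random((256, 256))
print(count_proliferating(blue, green))
```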
due to the sparse and interleaved arrangement of neurons responding to CS+ and CS-.Multivariate fMRI studies from different laboratories and with two different approaches have provided evidence that CS+ and CS- are encoded differently and that such threat encoding increases over time.In line with these findings, we observed that both complex and simple CS+/CS- were represented by distinct patterns.Interestingly, this pattern difference was to some extent shared by simple and complex sounds.Complex neutral sounds were not differentially encoded in the amygdala while simple neutral sound pairs showed distinct patterns as well.This lead to a significant CS decoding effect over and above NS only for complex sounds.However, threat encoding is a well-established phenomenon in the amygdala.The non-significant difference between simple CS and simple NS may thus imply that classification performance stemming from encoding of stimulus features and from threat encoding is subadditive in the amygdala.Such subadditivity could also apply to HG where the difference between CS and NS was at least descriptively dominated by decoding differences for complex sounds.Notably, neither amygdala nor HG appears to distinguish neutral complex sounds; yet can apparently encode threat predictions.Thus, it appears that the common representation of CS+/CS- in these areas across complexities is independent from the acoustic features of the sounds and instead is associated with propagation of threat-information to the extended fear-learning network; the ensuing associations may well be created in higher auditory or polymodal regions.Stimulus-independent threat predictions are in line with our initial hypothesis 2 and constrain possible models of amygdala/ACX interactions.The current study could not provide insights into the differential roles of amygdala and ACX in forming threat associations.The relatively low number of trials per condition precluded analysing the trajectory of threat predictions in these areas.More specifically tuned experimental designs might shed light on this question, and electrophysiological methods could help elucidate the intra-trial communication of threat predictions across areas.In the current study, we focused on threat predictions; however, we cannot disentangle whether our findings are specific to this situation or would also occur for other salient stimuli, or even associative learning of non-salient events.Previous work has highlighted early primary sensory cortex responses to reward predictors, and these representations have not directly been compared to threat predictors.Since we excluded all US trials from our analysis, and given the low time resolution of fMRI, we cannot exclude that CS offset responses contribute to our findings.However, a differential offset response to CS- and CS+ must be the consequence of threat predictions such that even in this case, our results highlight ACX threat predictions.In summary, we demonstrate a novel pattern of CS-induced threat encoding in HG and higher ACX.HG encoding is similar for simple and complex sounds, making an origin in top-down or post-learning selective attention less likely.Rodent research has suggested that a direct path from thalamus to amygdala, bypassing ACX, is sufficient for acquiring a threat association if sounds are composed of single sine tones, but that A1 is required for complex sounds.Our results indicate that in both cases, threat information from CS is encoded in HG and higher ACX.Our findings strengthen a network 
perspective on fear acquisition including sensory cortices in humans, encouraging the use of multivariate methods to discover the role of key brain areas in learning environments and shed new light on early processing stages associated with memory formation.This work was funded by the Swiss National Science Foundation .The Wellcome Trust Centre for Neuroimaging is supported by a core grant from the Wellcome Trust .
Learning to predict threat depends on amygdala plasticity and does not require auditory cortex (ACX) when threat predictors (conditioned stimuli, CS) are simple sine tones. However, ACX is required in rodents to learn from some naturally occurring CS. Yet, the precise function of ACX, and whether it differs for different CS types, is unknown. As in previous rodent work, CS+ and CS- were defined either by direction of frequency modulation (complex) or by frequency of pure tones (simple). In an instructed non-reinforcement context, different sets of simple and complex sounds were always presented without reinforcement (neutral sounds, NS). Threat encoding was measured by separation of fMRI response patterns induced by CS+/CS-, or similar NS1/NS2 pairs. We found that fMRI patterns in Heschl's gyrus encoded threat prediction over and above encoding the physical stimulus features also present in NS, i.e. This was the case both for simple and complex CS. Furthermore, cross-prediction demonstrated that threat representations were similar for simple and complex CS, and thus unlikely to emerge from stimulus-specific top-down, or learning-induced, receptive field plasticity. Searchlight analysis across the entire ACX demonstrated further threat representations in a region including BA22 and BA42. However, in this region, patterns were distinct for simple and complex sounds, and could thus potentially arise from receptive field plasticity. Strikingly, across participants, individual size of Heschl's gyrus predicted strength of fear learning for complex sounds. Overall, our findings suggest that ACX represents threat predictions, and that Heschl's gyrus contains a threat representation that is invariant across physical stimulus categories.
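As a schematic of the kind of multivariate decoding reported here (classifying CS+ vs CS- from voxel patterns, plus cross-prediction across simple and complex sounds), and not the authors' actual analysis pipeline, a cross-validated linear classifier in scikit-learn could be set up as follows; all arrays below are random placeholders.

```python
# Schematic of CS+/CS- pattern decoding with cross-validation, plus a
# cross-prediction step (train on complex-sound trials, test on simple-sound
# trials). This is a generic sketch, not the pipeline used in the paper.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 300                             # placeholder dimensions
X_complex = rng.standard_normal((n_trials, n_voxels))    # voxel patterns, complex CS
y_complex = rng.integers(0, 2, n_trials)                 # 0 = CS-, 1 = CS+
X_simple  = rng.standard_normal((n_trials, n_voxels))
y_simple  = rng.integers(0, 2, n_trials)

clf = LinearSVC(max_iter=5000)

# Within-category decoding accuracy (chance = 0.5)
acc_complex = cross_val_score(clf, X_complex, y_complex, cv=5).mean()

# Cross-prediction: train on complex sounds, test on simple sounds
clf.fit(X_complex, y_complex)
acc_cross = clf.score(X_simple, y_simple)
print(acc_complex, acc_cross)
```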
of 274 kJ/100 g for lucerne was reported, which is in agreement with the results of this study. The three raw lucerne cultivars had significantly higher protein contents than raw SB. The protein content of the raw lucerne cultivars did not differ significantly between each other, which was in agreement with previous results. The cooked ‘WL711’, however, had a significantly higher protein content than cooked ‘SAS’, ‘WL525’ and SB. Cooked ‘SAS’ and ‘WL525’ had significantly higher protein contents than cooked SB. These results were in agreement with previous results, which reported green forage of lucerne to contain 5.2 g protein/100 g. Researchers found that the protein content of green leafy vegetables ranged from 1 to 7%. The protein content of the three lucerne cultivars in this study was satisfactory and compared well with that of SB. In Table 5, no significant differences were found for the AAs of all three lucerne cultivars, except for the aspartic acid content, which was significantly higher for raw ‘SAS’. ‘SAS’ indicated the highest levels of all AAs evaluated in this study. Comparing the AA content of the three lucerne cultivars to the content of SB, the lucerne cultivars had higher AA contents for all the essential AAs, and the non-essential Arg and Thr. These results were in agreement with previous results. Amino acid scoring provides a way to predict how efficiently a protein will meet a person's AA needs. A reference AA scoring pattern is used, which expresses the AA requirements in mg/g of dietary protein or as percentages in an “ideal” protein. Between the lucerne cultivars, ‘SAS’ indicated the highest AAS for all the AAs. Therefore, consumers will have to consume more of ‘WL525’, ‘WL711’ and SB to receive the same amount of AAs as from ‘SAS’. The three lucerne cultivars had higher AAS for all the AAs than the SB. The LAA for ‘SA Standard’ was Leu. The LAAs for ‘WL525’ were Leu and Met + Cys, while Met + Cys were the LAA for ‘WL711’. Lys was the LAA for SB. This is in agreement with previous results, where low levels of Cys, His and Met were found in lucerne cultivars. Lucerne is classified as a legume, which is deficient in Met and Cys. Therefore, it can be concluded that protein complementation will be necessary to ensure a more complete protein. Protein complementation is when plant protein sources are combined to achieve a better AA balance than either would have alone. It also increases the overall quality of the protein that is consumed. For example, lucerne is low in Met and Cys, while grains like wheat contain high amounts of Met and Cys. In this paper the chemical analysis of three lucerne cultivars and one SB cultivar was performed, to determine their potential as an alternative green leafy vegetable for human consumption. Lucerne, which is mainly used for animal feed, has not recently been investigated as an alternative protein source for human consumption in SA. With the evaluation of the soil for lucerne establishment, it was found that enough minerals were available for sustainable lucerne production. It was concluded that lucerne cultivar ‘SAS’ can be used for future studies, which could lead the way for the development of novel foods, as it indicated: a good Brix value that was significantly higher than that of SB; the highest values for 10 out of the 12 chemical tests, followed by ‘WL711’ and ‘WL525’; a positive gain for seven out of 10 minerals after cooking; a high protein, DM, carbohydrate and energy content; and the highest level of AAs of the three lucerne cultivars. Based on these findings, the chemical
composition of lucerne compared well with the properties of SB, making it desirable in terms of nutrition, and lucerne could be used as a potential vegetable for human consumption in developing countries.
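The amino-acid-score (AAS) calculation referred to above divides each amino acid's content (mg per g of protein) by the corresponding requirement in a reference scoring pattern; the lowest ratio identifies the limiting amino acid (LAA). The small sketch below uses invented placeholder numbers, not the values measured for these cultivars or for spinach beet.

```python
# Sketch of the amino acid score (AAS) calculation described above.
# Contents and reference-pattern values are illustrative placeholders,
# not the measured values for the lucerne cultivars or spinach beet.

food_mg_per_g_protein = {        # hypothetical amino acid contents
    "Lys": 58, "Leu": 55, "Met+Cys": 18, "Thr": 40,
}
reference_pattern = {            # hypothetical reference scoring pattern
    "Lys": 45, "Leu": 59, "Met+Cys": 22, "Thr": 23,
}

aas = {aa: food_mg_per_g_protein[aa] / reference_pattern[aa]
       for aa in reference_pattern}

limiting_aa = min(aas, key=aas.get)     # lowest ratio = limiting amino acid
print({aa: round(score, 2) for aa, score in aas.items()})
print("Limiting amino acid:", limiting_aa)   # -> "Met+Cys" with these numbers
```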
The chemical composition of three lucerne (Medicago sativa L.) cultivars (‘SA Standard’, ‘WL711’ and ‘WL525’) was compared to spinach beet (Beta vulgaris var. cicla L.), a familiar leafy vegetable (as control), in order to establish its potential as an alternative green leafy vegetable for human consumption. The cooked ‘SA Standard’ and ‘WL525’ lucerne cultivars had significantly (p < 0.001) higher protein contents than cooked spinach beet. Cooked lucerne cultivar ‘SA Standard’ had a significantly (p < 0.001) higher energy content than cooked lucerne cultivar ‘WL525’ and spinach beet. Based on these findings, the chemical composition of lucerne compared well with the properties of spinach beet, making it desirable in terms of nutrition, and lucerne could be used as a potential vegetable for human consumption.
The current proliferation of camps globally has attracted increasing attention among scholars, including geographers, who have interrogated their diffusion and governance, as well as the everyday practices of the people living in these socio-political spatial formations.In addition to refugee camps and immigration detention centers, new hotspots, asylum seekers centers, and migrant identification facilities are quickly mushrooming as a response to the so-called European ‘migration crisis’."This growth manifests an alarming phenomenon of burgeoning marginalization, and shows how the concept of ‘camp’ and what Minca has on the pages of this journal described as “camp studies” are today increasingly crucial to grapple with current social changes in the world's geographies of exclusion and inclusion. "This article arises from Minca's call for “spatial theories that might help us understand the actual workings of the camp” – also echoed by Davies and Isakjee – and aims to contribute to the analysis of camp governance.In so doing, the question addressed in this paper is: how can we conceptualize sovereignty in institutional camps?,Drawing on scholarly work that suggests seeing camp sovereignty as plural and hybrid, I will focus on the contentious nature of camp sovereignties.The perspective put forward in this article foregrounds the interaction between state and non-state actors governing the camp and the dynamic nature of their relationships, which constantly change over time, fluctuating between conflict and cooperation.I will do so by using the analytical tools developed by McAdam, Tarrow, and Tilly in their theory of ‘dynamics of contention’, which focuses on framing strategies, political opportunities, resources and repertoires of action as key aspects in the interaction between the actors involved in the camp.Overall, through this article I intend to show the usefulness of this framework in the analysis of camp governance, not only because it underscores multiplicity but because it also emphasizes a temporal perspective, deepening the understanding of the historical evolution of camp sovereignties.In the first section of the article I will examine the meaning of “institutional camp” to specify the scope of my argument.I will then consider the literature on camp governance, focusing on how the political authority over institutional camps has been conceptualized.After reviewing Agamben-inspired works, which stress the role of the sovereign state decision in the creation of camps, and those that draw on the Foucauldian notion of governmentality, I will discuss the current understanding of camp sovereignty, which scholars have recently suggested to see as layered, multiple, and hybrid.By building on these debates, in the second section I will expose the theory of political contention advanced by McAdam et al. and suggest construing camp sovereignty as contentious, i.e. 
inherently constituted by ever-evolving power relations among claim-makers whose frames, opportunities, resources, and repertoires change over time.To show the benefits of such perspective, the third and fourth sections of the paper present an analysis of the Italian Roma camps, which shows how the sovereignty over the camp is not only fragmented into a multiplicity of actors but is also the result of constant conflict, compromise, and co-optation.The data presented in this paper have been collected in Rome from September to December 2013.During the fieldwork, I conducted 60 in-depth interviews and informal conversations with a variety of actors participating in the governance of the Roma camps, i.e. policymakers, politicians, members of subcontracting associations, advocacy volunteers and activists of social movements.In addition to this, I analyzed 22 policy documents, including local ordinances, council deliberations, policy guidelines, documents of the local police, regional and national legal texts, and policy reports.Through the interviews and conversations, I identified the conflicting views and claims made by different actors, how these were framed, the resources mobilized, and the opportunities and repertoires of action.The analysis of the policy documents enabled me to trace the historical development of the Italian Roma camps, with a specific focus on their definitions, objectives, and target population, which provided an understanding of the context within which the actors involved in the camp governance operate.As I will show in the article, these interviews and documents clearly highlight the complex and contentious nature of sovereignty over institutional camps.Scholars working on the camp have highlighted the multi-faceted dimension of this spatial formation, which includes camps for refugees, semi-carceral institutions, like migration detention centers and EU hotspots, spaces of transit and of sanctuary, protest camps and, some argues, gated communities.For this reason, as Hailey points out, “efining the camp is a central problem of our contemporary moment”.Broadly speaking, a camp can be defined as a temporary confined space, characterized by an exceptional and ambiguous status between exclusion and protection.Camps differ, however, in a series of other aspects.For example, while migration detention centers can be regarded as a form of forced segregation, gated communities are usually seen as a case of self-segregation.Secondly, those living in sanctuary spaces or gated communities are represented as needing protection, whereas those in identification and removal centers are seen as a potential threat to the nation state order.Finally, despite their official temporariness, camps have different durations.While refugee camps often persist and become a temporal limbo of governmental inertia, “autonomous camps”, such as informal settlements or protest camps, fight for extending their duration.This article is concerned with one specific set of camps: institutional camps which are officially created and managed by governmental agencies in alleged emergency situations and which forcibly segregate stigmatized subjects for a protracted period of time.As observed by Minca, there is a difference between “state-enforced camps” and “counter-camps”.Drawing on this distinction, this
The global proliferation of camps manifests an alarming phenomenon of burgeoning marginalization, and shows that the concept of ‘camp’ is today increasingly crucial to grapple with current changes in the world's geographies of exclusion and inclusion. Specifically, this article focuses on ‘institutional camps’, i.e. The article contributes to this perspective by drawing on the theory of ‘contentious politics’ advanced by McAdam, Tarrow, and Tilly (2001). Through this analytical framework, I suggest construing camp sovereignties as contentious, i.e. In order to show the benefits of such an approach, the paper focuses on the empirical case of the Italian Roma camps in Rome, through which I show that camp sovereignty is not only fragmented into a multiplicity of actors but is also the result of conflict, compromise, negotiation, and co-optation among actors whose frames, opportunities, resources, and repertoires constantly change over time.
as opportunities, the Roma who joined political squats have used the city as a space of politicization, allying themselves with the urban social movements.Through these new opportunities, frames, resources and repertoires, the urban squatting movement and the Roma managed to influence the deployment of the camp as a technology to govern Roma slum dwellers.These examples show how the positioning of the actors involved in the Roma camp governance has changed in the last three decades.In order to understand how the Roma camps are governed, we need not only to understand how the political authority over these camps is fragmented between a plurality of both state and non-state actors, but also how it is the product of contention.The creation of the Roma camps in the 1990s, though seemingly produced as a decision of the executive powers in a situation of emergency, is in fact the outcome of compromise and negotiation among conflicting parties advancing very different claims and changing their positions over time.Indeed, the protracted presence of these camps is not the product of a prolonged state of exception decided by the state but the result of changed opportunities, frames and resources mobilized by the actors involved in the governance of these spaces.Likewise, the recent strategies of action which are redrawing the use of the camp emerged within a reconfiguration of repertoires, frames and resources enabled by new opportunity windows.Even though the creation of institutional camps in times of emergency is the result of government resolutions, there are a plethora of non-state actors that participate in the camp governance.While Agamben-inspired research on the camp stresses the role of the executive authority in the formation of this space of exception, scholars drawing on the Foucauldian notion of governmentality foreground the plurality of governing practices and actors shaping the camp.Also the critics of an Agambenian approach argue that the sovereignty over the camp, far from being an indivisible entity, is layered, multiple and hybrid.With this article, I have contributed to this debate and shown that camp sovereignty is not only multiple and heterogeneous but also inherently contentious, i.e. constituted by conflicting and changeable interactions.To consider how hybrid sovereign assemblages evolve can allow us to comprehend how camps persist or change over time.In order to examine the interactions and conflicts between these actors, I have suggested to turn to the analytical tools developed by McAdam et al. 
in their theory on dynamics of contention.This is indeed useful to unpick the elements characterizing the conflicting relationships around the Roma camp, namely the framing strategies developed by actors, the opportunities they use, the resources they mobilize and the repertoires they adopt.Through this theoretical framework, I have analyzed the governance of the Roma camps in Rome and I suggest that a similar analysis could be fruitful to understand the sovereignty of other institutional, or “necessity”, camps.I have illustrated how considering all the elements of contentious politics allows us to trace the historical development of the power relations between state and non-state actors, as well as their fluctuation between conflict, negotiation, compromise, and co-optation.Indeed, while in the early 1990s non-state actors managed to influence the government decision to create the Roma camps, the later incorporation into institutional governance of several pro-Roma associations produced a change in their framing discourses as well as socio-organizational and financial resources, which resulted in a weakening of their demands and in the persistence of the Roma segregation in camps.However, a recent shift in repertoires of action in the urban context, and the opportunities unexpectedly offered by the recent economic crisis, enabled the social movements to strengthen their claims to Roma housing inclusion through new frames, resources and repertoires.To conclude, this paper has addressed the question of camp sovereignty by conceptually foregrounding the conflict between a multiplicity of actors and their changeable positions.Brown has highlighted the ambiguity and paradoxes of the notion of sovereignty in Western political philosophy and argued that sovereignty is “both generated and generative, yet it is also ontologically a priori, presupposed, original”.In order to embrace and fully grasp the contradictory nature of sovereignty, we should not only focus on what makes it multiple, as opposed to unitary and autonomous, but also on what makes it inconsistent and fluctuating, as opposed to fixed and temporally absolute.
created by government agencies in alleged emergency situations, and aims to conceptualize sovereignty over this type of camp. After critically reviewing the ongoing scholarly debate on camp sovereignty, I situate my approach within the work of scholars who see political authority over the camp as comprising a multiplicity of both state and non-state actors. inherently constituted by conflicting and ever-evolving power relations that change according to framing strategies, political opportunities, resources and repertoires of action.
Multiparticulate formulations, in the form of pellets, granules or beads, offer a range of advantages over conventional tablets and capsules, such as ease of swallowing, flexible dose titration, and suitability for taste-masking and controlled-release.They are considered a flexible solid dosage form for the delivery of drugs to a broad range of patients, including paediatrics, geriatrics and patients with swallowing difficulties.However, previous studies suggest that grittiness and rough mouthfeel perception might be a barrier to palatability and patient acceptability.A potential solution to overcome palatability and acceptability issues could be co-administration with a suitable vehicle, which could help to conceal the presence of multiparticulates.Some drug products that contain multiparticulates within a capsule or sachet indicate in the labelling that the internal beads can be sprinkled on soft foods for their administration.Commercial examples include Depakote® sprinkle capsules, Creon® capsules, Granupas® gastro-resistant granules in sachets and Cipla’s lopinavir/ritonavir pellets in capsules.Typical vehicles recommended for products labelled for sprinkle include apple sauce and yogurt that provide both flavour and viscosity to facilitate administration and improve patient acceptability.Studies comparing sprinkle formulations to liquid dosage forms in children have commonly showed preference for the solid over the liquid form.A range of studies showed acceptance of iron supplement sprinkles and preference for these compared to oral drops in children with anaemia.Likewise, studies involving children suffering from epilepsy consistently showed preference for sprinkle formulations over syrup.In a recent study, children showed preference for sprinkles over syrup after 12-week antiretroviral therapy; 72% of children below 12 months and 64% of children between 1 and 4 years preferred the multiparticulate dosage form.For those preferring syrups, key issues with sprinkles were problems masking the pellets with food and food refusal, and concerns about not giving the whole dose.However, multiparticulates overcome the storing and transporting issues of syrups.Medicines are known to be mixed with liquid and semi-solid foodstuff to allow administration in clinical practice.However, this is not without the risks that mixing medication with foodstuff could alter bioavailability and introduce poor control over dose intake.Therefore, when co-administration of medicines and foodstuff is recommended in the package leaflet, the potential impact on patient acceptability, dosing accuracy, compatibility and drug bioavailability of the proposed vehicle must be investigated.This task becomes very impractical when medicines can be mixed with a range of foods with different composition and physical properties.The development of a standard pharmaceutical vehicle, rationally designed and fit for purpose, could facilitate such investigations.This vehicle could be manufactured by industry and provided with the finished drug product or it could be compounded in a pharmacy.Investigation into swallowing aids for the administration of oral solid formulations have been the focus of previous research, with some products already in the market in the form of sprays, pastes or jellies.The rheological properties of the administration media require special consideration.Thick fluids are known to exhibit prolonged oral transit times than thinner fluids, which can be used in the management of patient with 
dysphagia.Conversely, greater efforts are required to manipulate in the mouth and swallow thick fluids compared to thin ones.Consequently, overly thick liquids may increase the risk of post-swallow residue in the mouth and pharynx, especially for patients with reduced tongue or pharyngeal muscle strength.Newtonian fluids have also been reported to require greater effort in oral processing and swallowing than shear thinning fluids.In addition, the rheology of the vehicle also affects its palatability, with organoleptic attributes often reported to worsen as the consistency of the fluid increases.Therefore, it is imperative that the rheological and sensory properties of the vehicle are adequately characterised.The use of a suitable vehicle could facilitate administration of multiparticulates by preventing fast sedimentation, improving palatability and reducing the risk of aspiration or choking often associated with this dosage form design.Development of media for the administration of multiparticulates and evaluation of its effect on palatability has been the focus of two recent studies, both of which concluded that the use of polymeric hydrogels as administration media could help conceal the presence of particles, reducing oral grittiness perception.However, both studies focussed on a swirl and spit methodology, which overlooks other important sample attributes such as ease of swallowing.Moreover, neither of those studies investigated the palatability of the liquid vehicles alone, which limited their understanding on the effect of the palatability of the vehicle on the overall palatability and acceptability of the final formulations.The aim of this work was to develop liquid vehicles for the administration of multiparticulates and to investigate the effect of the administration media properties on palatability and ease of swallowing of multiparticulate formulations.Administration media were developed using xanthan gum and carboxymethyl cellulose as model hydrocolloids.The rheological properties of the hydrocolloids were characterised to investigate the effect of consistency and shear thinning behaviour on their performance as administration media.Palatability attributes and ease of swallowing of the vehicles alone and formulations containing model multiparticulates were evaluated in healthy volunteers.Microcrystalline cellulose pellets were provided by Pharmatrans Sanaq.Xanthan gum was supplied by CP Kelco; sodium carboxymethyl-cellulose was provided by Ashland; and vanillin was procured from Sigma-Aldrich.Polymeric hydrogels in the range of 0.15–1.50% were prepared by slow addition of hydrophilic polymer into 100 ml of water under continuous stirring at room temperature.Previous research indicate that some hydrocolloids may impart a noticeable foreign taste or off-flavour to water and other liquid vehicles.For this reason, a small amount of vanillin was added to mask any potential taste and smell of the polymers which could negatively impact results.Samples were left stirring overnight to
Multiparticulate formulations based on pellets, granules or beads could be advantageous for paediatrics, geriatrics and patients with swallowing difficulties. However, these formulations may require suitable administration media to facilitate administration. The aim of this work was to investigate the effect of administration media properties on palatability and ease of swallowing of multiparticulates. A range of vehicles were developed using xanthan gum (XG) and carboxymethyl cellulose (CMC) as model hydrocolloids.
ensure complete polymer hydration and stored in the refrigerator at 5 ± 0.5 °C.Samples were allowed to equilibrate to room temperature before testing.For the sensory evaluation study, the viscosity of the XG and CMC vehicles were targeted to meet the International Dysphagia Diet Standardisation Initiative descriptors.According to this framework, Level 1 fluids are “thicker than water but flow through a teat/nipple and straw”, Level 2 fluids “require effort to drink through a straw and flow quickly off a spoon” and Level 3 fluids are “difficult to suck through a straw and pour slowly off a spoon”.A Bohlin CVO rotational rheometer system was used to investigate the flow properties of the samples using a cone and plate geometry.A shear sweep measurement mode was employed, whereby the shear rate of the sample was advanced across the range of 0.1–200 s−1, with ascendant logarithmic progression.The temperature of the samples was maintained at 25 ± 0.2 °C throughout testing.This procedure was repeated three times for each sample.The ability of the administration media to maintain multiparticulates in suspension was calculated based on sedimentation experiments.A sample containing 500 mg of Cellets and 50 ml of administration media was filled into a 50-ml graduated plastic tube.The tube was turned upside down until homogeneous dispersion of the particles.Then, the sample was left standing and the time taken for Cellets to clarify the top 15 ml of the dispersant was determined.This experiment was adapted from that described by Kluk and co-workers.Experiments were repeated using multiparticulates of two extreme particle sizes, Cellets 200 and Cellets 700, to account for the effect of particle size on sedimentation.The experiment was conducted in triplicate.Thirty healthy adult volunteers were enrolled in a randomised, single-blind, single-centre, 3-treatment, crossover sensory evaluation.The study was approved by UCL Research Ethics Committee.All participants received a detailed information sheet and provided written consent to participate in the study.The study was conducted in three sessions taken place on three separate days.On each day, participants tested samples of liquid vehicles without particles, with 200–355 µm particles and with 700–1000 µm particles.Participants were divided into six groups to ensure that all possible sequence orders between treatments were considered.In each session, participants were handed eight samples, including XG and CMC samples and water as a control, in a randomised order.For samples containing multiparticulates, 250 mg of solid particles were pre-dispersed in the administration media using a spatula immediately before administration.Microcrystalline cellulose pellets used in this investigation were non-disintegrating in water or in the mouth.Samples were provided onto 5-ml plastic medicine spoons that were handed to participants of the study.During the evaluation of the samples, participants had free access to spring water to complete sample intake and clean their palate.To minimise subject discomfort and carryover effect 5–10-minute intervals were respected between samples.A digitalized questionnaire was used for data collection.Immediately after swallowing the sample, volunteers were asked to rate several sample attributes, including appearance, ease of swallowing, mouth-feel and taste, using a 5-point hedonic scale.In addition, the feeling of particles in the mouth during sample intake and the feeling of particles in the mouth after sample intake and after 
rinsing their mouth with water was assessed using a 5-point magnitude scale.Participants could also provide voluntary feedback of each sample attribute using their own words.The different categories of the 5-point scales were assigned numeric scores from lowest to highest stimuli perception, respectively.Statistical analysis was performed using the non-parametric Kruskal-Wallis one-way analysis of variance followed by Dunn’s test as post hoc for pairwise comparison, both with a 95% confidence level.Minitab 17 was used for data analysis.The rheological properties of the administration media require especial attention as these will have an impact on several critical quality attributes, such as suspendability and palatability.XG and CMC were selected as model excipients to develop polymeric hydrogels for the administration of multiparticulates, being Generally Regarded As Safe excipients commonly used in oral formulations.The rheological properties of XG and CMC hydrogels were investigated as a function of the hydrocolloid concentration.The consistency index of XG and CMC-based polymeric hydrogels increased as the hydrocolloid concentration increased, as it can be expected.The increase in viscosity was more pronounced for XG than it was for CMC, revealing the higher ‘thickening power’ of XG.In addition, XG hydrogels exhibited a strong shear thinning behaviour whereas CMC hydrogels showed a much lower degree of shear thinning; the flow behaviour index of XG-based hydrogels was lower than that of CMC-based hydrogels throughout the range of concentrations tested.The contrasting rheological characteristics of XG and CMC made them ideal candidates to investigate the effect of shear thinning behaviour on their performance as administration media for the administration of multiparticulates.Previous research indicate that Newtonian fluids require greater effort in oral processing and swallowing than shear thinning fluids.However, it could be hypothesised that shear thinning fluids would be less effective in ‘masking’ the presence of particles since the viscosity of these fluids will decrease under the relatively high shear rates experienced during oral processing.An ideal vehicle should be able to maintain multiparticulates in suspension from dispersion of the particles in the media until administration.Sedimentation time of multiparticulates in polymeric hydrogels was determined by measuring the time lapse between homogeneous dispersion of multiparticulates and clearance of the top layer of the liquid vehicle.Results of sedimentation time as a function of the media viscosity are summarised in Table 3.The time needed for manipulation and administration of a medicine is usually less than 5 min but can take longer
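As a rough illustration of the statistical comparison described above (the study itself used Minitab 17), the sketch below runs a Kruskal-Wallis test followed by Dunn's post hoc test on invented 5-point grittiness ratings for three vehicles; the third-party scikit-posthocs package is assumed to be available for Dunn's test, and all ratings are placeholders rather than study data.

# Sketch of the Kruskal-Wallis + Dunn's post hoc analysis on made-up 5-point ratings.
import numpy as np
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp   # third-party package providing Dunn's test

rng = np.random.default_rng(0)
ratings = pd.DataFrame({
    "vehicle": ["water"] * 30 + ["XG"] * 30 + ["CMC"] * 30,
    "grittiness": np.concatenate([
        rng.integers(3, 6, 30),   # water: particles more noticeable
        rng.integers(1, 4, 30),   # XG vehicle
        rng.integers(1, 4, 30),   # CMC vehicle
    ]),
})

groups = [g["grittiness"].to_numpy() for _, g in ratings.groupby("vehicle")]
H, p = kruskal(*groups)
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p:.4f}")

if p < 0.05:
    # Pairwise comparisons with Dunn's test (95% confidence level, as in the study)
    print(sp.posthoc_dunn(ratings, val_col="grittiness", group_col="vehicle",
                          p_adjust="bonferroni"))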
Such vehicles were prepared at three consistency levels (Level 1 – ‘syrup’ Level 2 – ‘custard’ and Level 3 – ‘pudding’) to investigate the effect of viscosity on their performance as administration media.A randomised, single-blind sensory evaluation study was carried out in thirty healthy adult volunteers using microcrystalline cellulose pellets as model multiparticulates, dispersed in the hydrogels (and water as control) at a concentration of 250 mg in 5 ml.
no significant differences were found between XG and CMC hydrogels in their ability to mask the grittiness of particles, either during sample intake or after swallowing of the samples; although the trend suggests that CMC vehicles performed better when using multiparticulates of larger size.As previously discussed, this indicates that the sensory properties of the samples were dominated by the presence of multiparticulates rather than the shear thinning behaviour of the vehicles.Moreover, no significant differences were found between vehicles thickened to different consistency levels based on scale ratings, although anecdotal feedback indicated that thicker hydrogels performed better than thinner vehicles in terms of masking the presence of particles, e.g. “less thick than other samples, thus I can feel the particles when I swallowed it; need to drink water to remove the particles”.Disagreement between grittiness ratings and voluntary feedback suggests that scale ratings were influenced by the overall appreciation of the sample, an issue commonly encountered in sensory evaluation studies.As the concentration of hydrocolloid in the sample increased, the perception of particles decreased but other organoleptic attributes worsened.As such, mouthfeel and grittiness perception were driven by a balance between these opposing phenomena, which explains the similar ratings obtained by samples of different consistency despite their different organoleptic attributes.These findings highlight the multifactorial nature of palatability and mouthfeel perception, which poses a challenge to evaluating sample attributes independently.The results of this trial are summarised in Fig. 4, where the radar charts show the mean result for each palatability attribute as a function of the size of the multiparticulates and the administration media.The use of polymeric hydrogels as administration media resulted in an overall improvement of the samples: appearance, taste, mouthfeel, ease of swallowing and residue in the mouth improved by ca. 0.5 points, and oral grittiness perception improved by ca.
1 point.Overall, polymeric hydrogels thickened to medium consistency demonstrated the best performance by virtue of their ability to conceal the grittiness of multiparticulates in the mouth and to aid swallowing of the formulation as a bolus, while maintaining a balanced consistency to ensure appropriate mouthfeel.These findings were in line with previous research in the field in that vehicles of very thick consistency tend to be disliked despite their ability to mask the presence of multiparticulates in the formulation.Further investigation of the physicochemical properties of the samples, such as adhesiveness, ductility and lubrication properties could provide a more rigorous insight into the physical drivers for mouthfeel perception.Differences in palatability and acceptability can be expected in different sub-sets of the population, thus future work should investigate the acceptability of these vehicles in paediatrics and patients with swallowing difficulties, those who could benefit the most from these formulations.The use of hydrogels as administration media for multiparticulates improved a range of sample attributes compared to water formulations, including appearance, taste, mouthfeel, ease of swallowing and grittiness perception during and after sample intake.This improvement was apparent for samples containing multiparticulates of both sizes investigated.Polymeric hydrocolloids provided ‘cushioning and lubrication’ of the particles and acted as an effective vehicle by ‘carrying the particles together’, concealing the gritty feeling of the multiparticulates and assisting swallowing.Participants also reported that additional intake of water after administration of the gel formulation was beneficial to facilitate swallowing of the full dose of multiparticulates.Polymeric hydrogels thickened to medium consistency were preferred over either thinner or thicker vehicles.A balanced consistency reduced the gritty feeling of multiparticulates while not being overly thick, which would hinder sensory attributes.Meanwhile, differences between XG and CMC hydrogels were minimal despite their opposing shear thinning behaviour.These findings suggest that the consistency of the vehicle was an attribute of greater importance than its shear thinning behaviour.However, XG brings the added value of its strong thickening power, requiring very low concentrations to produce hydrogels of adequate consistency.Results of this study indicate that polymeric hydrogels could be used to improve palatability, facilitate swallowing and enhance patient acceptability of multiparticulate formulation.Felipe L. Lopez: Methodology, Investigation, Formal Analysis, Visualization, Writing- Original draft preparation.Terry B. Ernest: Funding Acquisition, Conceptualization, Methodology, Supervision, Writing – Review & Editing.Mine Orlu: Funding Acquisition, Conceptualization, Methodology, Supervision, Writing – Review & Editing.Catherine Tuleu: Funding Acquisition, Conceptualization, Methodology, Validation, Supervision, Writing – Review & Editing.
Samples were evaluated using 5-point scales.The use of hydrogels as administration media improved a range of sample attributes compared to water formulations, including appearance, taste, mouthfeel, ease of swallowing and residue in the mouth (all improved by ca. 0.5 points) and oral grittiness perception (improved by ca. 1 point).Polymeric hydrogels thickened to medium consistency (Level 2, XG 0.5% and CMC 1.0% w/v) demonstrated the best performance.
themselves is not clear.From our results, where we noted no PBM-induced changes in the number of 2 neuronal types, one would assume that there was a direct impact of PBM on the astrocytes, rather than a secondary response to a change in neuron number.However, a much more exhaustive investigation of the changes in the number of many different types of striatal neurons, well beyond the scope of this study, would establish if this is the case.For the microglia, unlike the astrocytes, we found no clear increase in their numbers between 3m and 12m groups, indicating no evidence for microgliosis in our model; furthermore, we found no activated “amoeboid-like” cells in the striatum of the 12m group.Previous studies have reported that different brain regions and the retina all show signs of microgliosis, but at different stages of aging.For example, the midbrain SNc develops microgliosis in mice aged 12 months, whereas the cerebral cortex and perhaps the retina also develop microgliosis at a later stage, from 18–24 months.It therefore seems likely that the striatum is one of those brain regions, like the cerebral cortex, for example, that develops microgliosis at a later stage in aging, well after 12 months of age in mice.Finally, we found that the 12m + PBM group, rather curiously, had many fewer microglial cells than the 12m, as well as the 3m, group.The significance of this finding is not clear, but we suggest that PBM may have had an inhibitory effect on microgliosis in the striatum, with this treatment serving as a prophylaxis against microglial hypertrophy.In contrast to the effect on glial cells in the striatum of aged mice, our results showed that long-term PBM had no impact on the 2 types of striatal interneurons examined in this study, namely the Pv+ and Eno+ cells.For both neuronal types, there were no differences in their number between the 12m and 12m + PBM groups.The result was particularly relevant for the number of Pv+ cells that, unlike the number of Eno+ cells, showed a marked decrease from the 3m to the 12m group.This reduction was not mitigated by PBM.The beneficial outcomes of PBM are thought to involve an activation of a photoacceptor, such as cytochrome c oxidase, leading to an increase in electron transfer in the respiratory chain within the mitochondria and increased adenosine triphosphate production.This, in turn, triggers a cascade of secondary downstream signaling pathways that collectively stimulate intrinsic neuroprotective mechanisms.With regard to our findings here, our long-term PBM may not have been sufficient to activate these beneficial mechanisms and prevent neuronal loss in aging, at least among the Pv+ cells of the striatum.Perhaps a different dosage may have been more effective; for various neurological disorders, from Alzheimer's to Parkinson's disease, dosage has been reported to be important to the effectiveness of PBM as a neuroprotective agent, and the same may be the case in aging.Our analysis of the dopaminergic system, in terms of terminals in the striatum and cell bodies in the midbrain SNc, indicated that it remained stable in mice aged 12 months.These findings are consistent with previous ones in animals of this age.Furthermore, our results indicated that, as with the striatal interneurons, long-term PBM had no deleterious effects on these key structures of the basal ganglia.After 8 months of daily exposure, the density of dopaminergic terminations and the number of their cell bodies in the 12m and 12m + PBM groups were near identical.This result has
positive implications for the safety of long-term PBM on the dopaminergic neurons and for future use in Parkinson's disease patients.In conclusion, our results indicated that long-term PBM had beneficial effects on the aging brain: the treatment was effective in reducing glial cell number and did not have any deleterious effects on the neurons and terminations in the striatum.That this treatment was not toxic to these cells after such long-term exposure in the aged brain is a key safety issue, particularly when considering use in humans.Future studies may explore any functionally related changes in the aging striatum, for example using molecular and/or electrophysiological methods, and whether PBM has any impact on these changes.There are no conflicts of interest to declare.
This study explored the effects of long-term photobiomodulation (PBM) on the glial and neuronal organization in the striatum of aged mice.We had 2 control groups, young at 3 months and aged at 12 months old; these mice received no treatment.Brains were aldehyde-fixed and processed for immunohistochemistry with various glial and neuronal markers.We found a clear reduction in glial cell number, both astrocytes and microglia, in the striatum after PBM in aged mice.By contrast, the number of 2 types of striatal interneurons (parvalbumin+ and encephalopsin+), together with the density of striatal dopaminergic terminals (and their midbrain cell bodies), remained unchanged after such treatment.In summary, our results indicated that long-term PBM had beneficial effects on the aging striatum by reducing glial cell number; and furthermore, that this treatment did not have any deleterious effects on the neurons and terminations in this nucleus.
highlighted the importance of not only understanding and considering the distribution of earnings of graduates at each age but also modeling graduate earnings dynamics.One key issue identified is the unique nature of the Japanese labor market compared to other countries operating ICLs.In particular, the low earnings of female university graduates, once they get married and/or have a child, raise interesting design issues.These low earnings appear to be driven in part by firm, tax and social security policies that give generous tax breaks if wives earn no or low income.These policies are currently under scrutiny and there is some evidence of changing behavior for younger cohorts of women after marriage and having children.However, this pattern is likely to remain a feature of the Japanese graduate labor market for some time.We argue that most of the problems with the current JASSO loan system could be solved in a fair and efficient way by introducing a universal ICL.The unusual features of the Japanese graduate labor market mean that the parameters of the loan need to be very different from those operating in the United Kingdom and Australia.In particular, to minimize government costs there must be significantly lower repayment thresholds coupled with incremental rises in marginal repayment rates as earnings rise.A loan surcharge is probably also needed to reduce taxpayer subsidies and to ensure only those who need loans take them out.This is to ensure that females early in their careers make a significant repayment contribution before they get married and that those with relatively low earnings when married make some further modest repayments.The changing nature of the female labor market, which is evident in more recent cohorts, will also help.Importantly, the availability of universal loans needs to be restricted to high-quality university and college provision for suitable students in a transparent way.This is always much more difficult in a country where most students go to private universities.This is part of a broader problem with the current Japanese loan system and requires careful thought.JASSO student loans are available to both two-year college and four-year university students, and future work needs to ensure that any ICL design would also work for two-year college graduates taking out loans.Given the relatively low repayment thresholds we are proposing, it seems highly likely that this would not pose a problem given the lower debts associated with two-year college.While these loans have not been considered in our paper, it would be relatively simple to include them in future work.Finally, we have shown that incorporating realistic dynamics is crucial to developing a self-sustaining universal student loan system that is equitable, affordable and ensures that all deserving students can take full advantage of post-high school education in Japan without any financial barrier.Given recent evidence of increasing inequality in access by household income, this seems very important.
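To make the kind of repayment structure discussed above concrete, the sketch below computes an annual income-contingent repayment from a set of bracket thresholds and marginal rates that step up with earnings. The thresholds, rates and incomes are hypothetical placeholders for illustration only, not the parameters derived in the paper.

# Illustrative income-contingent repayment with a low first threshold and marginal
# rates that rise with earnings. All bracket values below are invented placeholders.
BRACKETS = [            # (annual income threshold in JPY, marginal repayment rate)
    (2_000_000, 0.02),
    (3_000_000, 0.04),
    (4_500_000, 0.06),
    (6_000_000, 0.09),
]

def annual_repayment(income, outstanding_debt):
    """Repayment owed this year: each marginal rate applies to income above its
    threshold and below the next one, capped at the outstanding debt."""
    owed = 0.0
    for i, (threshold, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > threshold:
            owed += rate * (min(income, upper) - threshold)
    return min(owed, outstanding_debt)

print(annual_repayment(3_500_000, 2_400_000))   # modest earner makes a small repayment
print(annual_repayment(7_000_000, 2_400_000))   # higher earner repays faster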
The Japanese higher education sector has seen increases in tuition with stagnant household incomes in a society where family support for university students has been the norm.Student loans from the government have grown rapidly to sustain the gradual increase in university enrolments.These time-based repayment loans (TBRLs) have created financial hardship for increasing numbers of loan recipients and their families.There is some evidence that prospective students from low-income households are forgoing a university education to avoid student loan debt.The Japanese government has introduced some measures including grants and a partial income-contingent loan (ICL) scheme to help alleviate these problems.While the ICL scheme is a positive development, this paper shows that it requires further refinement and broader coverage if it is to adequately address the challenges facing higher education financing in Japan.We show that an affordable and universal ICL system could be introduced in Japan that avoids problems with the current partial income-contingent loan scheme and would help alleviate access issues for those from disadvantaged backgrounds.Importantly, the unique features of the Japanese labor market have to be carefully considered, especially the large gender wage gap for married women.By introducing dynamics into modeling graduate earnings and using carefully selected parameters, we show that it is possible to have a universal ICL which achieves a balance between access and affordable repayment with minimal long-run costs to taxpayers.
Research studies and governmental reports have highlighted the extreme challenges that our planet faces, amongst others because of the life-style choices that we as humans have made.A sixth mass extinction is well under-way for biodiversity worldwide.The principal drivers of biodiversity loss include habitat loss and degradation, overexploitation, pollution and climate change.At present, our day-to-day activities, which can be mapped through human’s influence on the environment, alter the movements of wildlife, affect their behavioural patterns and cause species declines.As humans, we have brought about a new era called the Anthropocene whereby we now dominate the earth’s balance of terrestrial vertebrate biomass.Threats to biodiversity are also emerging due to a rapidly changing climate with uncertain, potentially large consequences for numerous species.Climate change has been identified as one of the drivers that may push the Earth system into a new state.If we want to avoid this and achieve the 2 °C target set in the Paris agreement, we need to do more than is currently done.The 2 °C limit already accepts that we will face several global challenges, and beyond this the impacts become ever more severe.Agriculture is one of the main causes of environmental degradation worldwide and intensifying agricultural practices have also contributed to widespread declines of biodiversity in recent decades.How to address the negative impacts of the agriculture industry on the natural environment, whilst simultaneously fulfilling the increasing demand of nutrition remains an ongoing challenge.Environmental and agricultural policy changes are thus clearly required to swiftly adapt how we value nature and the functioning of our ecosystems.Several aspects of human’s way of life need to change for a future sustainable planet, but in this short communication we focus on the livestock industry given its contribution to climate change and domination of land-use practices of our planet, both of which have severe consequences for biodiversity and the ecosystem services they provide.We briefly outline the impacts of the livestock industry and the agricultural policy platform within Europe, before describing a potential policy tool that we recommend incorporating within agriculture.Our aim is to raise debate about how such policies may be implemented and we briefly describe some examples of how such a policy platform may operate and the subsequent benefits for biodiversity.Livestock and humans consist of 59% and 36% respectively of the total biomass of mammals and birds, whilst their wild counterparts make up the remaining 5%.Livestock are kept to provide resources for humans, and the combined impact of growing food for humans, growing food for livestock, and maintaining the land in which livestock are kept means that nearly 40% of the earth’s ice-free land surface is dedicated to agricultural practices, 75% of which is used to either grow livestock feed or to house livestock.Land-use change is ongoing as forests around the world are being cleared to either plant food crops, bioenergy crops, create pastures, or to grow more livestock feed.Agriculture, and especially livestock farming, thus represent a major threat to biodiversity and ecosystem functioning.The livestock industry, amongst others, contributes heavily to climate change and estimates of global greenhouse gas emissions from the livestock industry range from 12 to 18%.A recent study has indicated that replacing meat with plant alternatives may reduce 
emissions of an average Dutch diet by 28%–46%.Meanwhile, a full life cycle analysis of ruminant meat products indicated that the greenhouse gas footprint was 19–48 times higher than high protein plant-based products.Given the impacts of consuming meat, studies have already called for changes in consumption practices.Particularly in more developed countries, consumption rates of meat are high and may constitute up to 40% of diets whereas alternative plant-based diets may still be considered “niche markets”.It is uncertain how much longer our planet can support our life choices and drastic lifestyle changes are needed if we are to avert a so-called “Ecological Armageddon”.Livestock farming is not purely detrimental to the environment and biodiversity.In fact, past pastoralism practices were responsible for many of the high value cultural and natural areas we have today because of the grassland ecosystems that grazing creates.Grazing as opposed to mowing can also be regarded as a preferred management technique for conserving semi-natural grassland and the species that depend on these habitats.The problem is that many ecosystems created by traditional farming practices have all but disappeared because of increasing agricultural intensification, especially in developed countries, even though greater biodiversity in commercial grasslands could lead to greater economic value.The large demand for livestock products has resulted in attempts to increase the level of production, where the pattern is to intensify farming practices by moving livestock out of pastures and into barns so that agricultural areas can be harvested more intensively.Agricultural intensification has been widely regarded as a driving force of biodiversity loss and whereas traditional farms provided important habitats for biodiversity, many of these important habitats are either declining or have been lost already.The challenge is to change human-nature relationships and consumer behaviours.McGregor and Houston, reviewed four propositions to do just that: promoting intensification, naturalisation, veganism, and artificial beef and dairy production.They concluded that the most economical and prevailing proposition does not provide effective solutions to address global challenges of planetary change and that creative consumption-oriented responses are needed.One such example is the consumption of organic products, including meat, which have been shown to yield benefits to biodiversity.However, the rewards for organic farmers are small, no penalties exists for farms that operate intensively, and organic farming systems alone may not be able to
One of the principal threats to biodiversity is intensive agriculture, and in particular the livestock industry, which is an important driver of greenhouse gas emissions, habitat degradation and habitat loss.Ongoing intensification of agricultural practices mean that farmland no longer provides a habitat for many species.We suggest the use of a growing policy tool, biodiversity offsetting, to tackle these challenges.
provide enough food to sustain the growing world population.Hence, although organic farming clearly provides benefits to biodiversity, it does not appear to be a practice that can yield significant changes to the livestock industry and alternative policy platforms are needed.The Common Agricultural Policy is the governing policy for agricultural practices in the European Union.Farmers receive direct payments for conducting farming practices on their land, and the payment may be higher when this is done in an environmentally friendly manner.The challenge is that often the potential benefit from CAP payments does not offset the potential gain from intensifying farming practices, which is why there has been a general trend for larger more intensive farms that are owned by fewer people.For example, in a country like The Netherlands, the number of agricultural and horticultural companies decreased by 43% in the period 2000–2016, but the decrease in area used for agricultural purposes was only 9%; the standard yield per company therefore more than doubled.The CAP is currently under review for a new CAP for 2021–2027 with hopes for budgets to be approved by 2019.We believe that now more than ever, an overhaul of the CAP is required so that intensive farming practices are penalised and to provide incentives to reduce the scale of livestock farming.The CAP could benefit from an emerging policy called biodiversity offsets, which aim to alleviate the environmental impacts of development projects.Biodiversity offsets aim to govern the ecological impacts of new developments to achieve a no net loss of biodiversity, and in some situations attempt to achieve a net gain.The general procedure is that a) developers are required to quantify the ecological impacts arising from development, b) ecological impacts should be mitigated by for example avoiding or minimising biodiversity impacts and c) any remaining ecological impacts that are not mitigated should be offset by compensating the biodiversity losses with a biodiversity gain elsewhere.The two principal ways to achieve biodiversity offsets are to either enhance a degraded site through for example restoration, or to prevent ongoing or anticipated losses at another site.A similar concept that runs parallel to biodiversity offsetting and has been in place in the Netherlands since the 1990s, is called “Ecological Compensation”.Biodiversity offsetting and ecological compensation share similar principles and have become an important platform for minimising and compensating for the ecological impacts of development projects, although whether they achieve no net loss has been a topic of debate.One of the arguments against biodiversity offsets is that as conservationists and developers squabble over the remaining natural areas of our planet, to either protect or demolish, biodiversity offsets may be seen as a justifying means for developers to construct, meaning that no net loss is never a truly achievable target.Furthermore, socio-ecological systems are complex that hold ecological values, instrumental values and non-instrumental values which may be irreplaceable and therefore cannot be compensated.Finally, restoration actions that aim to compensate biodiversity loss may not succeed due to challenges in measuring biodiversity values, uncertainty of restoration techniques and the long time-lags between the initiation of restoration projects and the realisation of biodiversity gains.It is however important to highlight that a principle aim of biodiversity offsets 
is to avoid damaging impacts of development.We believe that a key limiting factor of both biodiversity offsetting, and ecological compensation, is that their focus is on “new developments”.These policy instruments can be broadened to not only compensate for new projects, but to also assess ongoing impacts within the landscape.We highlight that ecological compensation provides a useful policy platform for governing the ecological impacts of the agricultural industry, and in particular the livestock industry.Ecological compensation of intensive agricultural practices has global policy implications and is particularly relevant within the European Union under the CAP.It is also a system whereby the economic costs of environmentally friendly farming practices can be passed onto the consumer without drastically impacting the livelihoods of farmers.Instead of rewarding farmers for implementing environmentally-friendly practices, they should be set as a standard.A tier system may be implemented such that biodiversity offsetting is required for progressively intensifying practices of livestock farming.For example, sustainable stocking densities can be defined where livestock can be kept in pastures and meadows without a heavy reliance on imported feed and a progressive tax is applied as sustainable stocking densities are exceeded.Implementing policies that ensure that farmers compensate for the ecological impacts that intensive livestock farming creates will provide opportunities to restore the ecosystem benefits of low-intensity farming, and also recognises that our current meat consumption habits are not sustainable.Low intensity farming will likely result in higher costs for the consumer, especially if farmers are to maintain similar incomes.Similarly, any tax that is applied to farmers that stock livestock at high densities will also likely be passed on to the consumer.However, higher costs for the consumer may also lead to reduced meat consumption, which ultimately contributes towards biodiversity conservation and climate change mitigation.Biodiversity benefits include conservation actions that can be achieved on grazed pastures, whilst arable land that was dedicated to growing animal feed can instead be used to grow alternative plant proteins for human consumption.The income generated through “taxing” intensive agricultural practices can be used to support conservation projects and also encourage low-intensity farming.In addition to direct monetary implications of compensating the ecological damage of intensive farming practices, ecological compensation policies should also require farmers to implement conservation actions to ensure that biodiversity targets are achieved.Conservation actions may include the creation of
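As a purely hypothetical illustration of the tiered approach suggested above, the sketch below charges no levy up to a sustainable stocking density and then applies progressively higher rates per excess livestock unit. Every number in it is an invented placeholder rather than a value proposed in the text.

# Hypothetical tiered levy on stocking densities above a sustainable level.
SUSTAINABLE_LU_PER_HA = 1.4          # livestock units per hectare kept on pasture (placeholder)
TIERS = [                             # (excess LU/ha up to, levy in EUR per excess LU)
    (0.5, 100.0),
    (1.0, 250.0),
    (float("inf"), 600.0),
]

def annual_levy(livestock_units, hectares):
    """Levy owed for one year, increasing progressively with stocking density."""
    excess = max(0.0, livestock_units / hectares - SUSTAINABLE_LU_PER_HA)
    levy, lower = 0.0, 0.0
    for upper, rate in TIERS:
        band = max(0.0, min(excess, upper) - lower)   # portion of excess falling in this tier
        levy += band * rate * hectares                # charge per excess LU, scaled by farm area
        lower = upper
    return levy

print(annual_levy(70, 50))    # 1.4 LU/ha -> no levy
print(annual_levy(150, 50))   # 3.0 LU/ha -> levy accumulated across all three tiers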
Biodiversity loss is ongoing, landscapes continue to transform, and predictions on the effects of climate change worsen.Calls have been made for urgent action to avoid pushing our planet into a new system state.Biodiversity offsetting, or ecological compensation, assesses the impacts of new development projects and seeks to avoid, minimise and otherwise compensate for the ecological impacts of these development projects.By applying biodiversity offsetting to agriculture, the impacts of progressively intensifying farming practices can be compensated to achieve conservation outcomes by using tools like environmental taxes or agri-environment schemes.
flower strips or set asides, the size of which could be determined by the number of livestock above a sustainable stocking density.The application of flower strips, set-asides, and other measures that increase naturalness on farmland through practices like agri-environment schemes have generally had positive effects on biodiversity.Increasing landscape complexity also increases natural pest control and thus reduces dependences on pesticides, another major threat to biodiversity resulting from agricultural intensification.Farmers may also consider alternative farming regimes to improve biodiversity value, for example a compartmental strategy of combining high yield farming, natural areas and low yield farming has been shown to benefit bird populations in lowland areas.It is thus clear that there are many benefits to implementing agri-environment schemes on agricultural land and the standard way that agricultural landscapes are designed needs to change.We feel a policy of ecological compensation within agriculture provides the opportunity to benefit biodiversity as well as farmers and society as a whole.We have previously highlighted that the effects of the livestock industry are not only negative, and grazing livestock actually play an important role for grassland ecosystems.Our aim is to highlight that the ecological impacts of intensive farming need to be compensated, especially given that intensively farmed livestock no longer provide biodiversity benefits.Similarly, we do not argue for the complete removal of livestock farming.Alternative forms of farming may not be appropriate in some regions, whilst the biodiversity value of rangelands, which are maintained through livestock grazing, is being lost due to cropland conversion.Instead, a return to lower intensity practices are needed coupled with a reduction in meat consumption practices.Reducing the intensity of livestock farming may create new challenges for the agricultural industry, for example reducing availability of fertiliser, however, sustainable farming systems are aiming to reduce fertiliser input, such as integrated crop-livestock systems and other agroecological approaches.Applying biodiversity offsetting to the livestock industry is not without its challenges.As we highlighted, the true benefits of biodiversity offsetting remains a contested topic.An important difference between the existing policy structure and our proposal is that it is not focused on potential biodiversity loss from new development projects, but instead on the ongoing ecological impacts of intensive agriculture.Compensating for these ongoing ecological impacts, for example through agri-environment schemes, will improve the biodiversity value of agricultural landscapes.A challenge also concerns how the general public, i.e. 
consumers, will respond to potential price rises of meat products and the expectation of reduced meat consumption.Studies have shown that the general public may not perceive eating meat as a problem, nor that it is linked to climate change or biodiversity loss.Consumers may also associate eating meat with social and cultural values and may not only be concerned about the nutrition of meat.Reducing meat consumption is thus associated with several challenges and Stoll-Kleemann and Schmidt, described various factors that create barriers to consumers reducing meat consumption, but also describe opportunities to overcome these barriers.It is therefore important that policy changes are accompanied with broad awareness-raising campaigns of the health, and especially environmental benefits of eating less meat.Increasing the cost of meat would also reduce meat consumption, through for example taxes.Food taxes have already been shown to change consumer behaviours but taxes will be met with substantial opposition.Solid arguments are needed for the purpose of environmental taxes, and also education to improve consumer’s willingness to pay and to adopt associated dietary changes.Policymakers may feel that our proposition is idealistic and resistance from farmers and consumers is to be expected, yet a change in our farming system is urgently needed.Our proposal directly contributes to several of CAP’s future objectives including amongst others fair income to farmers, climate change action, environmental care and to preserve landscapes and biodiversity.A system of applying carbon taxing to the livestock industry in Denmark is a first step in recognising the impacts of intensive agriculture.However, more needs to be done to recognise the full spectrum of ongoing ecological impacts.The immense decline of insect biomass and apparent decrease of meadow birds may indeed signal an imminent “Ecological Armageddon”.Alternative farming systems like integrated crop-livestock farming have been shown to be able to achieve high crop yields and profits.To return to these alternative farming systems, our proposition is to implement ecological compensation policies that would remove incentives to farm intensively so that intensive meat production is replaced by increased naturalness of livestock farming.We have focused on livestock farming in this short communication however the principles also extend to intensive crop production, which may be equally devastating to biodiversity.Our policy recommendations are set-up in a way that farmer’s profits should remain somewhat unchanged since the costs will largely be incurred by the consumer.Currently biodiversity and all mankind are paying the price for the current meat consumption practices and instead it is time that these costs are only paid by the consumers of meat through a system that strengthens the biodiversity value of our landscapes and restores the ecosystem services that it provides.
An increasingly gloomy picture is painted by research focusing on the environmental challenges faced by our planet.Low-intensity, traditional farming systems provide a number of benefits to biodiversity and society, and we suggest that the consumer and the agriculture industry compensate for the devastating ecological impacts of intensive farming so that we can once again preserve biodiversity in our landscapes and attempt to limit global temperature rise below 2 °C.
Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models.While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well.Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift.One possible explanation for this gap between theory and practice is that popular scalable approximate Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space.We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions.Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, while often deviating significantly in the weight space.We demonstrate that while low-loss connectors between modes exist, they are not connected in the space of predictions.Developing the concept of the diversity--accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods.
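A simple way to quantify the function-space diversity discussed above is the mean pairwise disagreement between ensemble members' predicted labels on held-out data. The sketch below illustrates this metric on toy predictions; it is an assumed, minimal implementation rather than the paper's exact measure.

# Mean pairwise disagreement between ensemble members in prediction space.
# `preds` is assumed to have shape (n_members, n_examples) and contain class labels.
import numpy as np

def pairwise_disagreement(preds):
    """Average fraction of examples on which two ensemble members predict different classes."""
    m = preds.shape[0]
    scores = [np.mean(preds[i] != preds[j]) for i in range(m) for j in range(i + 1, m)]
    return float(np.mean(scores))

rng = np.random.default_rng(0)
# Toy stand-ins: three "independently initialised" members vs. three "same-mode" members
independent = rng.integers(0, 10, size=(3, 1000))
same_mode = np.tile(rng.integers(0, 10, size=(1, 1000)), (3, 1))
same_mode[1:, :50] = 0   # members sampled around one mode differ on only a few points

print("random-init-like diversity:", pairwise_disagreement(independent))
print("single-mode-like diversity:", pairwise_disagreement(same_mode))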
We study deep ensembles through the lens of loss landscape and the space of predictions, demonstrating that the decorrelation power of random initializations is unmatched by subspace sampling that only explores a single mode.
Psychiatric diagnoses (e.g., ICD-10) lack neurobiological validity.To address this, the National Institute of Mental Health launched the Research Domain Criteria (RDoC) in 2009, a research framework that “integrates many levels of information in order to explore basic dimensions of functioning that span the full range of human behaviour from normal to abnormal”.RDoC represents a paradigm shift in psychiatry and highlights the need to include measures of genes, brain and behaviour to understand psychopathology.RDoC is structured as a matrix with four dimensions: i) domains of functioning which are further divided into constructs; ii) units of analysis; iii) developmental aspects; and iv) environmental aspects.Analysing data containing multiple such modalities poses statistical challenges, however.Here we propose a novel framework that is robust to some typical problems arising from high-dimensional neurobiological data, such as overfitting, poor generalisation and limited interpretability of the results.Factor analysis and related methods (such as principal component analysis, PCA) have long traditions in statistics and psychology.These techniques decompose a single set of measures into a parsimonious, latent dimensional representation of the data.Applications of these approaches include general intelligence, the five-factor personality model, and many others.However, factor analysis cannot integrate different sets of measures, e.g. investigate brain-behaviour relationships.A principled way to find latent dimensions of one set of data which align well with another set of data is to use partial least squares, or the closely related canonical correlation analysis.PLS was introduced to neuroimaging by McIntosh et al. and has been widely used.Unfortunately, the high dimensionality of neuroimaging data makes PLS and CCA models prone to overfitting; moreover, the interpretation of the identified latent dimensions is usually difficult.Regularised versions of PLS and CCA algorithms address these issues; two popular choices are lasso and elastic net regularization, which constrain the optimization problem to select the most relevant variables.Sparse CCA and sparse PLS were originally proposed in genetics, and have since been used in cognition, working memory, dementia, psychopathology in adolescents, psychotic disorders and pharmacological interventions.However, most of these studies used approaches for selecting the regularization parameter and inferring statistical significance of the identified multivariate effects that do not account for the generalizability and stability of the results.Here, we propose an innovative framework combining stability and generalizability as optimization criteria in a multiple hold-out framework which is applicable to both regularized PLS and CCA approaches.Crucially, it increases the reproducibility and generalisability of these models by i) applying stability/reproducibility for model selection, and ii) using out-of-sample correlations of the data for model evaluation.To demonstrate this novel framework, we investigate associations between whole-brain voxel-based grey matter volumes and item-level measures of self-report questionnaires, IQ and demographics in a sample of healthy and depressed adolescents and young adults.We report the results from using SPLS in the main text, and for comparison we include results with another regularized approach, kernel CCA, in the Supplement.Figure 1 illustrates how PLS/CCA models can be used to identify latent dimensions of brain-behaviour relationships.PLS/CCA maximises the
association between linear combinations of brain and behavioural variables.The model’s inputs are brain and behaviour variables for multiple subjects.Its outputs, for each brain-behaviour relationship, are: brain and behavioural weights, brain and behavioural scores, and a value denoting the strength of the correlation/covariation.The brain and behaviour weights have the same dimensionality as their respective data and quantify each brain and behavioural variable’s contribution to the identified association.Once the association/weights are found then brain and behavioural scores can be computed for each subject as a linear combination of their brain and behavioural variables.The brain and behavioural scores can then be combined to create a latent space of brain-behaviour associations across the sample.Furthermore, each brain-behaviour association can be removed from the data and new associations sought.Next, we present a brief overview of PLS/CCA and some other latent variable models to contextualize our modelling approaches.Essentially, all these models search for weight vectors or directions, such that the projection of the dataset onto the obtained weight vectors have maximal variance, correlation or covariance.Note that PCA is limited to finding latent dimensions in one dataset e.g. behaviour.Although its principal components can be used in a multiple regression, e.g. to predict brain variables, the directions of high variance identified by PCA might be uncorrelated with the brain variables, whilst a relatively low variance component might be a useful predictor.Therefore, CCA and PLS can be seen as extensions of PCR to find latent dimensions relating two sets of data to each other.In the regularized versions of CCA/PLS, additional constraints are added to the optimization problem to control the complexity of the CCA/PLS model and reduce overfitting.A regularized version of CCA was proposed by Hardoon et al., in which two regularization parameters control a smooth transition between maximizing correlation and maximizing covariance.Our kernel CCA implementation is an extension of this regularised CCA, where the kernel formulation makes the algorithm computationally more efficient.A regularized sparse version of CCA has been proposed by Witten et al., which applies elastic net regularization to the weight vectors.Interestingly, as the variance matrices are assumed to be identity matrices in this optimization, their formulation becomes equivalent to our SPLS implementation.Elastic net regularization combines the L1 and L2 constraints of the lasso and ridge methods, respectively.The L1 constraint shrinks some weights and sets others to zero leading to automatic variable selection, however, it has three main limitations: i) selecting at most as many variables as the number of examples in the data; ii) selecting only a few from correlated groups of variables; iii) leading to worse
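The hold-out evaluation idea described above can be sketched with off-the-shelf tools: fit a two-view latent model on a training split, project the held-out subjects, and report the correlation of their brain and behaviour scores. The example below uses scikit-learn's PLSCanonical as a stand-in for the regularized SPLS/KCCA implementations and random synthetic data, so it only illustrates the workflow, not the study's models or results.

# Sketch of out-of-sample evaluation of a two-view latent model on synthetic data.
import numpy as np
from sklearn.cross_decomposition import PLSCanonical
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p_brain, p_behav = 300, 50, 20
latent = rng.normal(size=(n, 1))                                  # shared latent factor
X_brain = latent @ rng.normal(size=(1, p_brain)) + rng.normal(size=(n, p_brain))
Y_behav = latent @ rng.normal(size=(1, p_behav)) + rng.normal(size=(n, p_behav))

X_tr, X_ho, Y_tr, Y_ho = train_test_split(X_brain, Y_behav, test_size=0.3, random_state=0)

pls = PLSCanonical(n_components=1).fit(X_tr, Y_tr)
u_ho, v_ho = pls.transform(X_ho, Y_ho)            # hold-out brain and behaviour scores
r = np.corrcoef(u_ho[:, 0], v_ho[:, 0])[0, 1]
print(f"hold-out score correlation: {r:.2f}")     # generalizability criterion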
Background: In 2009, the National Institute of Mental Health launched the Research Domain Criteria, an attempt to move beyond diagnostic categories and ground psychiatry within neurobiological constructs that combine different levels of measures (e.g., brain imaging and behavior).Methods: We propose an innovative machine learning framework combining multiple holdouts and a stability criterion with regularized multivariate techniques, such as sparse partial least squares and kernel canonical correlation analysis, for identifying hidden dimensions of cross-modality relationships.The brain data consisted of whole-brain voxel-based gray matter volumes, and the behavioral data included item-level self-report questionnaires and IQ and demographic measures.
hippocampal areas, caudate and fusiform gyrus: all of which feature in this latent dimension.Whether volumes of other subcortical regions like amygdala and putamen contribute to depression risk is more controversial: large univariate analyses have not found significant associations with depression.Although the specificity of these findings for depression is unclear – similar hippocampal and subcortical volume associations are seen in PTSD and ADHD respectively – even cross-disorder findings may be useful for predicting outcome and, in particular, treatment response.For example, in relatively small samples, insula volume has been shown to predict relapse in MDD, and a combination of amygdala, hippocampus, insula and vermis can predict treatment response to computerised cognitive behavioural therapy for MDD.The identified latent dimensions also relate to existing RDoC domains, namely positive valence systems – i.e. reward anticipation and satiation that is excessive in the first but impaired in the second – and social processing, in the attribution of negative and critical mental states to others.Another important element of the RDoC framework is the interaction of its domains with neurodevelopmental trajectories and environmental risk factors.Although our subjects were adolescents, our brain structure results were consilient with the adult depression literature.This is important because although some studies in children find hippocampal volume associations with both depression and anxiety, a meta-analysis in MDD adults concluded hippocampal volume associations were absent at first episode.Indeed, our previous study of functional imaging data in this dataset revealed two latent dimensions of depression that had opposite relationships with age; one of which related to trauma.This study has some limitations.The sample size is modest, especially for depressed participants, which – combined with the likely heterogeneity of the disorder – makes the results for the depression-related modes somewhat unstable.Validation of our SPLS/KCCA models in a larger dataset of healthy and depressed adolescents and young adults would further strengthen the generalizability of our findings.Furthermore, the inclusion of a broader selection of clinical disorders would reveal the specificity of these findings for depression rather than psychological distress in general.Finally, we suggest some key areas for future work: i) future studies should investigate other regularization strategies for CCA and PLS, e.g. applying group sparsity can capture group structures in the data which might exist either due to pre-processing or a biological mechanism; ii) non-linear approaches) could explore more complex relationships between brain and behavioural data; iii) regularized CCA and PLS approaches can be used to find associations across more than two types of data which may enable a more complete description of latent neurobiological factors; iv) the obtained latent space could be embedded in a predictive model to enable predictions of future outcomes, such as e.g. 
treatment response; v) finally, further research should investigate how these latent dimensions relate to the currently used diagnostic categories.In conclusion, we have shown that regularized multivariate methods, such as SPLS and KCCA, embedded in our novel framework yields stable results which generalize to hold-out data.The identified multivariate brain-behaviour associations are in agreement with many established findings in the literature concerning age, alcohol use and depression-related changes in brain volume.In particular, it is very encouraging that our depression-related results agree with a wider literature despite having only a small number of subjects with MDD.The depression-related dimension also contained largely cognitive and behavioural aspects of depression, rather than its biological features.Altogether, we propose that SPLS/KCCA combined with our innovative framework provide a principled way to investigate basic dimensions of brain-behaviour associations and has great potential to contribute to a biologically grounded definition of psychiatric disorders.E.T.B. is employed half-time by the University of Cambridge and half-time by GlaxoSmithKline; he holds stock in GlaxoSmithKline.The other authors report no biomedical financial interests or potential conflicts of interest.The authors disclose that a preprint version of the paper will be made available at https://www.biorxiv.org during the submission process.
To illustrate the approach, we investigated structural brain–behavior associations in an extensively phenotyped developmental sample of 345 participants (312 healthy and 33 with clinical depression).Results: Both sparse partial least squares and kernel canonical correlation analysis captured two hidden dimensions of brain–behavior relationships: one related to age and drinking and the other one related to depression.The applied machine learning framework indicates that these results are stable and generalize well to new data.Indeed, the identified brain–behavior associations are in agreement with previous findings in the literature concerning age, alcohol use, and depression-related changes in brain volume.Conclusions: Multivariate techniques (such as sparse partial least squares and kernel canonical correlation analysis) embedded in our novel framework are promising tools to link behavior and/or symptoms to neurobiology and thus have great potential to contribute to a biologically grounded definition of psychiatric disorders.
To reconstruct the color patterns of Sinosauropteryx, we analyzed three of the best-preserved specimens available.To reconstruct the color patterns accurately, first the distribution of pigmented plumage was described in detail for each specimen.Each specimen shows extensive preservation of dark, presumably organically preserved fibers identified as feathers/feather homologs in distinct areas of the animal.Alternative interpretations of these structures as degraded skin collagen have recently been shown to be unfounded .Preservation of feathers as organic films is due to the presence of the pigment melanin, and thus only originally pigmented feathers are found preserved in this manner .Visible absence of feathers in certain regions of the fossil is therefore likely due to unpigmented plumage that did not preserve, rather than a true absence of feathers in life .Alternatively, the areas lacking feathers could have been naked but would similarly be inferred to have been unpigmented.Because the feathering likely also served an insulatory role, an extensive distribution seems most plausible.Mapping the distribution of preserved pigmented feathers is therefore considered to reflect the extent of colored plumage on the animal, with other areas being covered by white feathers.Illustrations of NIGP 127586 and NIGP 127587 show the pattern of plumage distribution across the fossils.From this distribution, a complete reconstruction was created; this was done blind to any predictions from the modeling of illumination.The consistency of plumage patterns observed across multiple specimens gives confidence to the reconstructed color pattern.The pattern of pigment across the face appears to show a band of pigmented plumage running from the dorsal area of the head anterioventrally, which then angles toward the eye before running to the posterioventral margin of the lower jaw.The banded tail shows a transition from narrow to widely spaced bands from the proximal to distal regions, with the ventral pigmentation becoming denser toward the end of the tail.The ventral extent of the pigmented plumage, representing the likely countershading transition, appears to be relatively high on the flank, at around two-thirds of the way down the abdomen.For countershading to be effective in obliterating 3D cues of an animal’s presence, the pattern of pigmentation from the dorsal to ventral body regions should match the illumination gradient created by the lighting environment in which it lives .This allows the determination of likely habitats of animals based on quantification of color patterns .Those that inhabit open environments with direct lighting conditions generally exhibit a sharp transition from dark to light color high up on the flanks of the body .Conversely, animals inhabiting a more closed habitat with diffuse lighting coming in at many angles often show a smoother gradation from dark to light lower down on the body .To predict the optimal pattern of countershading, we created and photographed 3D models of the abdomen of Sinosauropteryx under different lighting conditions.The reconstructed color patterns based on NIGP 127586 and NIGP 127587 more closely match the pattern of countershading predicted from images of the models taken under direct light conditions than those of diffuse lighting conditions, indicative of animals living in open habitats .The addition of synthetic fur made little difference to each countershading prediction.For direct overhead sun, the mean predicted transition point to lighter 
coloration was 72% of the way from dorsal to ventral side.For direct sun at 30° it was 60%, and for diffuse illumination it was 85%.Only the direct illumination confidence intervals include the observed transition point.The presence of pigmented feathers surrounding the orbit and running in a band across the face conforms to “bandit masks” seen in many modern birds and mammals .Multiple functions have been proposed for bandit masks in modern taxa .One such function is as an anti-glare device .Reducing the glare from the feathers around the eye would be particularly useful to an animal living in environments with abundant direct sunlight, as is seen often in diurnal extant birds and mammals .Additionally, it has been suggested that glare is especially high in riparian habitats, because light reflectance is increased by proximity to water, as may have been the case in the lacustrine environment in which Sinosauropteryx fossils were deposited .Pigmented bands that run directly across the orbital region may also help to mask the presence of the eyes as a form of camouflage against both predators and potential prey .Eye stripes are common in modern birds, which most often also have dark eyes, making them likely harder for visual predators or prey to detect, and given that eyes elicit responses from both in many situations, it is a plausible hypothesis .Other possible functions of dark patches around the eyes of extant animals include aposematism and intraspecific signaling .Bandit masks have been suggested as being primarily aposematic in mammalian taxa living in exposed open habitats and are especially prevalent in mammalian carnivores, which co-exist with larger carnivores , as is likely to have been the situation for Sinosauropteryx.A number of modern mammals combine bandit masks with defensive nauseous discharges , but it is not possible to ascertain whether this was the case with Sinosauropteryx, and aposematism is generally thought to be rare in modern birds , making aposematism unlikely in Sinosauropteryx.Alternatively, conspicuous face markings could serve as a warning of a physical deterrent, such as a weapon or armor .Although the theropod had an enlarged claw on each hand , the animal’s small size makes it unlikely that it posed any real threat to its likely much larger theropod predators, making this function of the bandit mask
The optimal countershading pattern is dictated by the lighting environment, which is in turn dependent upon habitat [1, 3, 5, 6].Reconstructed patterns match well with those predicted for animals living in open habitats.Sinosauropteryx is also shown to exhibit a “bandit mask,” a common pattern in many living vertebrates, particularly birds, that serves multiple functions including camouflage [13–18].Sinosauropteryx therefore shows multiple color pattern features likely related to the habitat in which it lived.Video Abstract Smithwick et al.reconstruct the coloration of the small carnivorous dinosaur Sinosauropteryx.It had a bandit mask and striped tail and was also countershaded (dark on top, light below).
from the fossils and the width was extrapolated from the curvature of the ribs and gastralia.This method produced consistent relative proportions in each model despite a difference in the overall size of each.Each abdomen was taller at the posterior end than the anterior in both specimens, and so the models were tapered according to the exact dimensions measured from each fossil.The difference in the degree of tapering may represent ontogenetic differences, as NIGP 127586 is a much smaller individual than NIGP 127587.The two 3D models were then printed by Shapeways in gray polylactic acid and sanded using increasing grit sandpaper to smooth the surfaces.To replicate the feathers, unicolor synthetic fur was used to wrap around each model and the filament length trimmed based on the lengths of the feather filaments measured from each fossil.The 3D models of the two Sinosauropteryx abdomens were printed uniformly gray to allow assessment of the position of self-shadows depending on different lighting conditions, independent of actual color patterns .The models were mounted on sticks attached horizontally to a tripod to avoid any shadows being cast across them from other objects.The two models were photographed under different lighting conditions, similar to the recent study of Psittacosaurus .A Nikon D5300 SLR camera with an 18-55 mm Nikkor lens was used for imaging with the light metering set on the center of the model and automatic focus used.Images were saved in TIFF format.A color standard was positioned next to and in the same plane as the model.Photographs were taken at the University of Bristol Botanical Gardens at around midday on sunny and cloudy days in both open and closed environments.The area chosen was populated by plants typical of the Early Cretaceous.The models were placed facing directly toward the sun in both instances, as this is the situation in which symmetrical countershading will be most effective as the illumination gradient will be the same on both flanks .Previous work has shown that due to variability in the sun’s position and the effect that will have on illumination gradients, modern ungulates often show countershading patterns which are a compromise between the range of lighting conditions in which each taxon lives where predation pressure will be experienced .Each model was therefore also imaged at an angle perpendicular to the sun, with the dorsal side receiving direct illumination to imitate the sun being directly overhead.The models were imaged both as gray uncoated plastic and with the synthetic fur tightly wrapped around to test for any differences in the illumination gradients with and without feathers.As with previous work, the shadows cast reduced to two illumination conditions corresponding to whether the light was coming directly from the sun’s disk or the sky.Consequently, images taken under cloudy conditions produced the same shadowing patterns as those taken in sunlight under vegetation, making them equivalent, for predictions, to a closed habitat.After imaging, the models were cropped and the lighting inverted to show where the optimal countershading transition should fall for each lighting condition in order to counterbalance the illumination gradient and thus minimize conspicuousness through self-shadow obliteration.This was carried out in MATLAB .The predicted countershading transitions were then directly compared to the reconstructed color patterns across the abdomens of both Sinosauropteryx specimens.Confidence intervals for the transition 
points to a lighter belly were estimated as follows.First, transects of the calibrated intensity were taken from dorsal to ventral side.For each transect a cubic spline with 7 degrees of freedom was fitted as a smoother using function smooth.spline in R 3.4.0 .Smoothing was necessary, particularly for the fur-covered models which showed spatial heterogeneity due to irregularities in the lie of the fur; 7 d.f. adequately captured the general trend in gradient without too much smoothing.The point along each transect, in pixels, at which the gradient flattened out was located and converted to a percentage of the distance from dorsal to ventral side.Such estimates were calculated for five replicates of each illumination condition, integument and model.The mean and 95% profile confidence intervals for each illumination condition were estimated using a Linear Mixed Model with random effects “model” and “integument”.The model was fitted using function lmer in package lme4 in R.The final calculated confidence intervals can be found in the Results.Data supporting this study are provided within the paper and supplemental material.F.M.S. produced all illustrations and the reconstruction in Figure 2A, created and imaged the 3D models, and wrote the manuscript.J.V. devised the project concepts and imaged the fossils.R.N. produced the full art reconstruction in Figure 2B. I.C.C. produced the MATLAB models and performed the statistical analyses of countershading predictions versus the reconstruction.All authors commented on the manuscript.
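As an illustration of the transition-point estimation described in this excerpt, the short Python sketch below fits a smoothing spline to a single dorsoventral intensity transect and reports where the gradient flattens out, as a percentage of the dorsal-to-ventral distance. It is only a simplified analogue of the published analysis, which used R's smooth.spline with 7 d.f. and a linear mixed model for the confidence intervals (omitted here); the function name, smoothing parameter and flatness threshold are illustrative choices, not values from the study.

```python
# Sketch (not the authors' code): estimate where a dorsoventral brightness transect
# "flattens out", expressed as a percentage of the dorsal-to-ventral distance.
import numpy as np
from scipy.interpolate import UnivariateSpline

def transition_point(intensity, flat_frac=0.1, smooth=1e-3):
    """Return the flattening point of one transect as % of the dorsal-to-ventral distance."""
    y = np.asarray(intensity, dtype=float)
    x = np.arange(y.size, dtype=float)
    # Cubic smoothing spline; `smooth` (a residual target) stands in for choosing d.f.
    spl = UnivariateSpline(x, y, k=3, s=smooth)
    slope = spl.derivative()(x)
    steepest = int(np.argmax(np.abs(slope)))
    # First point past the steepest part where the gradient has dropped to a small
    # fraction of its maximum is taken as the start of the flat (pale) region.
    threshold = flat_frac * np.abs(slope[steepest])
    flat = np.where(np.abs(slope[steepest:]) <= threshold)[0]
    idx = steepest + int(flat[0]) if flat.size else y.size - 1
    return 100.0 * idx / (y.size - 1)

# Toy transect: brightness falls from the illuminated dorsal side and levels off
# roughly 70% of the way down, so the printed value should be close to 70.
x = np.arange(100)
demo = 0.65 - 0.35 * np.tanh((x - 55) / 8.0)
print(round(transition_point(demo), 1))
```

Averaging such estimates over replicate transects, models and integuments, as described above, is what the mixed-model step adds on top of this per-transect calculation.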
Countershading is common across a variety of lineages and ecological time [1–4].A dark dorsum and lighter ventrum helps to mask the three-dimensional shape of the body by reducing self-shadowing and decreasing conspicuousness, thus helping to avoid detection by predators and prey [1, 2, 4, 5].With the discovery of fossil melanin [7, 8], it is possible to infer original color patterns from fossils, including countershading [3, 9, 10].From reconstructions based on exceptional fossils, the color pattern is compared to predicted optimal countershading transitions based on 3D reconstructions of the animal's abdomen, imaged in different lighting environments.Using 3D models under different light, the authors show that its camouflage would have worked best in an open habitat.Paleocolor can help predict paleohabitat.
Sequence-to-sequence neural models have been actively investigated for abstractive summarization.Nevertheless, existing neural abstractive systems frequently generate factually incorrect summaries and are vulnerable to adversarial information, suggesting a crucial lack of semantic understanding.In this paper, we propose a novel semantic-aware neural abstractive summarization model that learns to generate high quality summaries through semantic interpretation over salient content.A novel evaluation scheme with adversarial samples is introduced to measure how well a model identifies off-topic information, where our model yields significantly better performance than the popular pointer-generator summarizer.Human evaluation also confirms that our system summaries are uniformly more informative and faithful as well as less redundant than the seq2seq model.
We propose a semantic-aware neural abstractive summarization model and a novel automatic summarization evaluation scheme that measures how well a model identifies off-topic information from adversarial samples.
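As a hedged illustration of what an adversarial evaluation of this kind could look like in practice, the sketch below mixes off-topic sentences into a source document and measures what fraction of the generated summary can only be traced back to the injected material. This is not the paper's evaluation protocol: the summarizer is passed in as a callable, and the lead-2 baseline, example sentences and token-overlap metric are all illustrative stand-ins.

```python
# Illustrative sketch only: probe a summarizer with adversarial inputs by injecting
# off-topic sentences and scoring how much of the output is drawn from them.
import re

def tokens(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def adversarial_intrusion(summarize, source_sents, offtopic_sents):
    """Fraction of summary tokens that can only have come from the injected sentences."""
    mixed_doc = ". ".join(source_sents + offtopic_sents)   # naive injection at the end
    summary = summarize(mixed_doc)
    summ = tokens(summary)
    on_topic = tokens(" ".join(source_sents))
    off_topic = tokens(" ".join(offtopic_sents))
    intruders = summ & (off_topic - on_topic)              # traceable only to distractors
    return len(intruders) / max(len(summ), 1)

# Stand-in summarizer: the first two "sentences" of the input (a lead-2 baseline).
lead2 = lambda doc: " ".join(doc.split(". ")[:2])

source = ["The model learns a semantic representation of the input",
          "Salient content is selected before generation"]
distractors = ["The football match ended in a draw",
               "Ticket prices rose sharply this season"]
print(adversarial_intrusion(lead2, source, distractors))   # 0.0 for this toy setup
```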
we do not yet fully understand this. There are a number of outstanding questions in the field. Thus, we still have much to learn regarding the fascinating roles of macrophages, particularly in lipid metabolism. Importantly, as our understanding of macrophages increases, both in terms of their common and tissue-specific features, we are increasingly in a position to design new tools to address such specific questions, and as such exciting times lie ahead for macrophage biology. While it is obvious from the literature reviewed here that Mϕs play predominant roles in lipid metabolism both in the steady state and during disease pathogenesis, it is also very clear that we do not yet fully understand this, and as a result many pertinent questions remain to be answered by the field. For example, with the addition of recent data, the role of lipid metabolism in driving Mϕ phenotype is being debated. Thus, the question remains: what role does lipid metabolism play in regulating the phenotype of Mϕs in vivo? Moreover, it will be important to understand how lipid metabolism is regulated in Mϕs. It has been reported that senescent Mϕs downregulate their expression of cholesterol efflux genes. What regulates this? Furthermore, how does this translate in terms of lipid load in macrophages or the tissues in which they reside? In addition, another question that remains unanswered is why the transcriptional profile of certain Mϕs is enriched for lipid metabolism genes compared with others. While for some this is clear, the role of the other Mϕs enriched for this function, such as the KCs, is largely unknown. Are these cells contributing to lipid clearance and recycling similar to their neighbours, the hepatocytes? Moreover, how are these cells then involved in conditions of excess lipid such as NAFLD? Are they solely driving NAFLD pathogenesis by acting as sensors of DAMPs and PAMPs? Or does their role go beyond this into lipid uptake and clearance? More importantly, is this function conserved in human KCs? And can manipulating the KCs in terms of their lipid metabolism function alter and ultimately improve patient outcome? This is something we are actively investigating in the lab. Another key, and perhaps the most crucial, question for the field is that of Mϕ subsets and plasticity. Many studies have reported the presence of M1 Mϕs at one stage of disease and M2 Mϕs at another. However, two things remain unclear with this characterisation. Firstly, the relevance of the use of the M1/M2 nomenclature in vivo: we know that the M1/M2 classification does not hold up in vivo, and as these Mϕs are often defined on the basis of one or two surface markers, it is really unclear what these cells actually are. Secondly, does this reflect truly different subsets of macrophages at different disease stages, or macrophage plasticity? Are distinct subsets of Mϕs recruited at different stages of the disease? Perhaps resident versus recruited Mϕs? Or is it one subset of cells that changes its function along a spectrum based on the signals around it? How plastic are Mϕs? Do they adapt to changes in local lipid level, or do they die and get replaced? With the so-called Mϕ disappearance reaction occurring during most insults, one might consider that resident Mϕs are not actually that plastic and hence, once their environment or niche is altered, they cannot adjust and die, being replaced by monocyte-derived cells; but this remains to be investigated. This also then leads to the question: how plastic are the recruited monocytes? Are they altering their function during disease progression? Or are they again replaced by a more adapted Mϕ when the environment changes?

Distinct macrophage populations throughout the body display highly heterogeneous transcriptional and epigenetic programs.Recent research has highlighted that these profiles enable the different macrophage populations to perform distinct functions as required in their tissue of residence, in addition to the prototypical macrophage functions such as in innate immunity.These ‘extra’ tissue-specific functions have been termed accessory functions.One such putative accessory function is lipid metabolism, with macrophages in the lung and liver in particular being associated with this function.As it is now appreciated that cell metabolism not only provides energy but also greatly influences the phenotype and function of the cell, here we review how lipid metabolism affects macrophage phenotype and function and the specific roles played by macrophages in the pathogenesis of lipid-related diseases.In addition, we highlight the current questions limiting our understanding of the role of macrophages in lipid metabolism.
field trial as promising for future vaccine improvement strategies.We also assessed FliC as an adjuvant.The in-vitro analysis showed that FliC stimulation of bovine TLR5 induced a significant CXCL-8 response in HEK cells, although this was lower than that induced via human TLR5.The addition of FliC to the vaccine formulation reduced antibody titres and survival when compared with Group 1, although this latter effect was just outside the conventional levels of significance.These data suggest that FliC is unlikely to enhance protection against MCF.WA-MCF has a case-fatality ratio greater than 96%.The finding that 15% of the trial cattle had evidence of prior AlHV-1 infection was therefore surprising.Non-fatal infections have been reported in SA-MCF and serological evidence of non-fatal infections was described in the field trial.These findings add further evidence that non-fatal outcomes are a feature of WA-MCF and that the case-fatality ratio could be lower than previously described.The cell biology and pathogenesis of MCF are poorly understood.The fact that four cattle were PCR positive at baseline suggests that, following initial infection, virus was not eliminated from cattle that survived the infection.It is not clear whether the virus became latent, residing in certain body tissues as it does in the carrier host, nor whether it might cause MCF at a later stage.In summary, immunization with atAlHV-1 induces an oro-nasopharyngeal antibody response in FH and SZC and there is evidence that, when combined with Emulsigen®, the vaccine mixture induces a partial protective immunity in SZC.A larger study is required to better quantify this effect.We have shown that direct challenge with the pathogenic AlHV-1 virus is effective at inducing MCF in SZC.We have also provided evidence that the atAlHV-1 + Emulsigen® formulation may be less effective at stimulating a protective immune response in SZC cattle than FH cattle.Furthermore, and in support of the field trial, we have provided evidence that non-fatal AlHV-1 infections are relatively common and we speculate that there could be resistance to fatal MCF in SZC cattle, possibly through genetic background, previous exposure to AlHV-1 or alternative acquisition of a level of inherent immunity.Finally, we demonstrated that FliC is not an appropriate adjuvant for the atAlHV-1 vaccine.
Malignant catarrhal fever (MCF) is a fatal disease of cattle that, in East Africa, follows contact with wildebeest excreting alcelaphine herpesvirus 1 (AlHV-1).Recently an attenuated vaccine (atAlHV-1) was tested under experimental challenge on Friesian-Holstein (FH) cattle and gave a vaccine efficacy (VE) of approximately 90%.However testing under field conditions on an East African breed, the shorthorn zebu cross (SZC), gave a VE of 56% suggesting that FH and SZC cattle may respond differently to the vaccine.To investigate, a challenge trial was carried out using SZC.We report 100% seroconversion in all immunized cattle.The group inoculated with atAlHV-1 + Emulsigen® had significantly higher antibody titres than groups inoculated with FliC, the smallest number of animals that became infected and the fewest fatalities, suggesting this was the most effective combination.A larger study is required to more accurately determine the protective effect of this regime in SZC.There was an apparent inhibition of the antibody response in cattle inoculated with atAlHV-1 + FliC, suggesting FliC might induce an immune suppressive mechanism.The VE in SZC (50–60%) was less than that in FH (80–90%).We speculate that this might be due to increased risk of disease in vaccinated SZC (suggesting that the vaccine may be less effective at stimulating an appropriate immune response in this breed) and/or increased survival in unvaccinated SZC (suggesting that these cattle may have a degree of prior immunity against infection with AlHV-1).
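For readers unfamiliar with how the quoted vaccine efficacy figures are derived, the minimal sketch below applies the standard definition VE = 1 − (attack rate in vaccinated / attack rate in unvaccinated). The group sizes and case counts are hypothetical, chosen only to reproduce a VE of 56%; they are not the trial data.

```python
# Minimal illustration of a vaccine efficacy (VE) calculation such as the 56% or
# ~90% quoted above. All counts below are hypothetical.
def vaccine_efficacy(cases_vacc, n_vacc, cases_unvacc, n_unvacc):
    risk_v = cases_vacc / n_vacc          # attack rate, vaccinated group
    risk_u = cases_unvacc / n_unvacc      # attack rate, unvaccinated group
    return 1.0 - risk_v / risk_u

print(f"VE = {vaccine_efficacy(11, 50, 25, 50):.0%}")  # 11/50 vs 25/50 -> 56%
```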
a priority and should be described fully, where possible, consistent usage of pre-existing outcome measures across studies would be beneficial in order to increase comparability across trials, researchers should specify a primary outcome measure a priori, and participant engagement and fidelity should be clearly reported.Looking forward to the future, considering the marked number of small trials, well-designed definitive trials from different research groups around the world are needed in order to demonstrate that CBT is an empirically validated treatment use with people who have ASDs.To date, there has only been a single definitive trial within this area.Bearing the aforementioned recommendations for future studies in mind, and considering the conclusions from both the current and previous meta-analyses, CBT is at least associated with a small non-significant effect size, and at best, associated with a medium effect size, depending on whether you ask those receiving the treatment, those supporting the treatment, or those delivering the treatment.There are three further comments we would like to add to help in the design of future studies, including the interventions.First, there have been a variety of modelling and pilot studies across different countries, but very few researchers have developed interventions within the spirit of co-production with people with autism and their families.Co-production means working together with those who will receive the intervention when developing and running a clinical trial to ensure that those who are likely to receive the intervention have also genuinely helped design the intervention.While some studies employed this, if used more commonly, such a strategy would lead to improved engagement and outcomes, especially from the point of view of children and adults with autism.Second, many of the reviewed studies focused on delivering group-based interventions for a variety of different problems.While delivering interventions in a group may be more cost effective, this may not be associated with greater effectiveness.The reason for this is that co-morbidity is high amongst people with autism, and within a group there may be participants who have obsessive-compulsive disorder, social phobia, generalised anxiety disorder, depression, or many other psychiatric problems, in addition to the difficulties associated with autism itself.While there are marked similarities, cognitive behavioural therapy for depression is different than cognitive behavioural therapy for obsessive compulsive disorder, and delivering interventions within a group may have prevented therapists form being able to tailor the intervention to address the needs of each individual within the group adequately.Related to this, there are some individuals with ASDs who may be unable or unwilling to access group-based interventions.As such, we recommend that researchers begin to focus more heavily on formulation-driven and trans-diagnostic interventions delivered with individuals, rather than within a group, bearing in mind that there is evidence that individually delivered CBT is associated with stronger effect sizes than group-based CBT for people with intellectual disabilities, another group which tends to have marked co-morbidity.Finally, little to no attention has been paid to therapist competence within this area, including therapist style, integrity, alliance and experience, all of which has been linked to outcomes in a variety of studies involving people without ASDs.Further research is 
needed into these factors within studies involving people with ASDs in order to potentially help improve outcomes.Related to this, little attention has been paid to the accreditation of cognitive behavioural therapists within the literature.While behavioural therapists are certified through the Behaviour Analyst Certification Board®, those offering cognitive behavioural therapy are not certified in a similar manner in many jurisdictions.In some countries, such as the United Kingdom, there are organisations which accredit cognitive behaviour therapists, namely the British Association for Behavioural and Cognitive Psychotherapies, but this does not mean that therapists have appropriate clinical expertise and experience of working with people who have ASDs in order to ensure that they are able to adapt therapy in a way that is likely to be efficacious.Further, while CBT should be adapted to meet the needs of those with ASDs, we still know relatively little about the effectiveness of many of these adaptations, as they have not been investigated using experimental designs to determine whether they lead to substantial improvements in treatment engagement and outcome.While future definitive trials are certainly needed within this area, alongside this, we also need greater experimental work examining the effectiveness of various adaptations to CBT for use with people who have ASDs.
The aims of this study were to undertake a meta-analytic and systematic appraisal of the literature investigating the effectiveness of cognitive behavioural therapy (CBT) when used with individuals who have autistic spectrum disorders (ASDs) for either a) affective disorders, or b) the symptoms of ASDs.Following a systematic search, 48 studies were included.CBT, used for affective disorders, was associated with a non-significant small to medium effect size, g = 0.24, for self-report measures, a significant medium effect size, g = 0.66, for informant-report measures, and a significant medium effect size, g = 0.73, for clinician-report measures.CBT, used as a treatment for symptoms of ASDs, was associated with a small to medium non-significant effect size, g = 0.25, for self-report measures, a significant small to medium effect size, g = 0.48, for informant-report measures, a significant medium effect size, g = 0.65, for clinician-report measures, and a significant small to medium effect size, g = 0.35, for task-based measures.Sensitivity analyses reduced effect size magnitude, with the exception of that based on informant-report measures for the symptoms of ASDs, which increased, g = 0.52.Definitive trials are needed to demonstrate that CBT is an empirically validated treatment for use with people who have ASDs.
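The effect sizes quoted above are Hedges' g values; the short sketch below shows how such a value is computed from group means, standard deviations and sample sizes by applying the small-sample correction to Cohen's d. The demonstration numbers are invented and do not come from any of the reviewed trials.

```python
# Sketch of how an effect size such as g = 0.24 or g = 0.66 is obtained:
# Cohen's d from the pooled SD, then Hedges' small-sample correction.
import math

def hedges_g(m_treat, sd_treat, n_treat, m_ctrl, sd_ctrl, n_ctrl):
    pooled_sd = math.sqrt(((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_treat + n_ctrl - 2))
    d = (m_treat - m_ctrl) / pooled_sd              # Cohen's d
    j = 1 - 3 / (4 * (n_treat + n_ctrl) - 9)        # small-sample correction factor
    return d * j

# Hypothetical post-treatment scores on a measure where higher is better,
# so a positive g favours the CBT group; prints approximately 0.58.
print(round(hedges_g(m_treat=34.0, sd_treat=8.0, n_treat=30,
                     m_ctrl=29.0, sd_ctrl=9.0, n_ctrl=30), 2))
```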
with pulsed low-cost sources.A comparison between performance of PAI systems with pulsed and modulated sources can be found in Ref. .PAI systems with low-cost pulsed sources have been designed in ORPAM, ARPAM, and PAT configurations with various wavelengths, resolutions, and imaging depths and have shown potentials for clinical and pre-clinical applications.For microscopic applications, in order to facilitate in vivo clinical applications, reflection mode PAM systems with high resolution and real time image acquisition are yet to be developed, an achievement that requires further investigations into improved light delivery methods, beam shaping , and scanning techniques.Tomography systems would also benefit from improved SNR, depth of penetration, and imaging speed as well as further clinical evaluations more thoroughly investigating the potentials of low-cost PAT systems for disease diagnosis and monitoring.Contrast agents with strong absorption in the NIR wavelength range and suitable for in vivo imaging can also benefit clinical applications of low-cost PAI system .
Benefitting from advantages of optical and ultrasound imaging, photoacoustic imaging (PAI) has demonstrated potentials in a wide range of medical applications.In order to facilitate clinical applications of PAI and encourage its application in low-resource settings, research on low-cost photoacoustic imaging with inexpensive optical sources has gained attention.Here, we review the advances made in photoacoustic imaging with low-cost sources.
to abundance and increasing returns, competitive turmoil becomes more likely in the coming years.Firms such as Airdine may threaten existing restaurants by circumventing established regulations in the same way as Uber did in the personal transportation sector, thereby creating competitive and institutional turmoil.Something similar might apply for simpler services where firms like TaskRunner may become viable threats within the service sector.The fashion and clothing sector may be affected by the growth of various swapping alternatives, but it is difficult to assess the magnitude of this phenomenon and whether it will become a complement or a substitute.Other initiatives such as ride-sharing may not necessarily have any competitive effects at all, but can potentially generate novel opportunities for both entrant firms and established actors in the transportation sector.Management of emerging technologies is a critical capability and this skill is likely becoming more valuable in those sectors where the sharing economy is growing.This paper has sought to map in what sectors the sharing economy is currently attracting increased attention while also discussing the associated consequences of increased abundance.Drawing on data from the Swedish media landscape to systematically assess ways in which actors in traditional media as well as users in social media perceive that different sectors of the economy are gaining traction, our findings reveal that the sharing economy is currently expanding its scope while also illustrating the distribution of actors within specific sectors.Our findings illustrate how the sharing economy now encompasses novel sectors of the economy not previously associated with the sharing economy in extant literature, including, for example, on-demand services, fashion and clothing, and food delivery.We also observe a long tail of different niches spanning many sectors of the economy.In total, 17 sectors and 47 associated subsectors were identified, together containing 165 distinct sharing-economy actors.In conclusion, our findings suggest that more sectors of society are likely to be characterized by abundance and increasing returns in the coming years due to the emergence of the sharing economy as a discontinuous innovation.We acknowledge one main limitation of our study.The collected data sets contain user-generated content and news articles published in Swedish, which means that this study is limited to the ways in which the sharing economy is perceived in the Swedish media landscape.Therefore, the empirical focus of the data sets imposes constraints upon generalizations from this data to other national contexts.As Western economies become increasingly characterized by abundance rather than scarcity due to discontinuous shifts in technologies related to the spread of the sharing economy, much remains to be learned about how this shift toward abundance is unfolding; we welcome further empirical research on the subject.Concerning the specific case of the sharing economy, a closer examination of whether processes of creative destruction are taking place in any of the sectors identified in this study would be of great interest.Also, we see a general need for knowledge concerning how incumbent firms can proactively turn the sharing economy into a business opportunity rather than a competitive threat.
The sharing economy can be regarded as a discontinuous innovation that creates increased abundance throughout society.Extant literature on the sharing economy has been predominantly concerned with Uber and Airbnb.As little is known about where the sharing economy is gaining momentum beyond transportation and accommodation, the purpose of this paper is to map in what sectors of the economy it is perceived to gain traction.Drawing on data from social and traditional media in Sweden, we identify a long tail of 17 sectors and 47 subsectors in which a total of 165 unique sharing-economy actors operate, including sectors such as on-demand services, fashion and clothing, and food delivery.Our findings therefore point at the expanding scope of the sharing economy and relatedly, we derive a set of implications for firms.
hourly averaged measurements represents the near field contribution of local pollution sources.We also estimate the influence of the uncertainty in the regional CO signal to the overall pollution levels.Both the regional and far field contributions are limited by the lower and upper range of the regional CO signal, respectively, and may therefore be underestimated.Overall, the regional signal is the largest contributor to air pollution with 73% for the suburban and 49% for the urban sensor node under the high pressure conditions studied here.However, the error associated with the estimation of the regional signal contributes as much as 19% to the total pollution levels for the suburban sensor node under high pressure conditions in April.Due to higher total pollution levels, this contribution represents only 13% for the urban node.During low pressure conditions in May this error only contributes to 9% and 8% to the levels for the suburban and urban sensor nodes respectively, again an indicator for the increased similarity in pollutant levels observed between the different measurement locations.It is unlikely for local emission sources to change significantly between April and May for rural and suburban environments.Thus, the observed doubling of both near and far field contributions under low pressure conditions when compared to those under high pressure was used to estimate a 4% accuracy of the assessed individual contributions to total pollution levels.The near field contribution for the urban sensor node decreases from 17% to 13% when changing from the high pressure conditions in April to low pressure in May, which generally lies within the associated error bars.However, the far field contribution decreases from 22% in April to 8% in May.This highlights the effect of local pollutant sources on pollution levels over a longer time period when confined to urban environments and how different meteorological conditions may affect pollution levels.In this paper we have shown that low-cost sensors deployed in a dense network can provide the information required to carry out detailed source attribution.A high spatial and temporal resolution data set of CO concentration measurements, collected over a two month period using 32 electrochemical sensor nodes included in a dense network deployed in Cambridge, UK, has been analysed.A novel and flexible method has been developed to determine sensor baselines, i.e. 
the underlying variation of measurements, which is suitable for application to large data sets. Combining these baselines with high spatial resolution measurements made across the network, we have demonstrated how to separate and quantify levels of those pollutants that accumulate in urban environments and increase long-term pollution levels in these areas. The measured signals can thus be distinguished into three components: local plume events, influenced mainly by traffic; build-up of local events, influenced mainly by traffic queuing and general energy usage; and the regional background, showing the large-scale effect of accumulation in the surrounding areas. One limitation of the technique developed here is that the methodology is based on the assumption that no chemical processing of pollutants occurs between the measurement sites and that pollutant levels are determined by emissions alone. Another source of uncertainty lies in the representativeness of each site, as a degree of subjectivity may be applied when classifying these. In addition, as a result of the filtering process applied to the data for a proportion of the day for each sensor, some uncertainty surrounding source attribution is likely to be introduced for certain sites, in particular those affected by diurnal changes in traffic, and should therefore be treated with caution; however, in subsequent generations of electrochemical sensors such drops in CO data are not observed. Through this analysis, the variation in CO baseline pollution levels within an urban environment can be studied in greater detail, which may, for example, provide valuable information both for informing pollution mitigation measures and for evaluating urban dispersion models in the future. We also note that the technique has wider applicability, for example to low-cost sensor networks of other species and particulates, and indeed could be applied effectively to reference instrument data were these available at high time resolution rather than as the hourly averages which are publicly disseminated.
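To make the idea of this decomposition concrete, the sketch below shows one simple way such a separation could be implemented: a rolling low quantile strips fast plumes from each sensor's series, the lowest baseline across the network stands in for the regional signal, and the residuals give the local build-up and near-field plumes. This is an assumption-laden simplification for illustration only, not the baseline algorithm developed in the paper; the window length, quantile and synthetic data are arbitrary.

```python
# Rough sketch of a plume / local build-up / regional decomposition for a CO
# sensor network, using only relative time scales and the network itself.
import numpy as np
import pandas as pd

def decompose(co, window="2h", q=0.05):
    """co: DataFrame indexed by time, one column per sensor node (e.g. ppb)."""
    # Slowly varying per-sensor baseline: a rolling low quantile removes short plumes.
    baseline = co.rolling(window, min_periods=1).quantile(q)
    regional = baseline.min(axis=1)                  # shared background across the network
    local_buildup = baseline.sub(regional, axis=0)   # slow accumulation near each node
    plumes = co - baseline                           # fast near-field events (e.g. traffic)
    return regional, local_buildup, plumes

# Tiny synthetic example: two nodes sharing a background, one with extra plumes.
t = pd.date_range("2010-04-01", periods=288, freq="5min")
rng = np.random.default_rng(0)
bg = 150 + 30 * np.sin(np.linspace(0, 3, t.size))
co = pd.DataFrame({"suburban": bg + rng.normal(0, 2, t.size),
                   "urban": bg + 40 + 200 * (rng.random(t.size) > 0.97)}, index=t)
regional, buildup, plumes = decompose(co)
print(buildup["urban"].mean().round(1), plumes["urban"].max().round(1))
```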
To carry out detailed source attribution for air quality assessment it is necessary to distinguish pollutant contributions that arise from local emissions from those attributable to non-local or regional emission sources.Frequently this requires the use of complex models and inversion methods, prior knowledge or assumptions regarding the pollution environment.In this paper we demonstrate how high spatial density and fast response measurements from low-cost sensor networks may facilitate this separation.A purely measurement-based approach to extract underlying pollution levels (baselines) from the measurements is presented exploiting the different relative frequencies of local and background pollution variations.This paper shows that if high spatial and temporal coverage of air quality measurements are available, the different contributions to the total pollution levels, namely the regional signal as well as near and far field local sources, can be quantified.The advantage of using high spatial resolution observations, as can be provided by low-cost sensor networks, lies in the fact that no prior assumptions about pollution levels at individual deployment sites are required.The methodology we present here, utilising measurements of carbon monoxide (CO), has wide applicability, including additional gas phase species and measurements obtained using reference networks.While similar studies have been performed, this is the first study using networks at this density, or using low cost sensor networks.
Steel materials have diverse applications worldwide because of their ease of production, availability, low price and good mechanical strength. The main drawback of these materials is corrosion, which leads to economic losses. The diversity of steel applications makes protection from the corrosion process important. The protection of steel substrates from corrosion has therefore been an active research topic for many years. Considering the corrosion problem of steel, investigations have focused on the development of protective layers on the surface of the steel substrate by electrochemical processes. The electroplating technique has been widely applied to the surface treatment of steel substrates to achieve better corrosion resistance. The deposition of metallic layers on steel substrates involves the electrolysis of metals such as Zn, Ni, Cu and Sn, which provide good corrosion protection under aggressive atmospheres. Indeed, chrome coatings provided excellent passivation of the steel surface against the surrounding environment. Chrome passivation, however, has been prohibited because of its toxicity to the environment. Thus, it is essential to develop non-toxic, longer-lived surface coatings for steel protection. Among the various coatings, zinc coating has found particular importance because of its broad range of applications in the automobile industry, construction platforms and marine applications, owing to its low cost and good mechanical properties. The salinity of the marine environment causes deterioration of Zn-coated steel substrates, which shortens the service life of the Zn coating. In recent years, efforts have moved to Zn-composite coatings because of their better corrosion resistance compared with pure Zn coatings. Extensive research on composite materials for Zn-composite coatings has focused on the utilization of metal oxides, carbides, nitrides and polymers. These coatings improved the corrosion resistance of the Zn coating with respect to pure Zn coatings in corrosive atmospheres. Among these, metal oxide nanoparticles have received the most attention owing to their availability and low cost of preparation. Nowadays, doped metal oxides and mixed metal oxides exhibit remarkable physical and chemical properties. For example, a Zn-1% Mn-doped TiO2 composite coating on a steel substrate has been studied by Kumar et al., who obtained better corrosion resistance in comparison with the Zn-composite coating. The corrosion resistance and tribological properties of Zn-Al2O3-CrO3-SiO2 have been reported by Malatji et al.; the observed results signified the enhanced anticorrosive property of Zn-mixed metal oxide composite coatings. A good improvement was also reported for a Zn-TiO2-WO3 composite coating. The present effort focuses on increasing the service life of the Zn coating by reinforcing the Zn matrix with Ni doped SnO2 nanoparticles as a composite additive. Tin and its oxides have many applications in various fields because of their good thermal stability and mechanical properties. SnO2 is an n-type semiconductor metal oxide with a bandgap of 3.6 eV. SnO2 nanoparticles as a composite material in zinc coatings have been investigated by Fayomi et al., who found that the anticorrosive and tribological properties of a Zn-Al-SnO2 composite coating were satisfactorily good compared with those of the Zn-Al alloy coating. To the best of our knowledge, no work has been reported on the incorporation of Ni doped SnO2 nanoparticles as a composite additive in Zn coatings for the corrosion protection of steel. Nickel chloride heptahydrate was supplied by Himedia Laboratories Pvt. Ltd., Mumbai. Tin chloride dihydrate was received from Merck Life Science Pvt. Ltd., Mumbai. Anhydrous citric acid was obtained from Merck Specialties Pvt. Ltd., Mumbai, and Millipore water was used throughout. In a typical synthesis of Ni doped SnO2 nanoparticles, the salt precursors SnCl2·2H2O and NiCl2·7H2O and citric acid as a fuel were taken in a 1:1 ratio and completely dissolved in dilute HNO3 solution to give a combustion mixture. Afterward, the solution was heated on a hotplate with constant stirring until it converted into a gel. The gel was transferred into a quartz crucible and placed in a preheated furnace maintained at 400 °C. Within a few seconds, the precursor gel boiled and ignited. The crucible was then taken out and left to cool for a few minutes at ambient temperature. The product was finely ground in an agate mortar and calcined for 2 h at 500 °C. Scheme 1 illustrates the experimental steps involved in the synthesis of the Ni doped SnO2 nanoparticles. The crystallite size of the Ni doped SnO2 was determined by powder X-ray diffraction analysis. The surface morphology and percentage composition of the prepared products were studied by scanning electron microscopy followed by energy dispersive spectroscopy. The electroplating bath composition and parameters are listed in Table 1. Steel substrates with dimensions of 4 × 4 × 0.1 cm3 were used as cathodes and zinc sheets of the same dimensions were used as anodes. Before the electroplating experiment, the surfaces of the steel plates were cleaned using emery papers and acetone, and the plates were finally rinsed with distilled water before use. The zinc sheets were immersed in 5% HCl each time to activate the surface of the anode material. The bath solution prepared for the Zn-Ni doped SnO2 composite coating was stirred for 10 h to prevent agglomeration of the nanoparticles. Scheme 2 shows the experimental setup used to produce the Zn-Ni doped SnO2 composite coating. The fabricated Zn and Zn-Ni doped SnO2 coatings were subjected to electrochemical corrosion studies, namely Tafel polarization and electrochemical impedance spectroscopy, using a CHI660C potentiostat electrochemical workstation. The EIS studies were executed at the open circuit potential with frequency ranging from 0.1 Hz to
The synthesis was carried out by the combustion method using citric acid as a fuel.The surface characterization and elemental analysis of the coated samples were examined by X-ray diffraction spectroscopy (XRD), scanning electron microscopic images (SEM) followed by energy dispersive spectroscopy (EDAX).The corrosion resistance property of the Zn-Ni doped SnO2 composite coating was studied by Tafel polarization and electrochemical impedance spectroscopy.
10 kHz and an amplitude of 5 mV. The morphology and composition of the deposits were scrutinized by XRD, scanning electron microscopy and energy dispersive spectral investigation. The surface morphology and elemental analysis of the prepared nanoparticles are displayed in Fig. 2. As can be seen, the Ni doped SnO2 particles appeared as agglomerated, spherical, flake-like structures. The elemental composition showed the presence of Ni, Sn and O with their percentage of constituents, and no foreign elements were observed. The XRD patterns of the Zn and Zn-Ni doped SnO2 coatings are represented in Fig. 3. The crystallite size was calculated using the Debye-Scherrer equation; the obtained size for the zinc coating was 33.88 nm and for the Zn-composite coating it was 28.48 nm. The characteristic peaks of the main lattice planes showed the highest intensity in the case of the pure Zn coating and were decreased for the Zn-Ni doped SnO2 composite coating. This finding indicated that the presence of Ni doped SnO2 nanoparticles inhibited the crystal growth, thereby reducing the grain size. The reduction in grain size on the zinc surface leads to a compact surface structure and fewer surface pores. SEM photographs of the pure Zn coating and the Zn-Ni doped SnO2 composite coating are presented in Fig. 4. The pure zinc deposit showed some gaps and micro-holes on its surface, as displayed in Fig. 4. These micro-holes were greatly reduced and nearly absent in the Zn-Ni doped SnO2 composite deposit, which exhibited a fine, compact surface morphology, as shown in Fig. 4. It can be seen that the reduced grain size leads to the formation of a tiny Zn fibre-like compact surface morphology in the case of the Zn-Ni doped SnO2 composite coated steel surface compared with the pure Zn deposit. The energy dispersive spectrum demonstrated in Fig. 5 indicates the presence of Ni doped SnO2 nanoparticles in the Zn-composite matrix. The Zn coating at a Ni doped SnO2 nanoparticle concentration of 1.5 g/L yields good corrosion resistance compared with all other concentrations. Increasing the amount of nanoparticles beyond this caused reduced polarization resistance. This is because agglomeration of nanoparticles at higher concentrations leads to poor adhesion on the surface and slows down the deposition process. Hence, the further addition of nanoparticles to the Zn coating was stopped after the Zn-2 g/L Ni doped SnO2 deposition. The Qdl and Qcoat values of the pure Zn coating were higher than those of the Zn-Ni doped SnO2 composite coating. The presence of Ni doped SnO2 nanoparticles in the Zn composite provided more stability for the coated surface and formed a strong corrosion barrier under a corrosive atmosphere. The lower Rct value for the Zn-Ni doped SnO2 composite coating compared with the pure Zn coating indicated a reduced number of surface-active pores, which are the cause of corrosion reactions. The incorporation of Ni doped SnO2 nanoparticles in the Zn coating fills the surface pores and thereby slows down the corrosion reactions at the interface between the metal surface and the electrolyte in aggressive media. The corrosion resistance was satisfactory for the Zn-1.5 g/L Ni doped SnO2 composite coating. Further increase of the Ni doped SnO2 concentration in the zinc matrix results in a decreased impedance of the deposit. Hence, the 1.5 g/L concentration of the Ni doped SnO2 composite additive has been considered the optimum concentration for a good Zn-composite coating. The polarization resistance is given by the sum of the resistances Rcoat and Rct. The Zn-Ni doped SnO2 composite coating has a higher Rp value, reflecting its greater corrosion resistance compared with the pure zinc coating. Similar results were observed in the Bode plot and Bode phase angle plot displayed in Fig. 9a,b, in which a higher impedance modulus was observed for the Zn-1.5 g/L Ni doped SnO2 composite coating and a lower one for the pure Zn coating. Also, a maximum phase angle was attained for the Zn-Ni doped SnO2 composite coating owing to its more homogeneous surface and good corrosion resistance. The SEM images depicted in Fig. 10 show the corroded surface morphology captured after corrosion studies in 3.65% NaCl solution. The surface of the pure Zn coated specimen was highly deteriorated and some cracks were also observed in Fig. 10, indicating poor corrosion resistance under a corrosive environment. The Zn-1.5 g/L Ni doped SnO2 composite coated surface was only slightly affected by the corrosion reactions, as shown in Fig. 10. Here, less deterioration was seen and no cracks appeared on the surface. The presence of Ni doped SnO2 nanoparticles in the Zn matrix provided a strong corrosion barrier in the corrosive media. In summary, the Ni doped SnO2 nanoparticles were prepared by the combustion method and the Zn-Ni doped SnO2 composite coating was fabricated by an electroplating technique. X-ray diffraction revealed the nano size of the Ni doped SnO2 particles, and the surface morphology of the Ni doped SnO2 showed a spherical nanoflake structure. The EDAX analysis confirmed the percentage composition of the prepared nanoparticles. The Zn-Ni doped SnO2 composite coating exhibited an improved surface texture, and the incorporation of the Ni doped SnO2 particles in the composite coating was confirmed by EDAX analysis. The Tafel and electrochemical impedance studies proved that the presence of the Ni doped SnO2 in the Zn coating increased the corrosion resistance of the Zn deposit compared with the pure Zn deposit.
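As a worked illustration of the Debye-Scherrer estimate mentioned above, the snippet below converts a peak position and full width at half maximum into a crystallite size, D = Kλ/(β cos θ). The peak parameters are illustrative values chosen to give a size of roughly 33 nm; they are not the measured diffraction data from this work.

```python
# Worked example of a Debye-Scherrer crystallite-size estimate (illustrative inputs).
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size in nm from peak position (2-theta) and FWHM, both in degrees."""
    beta = math.radians(fwhm_deg)              # peak broadening in radians
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

print(round(scherrer_size(two_theta_deg=43.2, fwhm_deg=0.26), 1))  # approx. 33 nm
```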
Zinc (Zn)-composite coatings are still in demand as good corrosion barrier coatings to protect steel substrates from corrosion environment.In this article, the Ni doped SnO2 nanoparticles were synthesized and used as a composite additive for Zn-coating.The Zn-Ni doped SnO2 composite coating was produced on mild steel by an electroplating technique.The surface morphology of Zn-Ni doped SnO2 composite before and after corrosion showed a more compact surface structure with respect to the pure Zn-coat.
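For context on the Tafel analysis named above, the sketch below applies the standard Stern-Geary relation to convert assumed Tafel slopes and a polarization resistance into a corrosion current density. This is generic textbook analysis rather than the authors' procedure, and all input values are hypothetical.

```python
# Hedged sketch of standard Tafel/Stern-Geary analysis (not the authors' code).
def corrosion_current_density(beta_a, beta_c, rp):
    """i_corr (A/cm^2) from anodic/cathodic Tafel slopes (V/decade) and Rp (ohm*cm^2)."""
    b = (beta_a * beta_c) / (2.303 * (beta_a + beta_c))   # Stern-Geary coefficient (V)
    return b / rp

# Hypothetical slopes of 120 mV/decade and Rp of 2 kOhm*cm^2:
i_corr = corrosion_current_density(0.12, 0.12, 2000.0)
print(f"i_corr = {i_corr:.2e} A/cm^2")   # about 1.3e-05 A/cm^2 for these inputs
```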
MIT and USD, it could be further simplified and reduced significantly in cost.Future work at MIT will focus on developing a device using only a magnetic limit switch, an infrared motion detector, and a piezo alarm controlled by an Arduino UNO.These changes not only simplify the installation and setup of the device, but also cut down the price and reduce the payback period of the device significantly, while eliminating privacy concerns related to the video camera and the need to record or store any data.With this outlook in mind, we found that the device presented in this study is a useful platform to quantitatively characterize the efficacy of energy-saving behavioral-modification methods, exemplified by its use here to provide the proof that audible feedback works.Future work will compare the results obtained with the device designed in this study to the results obtained with the simplified device described above for verification that the audible alarm system is equally effective without video monitoring.This future work will also guide which methods should be implemented on a larger scale.As shown, an effective way to achieve long-term energy savings is by changing everyday behavioral practices.For laboratories, this includes altering equipment use practices, as well as selection of more energy efficient equipment.Modifying behaviors of lab users usually entails some type of incentivization, as we saw in this work and in previous studies.However, the energy saving methods will not be effective unless the form of feedback is consistent and difficult to ignore with time, such as an alarm.Other lab behaviors and practices that can be modified with the feedback device used in this study include turning off lights when there is no motion for a certain threshold period or limiting the time that a piece of lab bench equipment can be powered on after not being used for a threshold period.The device used in this study has the ability to offer the same impact on other lab behaviors as it did for fume hood use practices here.These results demonstrate the energy-saving impact of real-time audible feedback to alert fume hood users when the sash is open while a hood is not in use.With a payback period of just over a year, this proposed method of reducing wasted energy can be easily implemented at any VAV fume hood to generate a net financial gain.Furthermore, based on the quantitative evidence demonstrated here, a less sophisticated and more cost-effective version based on simpler sensors and processors, and without the need for data logging, is a feasible future step with a clear path to broad adoption and commercialization.Finally, because the unique monitoring characteristics of the device are based on open-source image processing software, this device can be used as a test platform not just for behavioral modification methods at fume hoods, but also for applications in area lighting, water use, waste streams, and beyond.
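The alarm behaviour described above can be summarised as a small piece of decision logic: sound the buzzer only when the sash is open and no motion has been detected for longer than a grace period. The sketch below expresses that logic in Python for clarity; the threshold, function names and polling loop are assumptions rather than the device's actual firmware, which on an Arduino would poll the limit switch and IR sensor directly.

```python
# Logic sketch only (assumed behaviour, not the published firmware).
import time

GRACE_PERIOD_S = 300        # how long an open, unattended sash is tolerated (assumed value)

def alarm_should_sound(sash_open, motion_detected, last_motion_time, now):
    """True when the sash is open and nobody has been detected for GRACE_PERIOD_S."""
    if not sash_open or motion_detected:
        return False
    return (now - last_motion_time) > GRACE_PERIOD_S

def monitor(read_sash, read_motion, set_buzzer, poll_s=1.0):
    """Poll stand-in sensor callables and drive a stand-in buzzer callable."""
    last_motion = time.monotonic()
    while True:
        now = time.monotonic()
        motion = read_motion()
        if motion:
            last_motion = now      # reset the countdown whenever a user is present
        set_buzzer(alarm_should_sound(read_sash(), motion, last_motion, now))
        time.sleep(poll_s)

# Pure-logic check: sash open, no motion seen for 400 s -> alarm.
print(alarm_should_sound(True, False, last_motion_time=0.0, now=400.0))
```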
Fume hoods in laboratories consume the energy equivalent of up to four American households per hood; however, closing a modern hood's sash completely can save up to 75% of that energy.Past efforts have attempted to harness this potential energy reduction by reminding users to close the sash when a hood is not in use.In this work, we developed a device to measure the efficacy of these energy-saving methods.The device records the position of the sash and detects motion to determine whether a user is present, and, when fitted with a piezoelectric buzzer, can audibly alert users to close the sash when not in use.We installed this device in laboratories to quantify the energy and cost savings resulting from real-time audible feedback and found that the alarm reduced wasted energy by 87 to 98%.In addition, the platform demonstrated here can be used to quantitatively test other energy-saving methods that rely on user behavioral change in future work.