text: stringlengths 330–20.7k · summary: stringlengths 3–5.31k
connectivity state of healthy subjects, the possibility of “regression to the mean” should be considered, as our study did not compare the longitudinal effects between CS patients who did and did not receive CAS treatment. Furthermore, lifestyle modification has been shown to be associated with functional connectivity changes. However, lifestyle changes after CAS were not formally recorded in our patients, and this confounding factor should be taken into account regarding the post-CAS functional connectivity changes. The hyper-connectivity ROIs in the SMN network were mainly located in the subcortical nuclei, including the caudate, pallidum, putamen and thalamus. Interestingly, these hyper-connectivity loci are part of the frontal-subcortical circuit, which is involved in aspects of planning, working memory, rule-based learning and the decision threshold in reaction time tasks. These previous results are compatible with our finding that the connectivity strength in the SMN hyper-connectivity spots was correlated with the CVLT memory test and color-naming Stroop test results. As for the connectivity strength of the SAL hyper-connectivity and hypo-connectivity ROIs, they were mostly correlated with the Stroop test results, not the CVLT memory test results. Such findings are consistent with the view that the SAL network responds to behaviorally salient stimuli and helps determine which inputs are more likely to capture attention. In our study, educational attainment was correlated with the strength of hyper-connectivity and hypo-connectivity. Subjects with lower educational attainment had higher hyper-connectivity and worse hypo-connectivity, suggesting that they may have fewer neural resources and that hyper-connectivity and hypo-connectivity are reciprocal dynamic changes. Educational attainment is often regarded as a proxy measure for cognitive reserve, and subjects with higher cognitive reserve can tolerate more brain pathology and bypass the need to recruit compensatory resources. Furthermore, the correlations between educational attainment and connectivity alterations may suggest that the observed hyper-connectivity and hypo-connectivity in CS patients are related to neuronal activity and not simply a blood-flow effect. As for the influence of unilateral CS on functional connectivity, decreased intra- and inter-hemispheric connectivity was noted in the ipsilateral cerebral hemisphere. On the other hand, Lin et al. observed hyper-connectivity of the ipsilateral superior temporal lobule within the frontoparietal network in CS patients and attributed it to less anti-correlated activity rather than compensatory activity. Similar findings on hypo-connectivity were noted in our study, and, furthermore, there was more hyper-connectivity in the contralateral cerebral hemisphere. The discrepancy in the hyper-connectivity findings might be attributed to differences in the sample and seed selections. Our patients had lower educational attainment and lower MMSE scores, and their low cognitive reserve might have created a stronger drive to recruit additional brain regions to cope with the asymmetrical vascular burden. Furthermore, the seeds for extracting time-series correlation coefficients were placed in the contralateral cerebral hemisphere in our study because the connectivity of the spared cerebral hemisphere was the main focus of our analysis, whereas the seeds were placed in the ipsilateral cerebral hemisphere in previous studies. There were limitations in our study. First, structural connectivity was not examined. Diminished mean fractional anisotropy has been reported to be associated with functional hypo-connectivity. How functional hyper-connectivity interplays with structural changes requires further investigation. Secondly, there were correlations of cognitive performance with hyper-connectivity and hypo-connectivity strength. However, the sample size was relatively small, and a larger sample is necessary to replicate the study results. Thirdly, there was temporal attenuation of functional hyper-connectivity and hypo-connectivity after CAS, but we were unable to estimate the influence of connectivity changes on cognitive performance after CAS because the cognitive test results were not assessed at the follow-up visits. Longitudinal studies are required to determine whether such connectivity attenuations are associated with long-term changes in cognitive performance. In conclusion, we found that patients with unilateral CS have lateralized functional connectivity in the SMN and SAL networks. The prolonged and lateralized hyper-connectivity may act as a compensatory factor in neuroplasticity, affecting cognitive performance in patients with unilateral CS. To our knowledge, this is the first study to investigate the compensatory neural adaptations to the influence of CS before and after CAS. This study was carried out under grants from the Ministry of Science and Technology, Taiwan, and the Research Fund of Chang Gung Memorial Hospital.
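As a rough illustration of the seed-based analysis described above (a seed time series correlated with every target ROI, then compared between groups on the Fisher z scale), the sketch below computes Pearson correlations from a BOLD matrix. The array shapes, the seed index and the random data are assumptions for illustration only, not the authors' processing pipeline.

```python
import numpy as np

# Hypothetical inputs: a BOLD time series matrix (time points x ROIs) and a seed ROI index.
rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 90))   # 200 volumes, 90 ROIs (illustrative only)
seed_idx = 10                           # e.g. a seed placed in the contralateral hemisphere

seed_ts = bold[:, seed_idx]

# Pearson correlation of the seed with every ROI, then the Fisher r-to-z transform,
# the usual scale on which connectivity strength is compared between groups.
r = np.array([np.corrcoef(seed_ts, bold[:, j])[0, 1] for j in range(bold.shape[1])])
z = np.arctanh(np.clip(r, -0.999999, 0.999999))

strongest = np.argsort(z)[-2]           # index -1 is the seed itself (r = 1)
print("strongest connection to the seed:", strongest, round(float(z[strongest]), 3))
```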
Although functional hyper-connectivity is one of the physiological over-compensation phenomena in neurological diseases, the literature on the cognitive influence of functional hyper-connectivity in CS patients is limited. We aimed to investigate the longitudinal changes of hyper-connectivity after CAS and its association with cognition in CS patients. Methods: Thirteen patients with unilateral CS and 17 controls without CS were included. Comparisons of functional connectivity (FC) between CS patients and controls in multiple brain networks were performed. Results: In patients before CAS, FC in the cerebral hemispheres ipsilateral and contralateral to CS was mainly decreased and increased, respectively, compared with normal controls. Part of the FC alterations gradually recovered to the normal condition after CAS. Stronger FC abnormality (both hypo- and hyper-connectivity compared with normal controls) was associated with poorer cognitive performance, especially in memory and executive functions. Conclusion: The study demonstrated the lateralization of hyper-connectivity and hypo-connectivity in patients with unilateral CS in contrast to the FC in normal controls. These FC alterations were associated with poor cognitive performance and tended to recover after CAS, implying that hyper-connectivity serves as a compensation for neural challenge.
The agriculture sector uses chemical fertilizers to increase agricultural production to enable feeding the growing food demands of populations. In Iran, chemical fertilizer utilization increased from 2.4 t in 2001 to 3.3 t in 2011, meaning that more than 27,000 t of chemical fertilizers have been used in the agriculture sector annually. In 2010, Iran was ranked 93rd in the world due to improper use of chemical fertilizers, hormones, pesticides and chemical residuals in agricultural products. This study aims to assess the knowledge of and tendency towards organic food use among 400 people randomly selected from two exhibitions held in Tehran, Iran. Forty-six percent of the respondents had knowledge about organic food and were able to describe it correctly, whilst 45% did not; among the healthy food exhibition participants, these figures were 63% and 24%, respectively. Table 1 shows descriptive figures on the tendency towards organic food use; there was no statistically significant difference between the two groups in relation to their tendency towards organic food use. Table 2 shows the ANOVA between independent variables and tendency towards organic food, and Table 3 indicates the regression coefficients model for the independent variables. Tendency towards organic food use showed an inverse relation with knowledge and accessibility, a positive relation with trust, and no relation with price. This is a descriptive-analytical study that utilized data collected through questionnaires adapted from previous studies. The questionnaire contained three sections that obtained data concerning the participants' demographic characteristics, knowledge about organic foods, and tendency towards organic food use. A food safety specialist reviewed the questionnaire and modifications were made accordingly before its administration. It was then further pretested on 50 randomly selected individuals. The analysis of the responses yielded a Cronbach's alpha of 0.70, indicating good reliability of the questionnaire. The study was conducted on randomly selected individuals from two exhibitions held in Tehran. One of the exhibitions was a “fall exhibition” that mainly presented school articles, and the other was on “healthy food” and presented organic food products. We computed the sample size using the formula for simple random sampling with the proportion that provides the maximum sample size (p = 0.5), a 95% confidence level, and a margin of error of 0.05. The sample size was calculated at 384; 10% was added as a contingency for non-response, and the final sample was settled at 400. The data were collected from 200 individuals at each of the two exhibitions, in morning and evening sessions. Different analytical techniques, including the t-test, ANOVA and regression analysis, were carried out using the Statistical Package for the Social Sciences (SPSS) version 17.
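For reference, the sample size calculation described above follows the standard formula for simple random sampling with a maximising proportion. The sketch below reproduces the arithmetic; note that the 10% contingency gives roughly 423, while the study settled on a final sample of 400.

```python
import math

# Cochran's formula for simple random sampling with the proportion that maximises
# the sample size (p = 0.5), a 95% confidence level and a 5% margin of error.
z, p, e = 1.96, 0.5, 0.05
n0 = (z ** 2) * p * (1 - p) / e ** 2        # ≈ 384.16, reported as 384
n_contingency = math.ceil(n0 * 1.10)        # +10% for non-response ≈ 423

print(round(n0), n_contingency)             # the final sample was settled at 400
```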
Improper use of chemical fertilizers and pesticides poses not only threats to environmental safety but also major public health issues globally. The adverse effects of chemical fertilizers and pesticides have forced agricultural scientists to look for safer methods such as organic farming. This study aimed at assessing the knowledge of and tendency towards organic food use among people living in a megacity, Tehran. Data were collected from “fall exhibition” and “healthy food exhibition” participants using a pretested questionnaire. Data were entered, cleaned and analyzed with SPSS version 17. The t-test, ANOVA and regression analysis were carried out, and associations were considered significant at a p-value less than 0.05. A total of 400 respondents participated in the study, making a response rate of 100%. Tendency towards organic food use showed an inverse relation with knowledge and accessibility, a positive relation with trust, marriage and gender, and no relation with price. Building consumer trust, together with the allocation of a special label, recognisable logos and ways to track most of the products sold as organic foods, seems necessary for increasing consumption.
may be easily neglected. NUACI, on the other hand, is a rule-based product derived from Landsat imagery and DMSP-OLS nighttime lights. As a consequence, some impervious surfaces with low nightlight brightness could be incorrectly masked out. Within the urban core of Nanchang, river islands receive special attention because of the seasonality of inundations. To our knowledge, none of the five maps perfectly captures the ground truth of imperviousness in the river islands, indicating the impact of seasonal water occurrence on impervious surface mapping or change detection. Another issue that may cause the difference is the inconsistency of land cover classification schemes. For example, GLC30 has no impervious surface class, so we used the class of artificial surface as an alternative. This land cover type, however, contains non-impervious surfaces such as urban green space. Therefore, a thorough evaluation of our mapping performance requires more established land cover products in which an impervious surface class is included. Due to the long temporal interval, current fine-resolution impervious surface products can capture some, but not all, of the imperviousness dynamics during urbanization. Our approach offers a new possibility to fill this gap. The independent classification of the two baseline years can be replaced by the corresponding layers derived from existing datasets. Then, the stable area masking and continuous change detection can be implemented in sequence. The integration of existing products and our approach brings the following benefits. First, no additional data or manual intervention is required, so product consistency can be maintained. More importantly, by generating more frequent impervious surface extent maps at the annual or even intra-annual scale, we are able to provide more nuanced insights for monitoring impervious surface dynamics that are not represented by estimates of existing products. Although accuracy assessments show good performance of the present approach, it still has limitations and uncertainties. The classification errors of the start and end years can induce uncertainties in the stable area mask. In this study, reference samples for modeling and validation were not strictly ground truth data, but rather extracted from satellite images via manual interpretation. This process contains errors, which may be propagated into the final outputs. The CCD algorithm depends heavily on a high frequency of clear observations. Hence, its application may be limited in places with unfavorable weather conditions or an extreme seasonal imbalance of clear observations. Since a “change target” method focuses only on impervious surfaces, the present approach cannot reveal the cause of change, which is equally important in understanding urban system dynamics. The heterogeneous nature of urban landscapes gives rise to the common presence of mixed pixels that can lead to decreased change detection accuracy due to spectral signature interference from other land cover types. Therefore, further studies may focus on sub-pixel level implementation of monitoring continuous impervious surface dynamics. Accurately monitoring impervious surface dynamics is of great importance far beyond the city limit. In this study, we developed an approach for continuous impervious surface change estimation using spatial-temporal rules and dense time series stacks of Landsat data. Experimental results in Nanchang, China, reveal the effectiveness and efficiency of this approach. Three major conclusions are summarized. First, by applying classification in the start and end years, stable areas characterized by temporally persistent land covers or irrelevant land cover changes can be spatially excluded, saving unnecessary computing workload. Second, based on the temporal irreversibility rule and the CCD algorithm, continuous change detection can be achieved at the per-pixel level by finding the corresponding breaks in time series models built from the Landsat dense time series. Thus, an impervious surface extent map at any given time during the study period can be generated. Finally, compared to traditional impervious surface change monitoring methods, the present approach not only provides convincing and more frequent estimates, but also has the flexibility to be integrated with existing products for further utilization at varying spatial and temporal scales.
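A deliberately simplified sketch of the two-step logic summarized above: mask out pixels that do not transition from non-impervious to impervious between the baseline years, then scan each remaining pixel's time series for the first persistent departure from a harmonic model, with the label kept once found (temporal irreversibility). This is an illustration of the idea only, not the authors' CCD implementation; thresholds, array names and the single-model fit are assumptions.

```python
import numpy as np

def fit_harmonic_residuals(t, y):
    """Least-squares fit of trend plus one annual harmonic: y ~ a0 + a1*t + a2*cos + a3*sin."""
    X = np.column_stack([np.ones_like(t, dtype=float), t,
                         np.cos(2 * np.pi * t / 365.25),
                         np.sin(2 * np.pi * t / 365.25)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef

def first_break(t, y, n_consecutive=4, k=3.0):
    """Index of the first run of n_consecutive residuals exceeding k * RMSE, or None."""
    resid = fit_harmonic_residuals(t, y)
    rmse = np.sqrt(np.mean(resid ** 2)) + 1e-9
    flags = np.abs(resid) > k * rmse
    for i in range(len(flags) - n_consecutive + 1):
        if flags[i:i + n_consecutive].all():
            return i
    return None

# Stable-area mask from the start/end-year classifications (1 = impervious):
# only pixels that go from non-impervious to impervious are examined further.
cls_start = np.array([0, 0, 1])
cls_end   = np.array([0, 1, 1])
candidates = (cls_start == 0) & (cls_end == 1)
print("pixels passed to break detection:", int(candidates.sum()))
```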
Impervious surface dynamics have far-reaching consequences for both the environment and human well-being. Thus, monitoring impervious surface dynamics with high temporal frequency, in a manner that is both accurate and efficient, is highly needed. Here, we propose an approach to capture continuous impervious surface dynamics using spatial-temporal rules and dense time series stacks of Landsat data. First, a stable area mask based on image classification in the start and end years is generated to remove pixels that are persistent or spatially irrelevant. The Continuous Change Detection (CCD) algorithm is then employed to determine the change points when non-impervious cover converts to impervious surface, based on the property of temporal irreversibility. We apply and assess the proposed approach in Nanchang (China), which has been experiencing rapid impervious surface expansion during the past decade. According to the validation results, the overall accuracies of image classification in the start and end years are 97.2% and 96.7%, respectively. Our approach generates convincing results for impervious surface change detection, with an overall accuracy of 85.5% at the annual scale, which is higher than three commonly used approaches from previous studies. The derived impervious surface extent maps exhibit comparable performance with five widely used products. The present approach offers a new perspective for providing timely and accurate impervious surface dynamics with dense temporal frequency and high classification accuracy.
Tables 2 & 3. Differences between the different somaclones for the eight agronomic traits, namely plant height, diameter of bush, number of tillers/clump, number of leaves/clump, leaf length, leaf breadth, weight of 100 leaves and oil content, were evident at the 1% level of significance. Significant differences among the somaclones, and between the somaclones and the donor parent, were also evidenced by Duncan's Multiple Range Test. We also estimated the superiority of these somaclones in the RBD trial by considering the performance of the donor parent as 100. To select these somaclones we gave more attention to the economically important characters such as oil content and oil quality. It was noticed that these 10 somaclones retained a relatively high quantity of oil, ranging between 2.35% and 3.12% in the initial screening against 1.21% in the donor parent, and ranging from 2.33% to 3.22% in the RBD trial against 1.15% in the donor parent. Enhancement in oil content in the somaclones ranged from 102% to 180% compared to the control. On the basis of the oil content of the 10 somaclones, four somaclones were selected as they maintained their superiority over the parent line with > 1.5-fold higher oil content in the RBD trial. Considering oil content, SC2 was also selected as it showed a 146% enhancement in oil content compared to the control plant. The oil content in the rest of the somaclones was also high in contrast to the parent. The percent composition of the two major oil constituents was analyzed for the 10 selected somaclones, which had performed better than the parent plant in terms of oil content. There were significant variations in citronellal and geraniol content among the 10 selected somaclones and also compared to the donor parent. It was observed that these selected somaclones retained oil quality, containing relatively high amounts of citronellal and geraniol. A similar enhancement in geraniol and citronellal content was also reported in somaclones of C. martinii, citronella Java and Jamrosa. SC6 was selected on the basis of its better oil quality, as it contained higher amounts of geraniol and citronellal and also showed superiority in oil content. Somaclones such as SC1, SC2, SC3 and SC10 also maintained their oil quality with respect to the percent composition of citronellal and geraniol. SC7 contained a higher amount of citronellal with lower geraniol and oil content. It was also observed that in some somaclones oil quality was not directly proportional to high oil quantity. Similar observations were also reported in in vitro regenerated plants of Java citronella and Mentha. However, in the present investigation, the majority of the somaclones revealed more favorable variation in oil content and oil quality in the replicated trial, helping to select somaclones with high oil quantity together with oil quality. The 10 superior somaclones of C. winterianus which retained improved oil content and quality in the initial screening and the replicated trial were assessed for their stability during clonal propagation in the field in subsequent years. The oil quality of SC4 decreased from the first clonal generation, and this decrease became dramatic from the third clonal generation, with total geraniol and citronellal content falling below that of the control. Somaclones No. 7 and 8 maintained a more than 1-fold higher oil yield than the control up to the second clonal propagation, but failed to maintain this trait from the third clonal propagation onwards and became comparable to the control. Somaclone lines 5 and 9 also maintained their superiority in oil content compared to the donor parent in successive clonal generations, but their oil quality decreased gradually in successive clonal propagations. Of the 10 selected somaclones, the remaining five superior somaclonal lines, namely SC1, SC2, SC3, SC6 and SC10, were further propagated and showed relative stability for the main selected traits, such as high oil content and high geraniol and citronellal content, up to the 4th clonal propagation of the 2nd year, 2008. These five somaclones were subjected to RAPD analysis using 18 arbitrary primers and the results were compared to the control. Out of the 18 primers used, 10 primers revealed polymorphism, showing distinctly different banding patterns in the five improved somaclones, which were equally prominent in their differences from the control. The sequences of the primers and the specific amplified DNA fragments generated against the primers are presented in Table 8. In general, 1–13 amplified fragments were scored depending on the primer. The number of polymorphic fragments between the mother and the five different somaclones was also calculated. A maximum of 19 polymorphic bands was obtained between the mother and SC3, while the minimum number of polymorphic fragments was obtained between the mother and SC10 using the 10 RAPD primers. The number of polymorphic fragments between somaclones was highest between SC1 and SC3 and lowest between SC3 and SC6. In the present experiment, ten out of eighteen primers gave 3–15 distinct bands per primer, ranging in molecular size from 60 to 5200 base pairs. A maximum of 15 loci were amplified with primer MS10G5, while a minimum of 3 loci were recorded with primer MS10G3, with an average of 7.4 bands per RAPD primer. Out of the 74 amplified bands, 30 were monomorphic and 44 were polymorphic in nature. The variation in banding patterns indicates a high degree of polymorphism between the somaclones and with the mother. The number of polymorphic bands ranged from 2 to 10, with an average of 4.4 polymorphic bands per primer. Primer MS10G10 generated 90% polymorphic bands, while MS10G15 produced 25% polymorphic bands. The average PIC value of the primers was 0.21, ranging from 0.111 to 0.285, and the average marker index was calculated as 0.71, ranging from 0.185 to 1.84. Out of the 44 polymorphic
Extensive somaclonal variation was noticed within these plants for all eight agronomic characters, namely plant height, diameter of bush, number of tillers/clump, number of leaves/clump, leaf length, leaf breadth, weight of 100 leaves and oil content, in relation to the donor parent. Significant variations were also recorded for two major constituents of the essential oil, citronellal and geraniol, in some selected somaclones. Ten somaclones that retained improved oil content and quality in the initial screening and in a replicated trial were further assessed for their stability in the field over four clonal propagations. Out of the ten superior somaclones, only the five superior somaclones (SC1, SC2, SC3, SC6 and SC10) which showed relative stability in both oil content and quality were subjected to random amplified polymorphic DNA (RAPD) analysis. Out of the eighteen primers used, ten primers revealed polymorphism, showing distinctly different banding patterns in the five improved somaclones, which were equally prominent in their differences from the control.
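The polymorphism statistics reported above (polymorphic band counts, PIC, marker index) can be computed from a simple band presence/absence matrix. One common convention for dominant markers such as RAPD uses PIC_i = 2 f_i (1 − f_i) for band frequency f_i and a marker index of mean PIC multiplied by the number of polymorphic bands per primer; the exact convention used by the authors is not stated, so the sketch below, with a hypothetical band matrix, is only illustrative.

```python
import numpy as np

# Hypothetical 0/1 band matrix for one primer: rows = genotypes (mother + five somaclones),
# columns = scored bands. Values are illustrative only.
bands = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1],
    [1, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 1],
    [1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 1, 1],
])

freq = bands.mean(axis=0)                        # band frequency per locus
polymorphic = (freq > 0) & (freq < 1)            # monomorphic loci are excluded
pic = 2 * freq * (1 - freq)                      # PIC for dominant markers (one common convention)

mean_pic = float(pic[polymorphic].mean())
marker_index = mean_pic * int(polymorphic.sum())  # mean PIC x polymorphic bands for this primer

print("polymorphic bands:", int(polymorphic.sum()), "of", bands.shape[1])
print("mean PIC:", round(mean_pic, 3), " marker index:", round(marker_index, 3))
```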
concentrations during the night hours. This work analysed the data available from the fixed-site monitoring stations to assess the effect of the odd-even trial on the PM concentrations. While a clear picture emerged from the analysis of odd-even trial efforts, further studies are recommended to target background measurements under different meteorological conditions and seasons. The monitoring of on-road traffic volume before, during, and after the odd-even trial periods through control studies in future could also help in determining the actual reduction in different types of on-road vehicles and their emissions. Furthermore, mobile monitoring around such interventions could provide better spatial resolution of concentrations and identify pollution hotspots within the city to prioritise specific emission control strategies. From the use of routine air quality data, it has not been possible to differentiate clearly the changes in air quality attributable to changes in road traffic from those due to other sources of emissions. It is recommended that before such a trial is next implemented, measures are put in place to enable either full source apportionment of the particulate matter, or as a minimum, measurement of chemical tracers for road traffic emissions. Moreover, very careful planning of the study, including simultaneous data for air pollution, meteorology and vehicle count during such periods of interventions, will be needed if it is to differentiate between the effects of changes in emissions and meteorology upon the measured concentrations.
The odd-even car trial scheme, which reduced car traffic between 08:00 and 20:00 h daily, was applied from 1 to 15 January 2016 (winter scheme, WS) and from 15 to 30 April 2016 (summer scheme, SS). PM concentrations during the trials can be interpreted as either reduced or increased, depending on the periods used for comparison. For example, hourly average net PM2.5 and PM10 (after subtracting the baseline concentrations) were reduced by up to 74% during the majority (after 1100 h) of trial hours compared with the corresponding hours during the previous year. Conversely, daily average PM2.5 and PM10 were up to 3 times higher during the trial periods when compared with the pre-trial days. A careful analysis of the data shows that the trials generated cleaner air for certain hours of the day, but the persistence of overnight emissions from heavy goods vehicles into the morning odd-even hours (0800–1100 h) probably made them ineffective at this time. Any further trial will need to be planned very carefully if an effect due to traffic alone is to be differentiated from the larger effect caused by changes in meteorology, especially wind direction.
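The "net" comparison described above amounts to subtracting a background (baseline) concentration before comparing trial hours with reference hours. The sketch below shows that arithmetic; all concentration values are assumptions for illustration and are not taken from the monitoring data.

```python
import numpy as np

# Hypothetical hourly PM2.5 values (ug/m3); all values are assumed for illustration.
observed_trial     = np.array([182.0, 150.0, 141.0])   # odd-even hours during the trial
baseline_trial     = np.array([120.0, 100.0,  95.0])   # background/baseline during the trial
observed_reference = np.array([260.0, 240.0, 255.0])   # same hours in the comparison period
baseline_reference = np.array([ 90.0,  85.0,  88.0])

net_trial = observed_trial - baseline_trial             # local (largely traffic-related) contribution
net_reference = observed_reference - baseline_reference

pct_change = 100.0 * (net_trial.mean() - net_reference.mean()) / net_reference.mean()
print(f"change in net PM2.5 versus the reference hours: {pct_change:.1f}%")
```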
When a liquid volume is subjected to a high-intensity ultrasonic field close to the entry of narrow spaces such as capillaries, canals, pores and voids, an abnormally high rise of the liquid is observed inside these narrow channels. This phenomenon is known as the sono-capillary or ultrasonic capillary effect (UCE), and research on it was first initiated in the former Soviet Union in the early 1960s and 1970s. More recently, the UCE has attracted the attention of researchers with regard to liquid filtration and impregnation. Several hypotheses regarding the driving mechanism of the UCE have been developed. At the same time, a robust theory explaining the driving mechanism of the UCE has not yet been established, as only a few real-time observations of the UCE are available to provide coherent validation of the existing theories. Specifically, experimental studies on water showed that extreme dynamics of the acoustic cavitation bubbles were observed at the entry to capillary tubes, suggesting that cavitation generation is a requirement of the sono-capillary phenomenon, assisting the penetration of the surrounding liquid into capillary micro-channels. Dezhkunov et al. showed that in the absence of cavitation no rise of the liquid in a capillary was recorded. As the amplitude of vibrations increased and a cavitation cloud appeared at the capillary channel inlet, the liquid rose inside the capillary, implying that the cavitation activity played a crucial role in the UCE. Additionally, Sankin et al. suggested that the sono-capillary effect might be explained, at least partially, by a counter pressure arising from the interaction of the bubbly cloud and the capillary, although they stated that the point of application of the counteracting force influencing the liquid flow in the capillary had yet to be found. In line with these research findings was the recent work of Tamura et al., who suggested that the cavitation bubbles formed at the open end of a capillary tube contribute to the pressure accumulation within the tube, leading to the raised height of the liquid. In contrast, Rozina et al. proposed that the specific cavitation phenomena were not the decisive factor for the formation of the liquid flow penetrating micro-capillary channels, and also reported that the process of filling of dead-end capillaries with a liquid in an ultrasonic field was affected mainly by the dissolution of gas inside the capillary rather than by cavitation. The role of cavitation and cavitation bubble dynamics in the UCE phenomenon therefore remains an open subject for further investigation. Despite the continuing debate on the mechanisms of the sono-capillary effect, the application of the UCE is extremely important for industry, as it underlies the intensification of many important technological and chemical processes connected with oil production, bio-sludge and biomass processing, membrane filtration, ceramic filters, and micro-biology, as well as advances in the metal casting industry and the production of high quality castings. The effects induced by ultrasound in liquid media and their influence on industrial metal processing have been studied for many years under the umbrella of acoustic cavitation. Although these effects are widely used, there is considerable room for technological improvement, which requires a better understanding of the mechanisms involved. In addition to filtration, wetting and impregnation applications, the forced filling of small-scale defects formed during solidification, e.g. grooves, voids and crevices, which could eventually lead to crack formation and propagation, can potentially reinforce the as-cast structure and improve the structural integrity of the as-cast metal. The experimental studies of the UCE in liquid metals have been hindered until recently by obvious limitations such as the high temperatures, opaqueness and chemical activity of the melts. Nowadays, X-ray imaging technology, available through third generation synchrotron radiation sources, is extensively applied in in situ and real-time investigations of liquid metals and their solidification processes. In the current study, the direct observation of the penetration of a liquid metal into a pre-existing groove during the ultrasonic processing of a liquid Al–10 wt% Cu alloy, enabled by the Diamond Light Source, the national synchrotron facility of the UK, is used for testing some of the theoretical approaches behind the UCE. To the best of our knowledge, this is the first time that the filling of a capillary with liquid metal under cavitation conditions has been observed. Our results showed that a possible mechanism responsible for the UCE is the collapsing activity of cavitation bubbles in the vicinity of the micro-capillary inlet. The results are in good agreement with the hypothesis regarding the cavitation nature of the UCE as reported by Dezhkunov and Leighton, who analysed this phenomenon in water. Through this novel research we are now able to help answer the long-standing questions: can ultrasound assist the penetration of capillary channels in molten metals, what are the governing mechanisms, and are these mechanisms related to the local activity of the cavitation bubbles? The discovery of the almost instantaneous re-filling of a micro-capillary channel with the metallic melt is highly important, as it appears to confirm the existence of the sono-capillary effect in technologically important liquids other than water, such as metallic alloys with substantially higher surface tension and density. In situ synchrotron X-ray radiography studies of a molten Al–10 wt% Cu alloy subjected to ultrasonic processing were carried out using the Diamond-Manchester Branchline at the Diamond Light Source. A bespoke ultrasonic test rig, consisting of an ultrasonic processor coupled with a Ti sonotrode and a PID-controlled resistance furnace, was used. A split furnace with a small X-ray translucent window to allow the X-rays to pass with minimal attenuation was vertically
An in situ synchrotron radiographic study of a molten Al–10 wt% Cu alloy under the influence of an external ultrasonic field was carried out using pink-beam X-ray imaging on the Diamond-Manchester Branchline at the Diamond Light Source in the UK. A bespoke test rig was used, consisting of an acoustic transducer with a titanium sonotrode coupled with a PID-controlled resistance furnace. This allowed quantification of cavitation bubble formation and collapse, and also provided evidence of the previously hypothesised ultrasonic capillary effect (UCE), providing the first direct observations of this phenomenon in a molten metallic alloy. The observation of the almost instantaneous re-filling of a micro-capillary channel with the metallic melt supports the hypothesised sono-capillary effect in technologically important liquids other than water, such as metallic alloys with substantially higher surface tension and density.
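For context on why the observed rise is described as "abnormal", classical static capillary rise is bounded by Jurin's law, h = 2σ cosθ / (ρ g r). The sketch below compares water with a nominal Al-Cu melt for a micro-channel; the property values and contact angles are rough, assumed figures for illustration, not measurements from the study.

```python
import math

g = 9.81  # m/s^2

def jurin_height(sigma, theta_deg, rho, radius):
    """Static capillary rise h = 2*sigma*cos(theta) / (rho*g*r)."""
    return 2.0 * sigma * math.cos(math.radians(theta_deg)) / (rho * g * radius)

r = 50e-6  # 50 um channel radius, representative of a micro-capillary groove (assumed)

# Rough, assumed property values (order of magnitude only).
h_water = jurin_height(sigma=0.072, theta_deg=20.0, rho=1000.0, radius=r)
h_melt  = jurin_height(sigma=0.9,   theta_deg=60.0, rho=2600.0, radius=r)

print(f"water:      {h_water * 100:.0f} cm")
print(f"Al-Cu melt: {h_melt * 100:.0f} cm")
```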
entrained oxides are forced to enter the groove, enhancing the increase in the number of oxides within the groove over a shorter time. The calculated pressures in Table 3 are in good agreement with the calculations in Table 2, where the pressure required for filling the groove is in the same range as the calculated hydrodynamic impact pressure delivered by the micro-jet at the entrance of the groove; hence transport of liquid mass and oxide particles inside the groove will occur. Finally, we check the hypothesis about oxide particle accumulation. Above a certain particle size rs (determined by the Stokes number), particles no longer move with the streamlines. Inertia plus the jet momentum tend to straighten the trajectories of these particles at the points where the stream changes its direction. As a result, the probability of a particle being retained by the tube significantly increases. This mechanism is called inertial impaction and is well explained in the literature. For the critical value of Stk = 0.08, the probability of precipitation is Ei = 0.33. Hence, only one third of the inclusions with critical and larger sizes can be accumulated by inertial impaction inside the groove. Fig. 4 gives the probability of particle capture in the tube for various oxide particle sizes and Stokes numbers. It is evident that for Al oxide particles with a typical diameter between 0.5 and 1.5 μm, and for tube-filling times between 1 and 50 μs, which are tied to the collapse of bubbles, the precipitation percentage of particles inside the tube varies from 13.5% to 99.2%. In particular, for smaller particles of 0.5 μm in diameter, the precipitation percentage inside the groove varies from 13.5% to 93.5%. For particles with a size of 1 μm the precipitation probability is higher, from 48.7% up to 98.3%, while for even larger particles of 1.5 μm in size, the precipitation percentage varies from 70.4% up to 99.2%. Consequently, these results, in combination with the number of collapse events in the frame-rate window of 78 ms and the volume of melt and oxides carried by the jet into the tube, clearly show that the precipitation of Al oxides inside a capillary is a process tied to the collapse of bubbles, and particularly to the distance of the incoming high-speed jet from the entrance of the groove, the speed of the jet, the size of the jet and the duration of the filling. This mechanism is also related to ultrasonic melt filtration. The results are in very good agreement with the studies in which the physical mechanism of particle delivery inside a capillary tube due to the action of a high-speed micro-jet was experimentally shown. Very few studies of the ultrasonic capillary effect in liquid metals have been performed due to the complex nature of the physical processes involved. This lack of real-time observations, due to the challenges of high-temperature experimentation and the opaqueness of the melts, has restricted the validation of existing UCE theories. In this study, the re-filling of a pre-existing oxide-film tube-like groove under the action of ultrasound upon an Al–Cu melt was monitored. Analytical solutions for the hydrodynamic impact pressure exerted by the cavitation implosion jet and the hydrodynamic pressures required to fill the studied groove have shown that the mechanism responsible for the re-filling of the groove with the melt is the collapse of cavitation bubbles near the groove inlet. This is important evidence that clarifies for the first time the existence of the UCE in liquid metals. Additionally, during the re-filling mechanism, a secondary effect is also revealed by the changing contrast of the X-ray synchrotron images. It was shown that the concentration of oxides captured in the groove during melt flow through it is likely to increase. The precipitation of Al oxides inside the groove is a process related to the collapse of bubbles and particularly to the distance of the incoming high-speed melt jet from the inlet of the tube-like groove, the speed of the jet, the size of the jet and the duration of the filling. The observed phenomenon is related to the ultrasound-assisted filtration of oxide inclusions from the melt.
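The inertial-impaction argument above hinges on the particle Stokes number; in its standard form Stk = ρ_p d_p² U / (18 μ D). The sketch below evaluates this form for sub-micron to micron-sized oxide particles; the oxide density, melt viscosity, channel size and jet speed are assumptions for illustration, not the paper's tabulated values.

```python
def stokes_number(rho_p, d_p, velocity, mu_fluid, d_channel):
    """Standard inertial-impaction form: Stk = rho_p * d_p**2 * U / (18 * mu * D)."""
    return rho_p * d_p ** 2 * velocity / (18.0 * mu_fluid * d_channel)

# All values below are assumptions for illustration only.
rho_oxide = 3950.0      # kg/m^3, alumina
mu_melt   = 1.3e-3      # Pa*s, liquid Al-Cu alloy
d_channel = 100e-6      # m, groove opening
jet_speed = 50.0        # m/s, cavitation micro-jet

for d_um in (0.5, 1.0, 1.5):
    stk = stokes_number(rho_oxide, d_um * 1e-6, jet_speed, mu_melt, d_channel)
    print(f"d = {d_um} um -> Stk = {stk:.3f}")
```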
This was achieved by quantifying the re-filling of a pre-existing groove in the shape of a tube (which acted as a micro-capillary channel) formed by the oxide envelope of the liquid sample. Analytical solutions of the flow suggest that the filling process, which took place in very small timescales, was related to micro-jetting from the collapsing cavitation bubbles.
a multicenter controlled trial with rituximab are awaited in spring 2018. Few other treatment modalities targeting autoimmunity were evaluated in clinical trials in ME/CFS. High-dose intravenous IgG therapy is efficacious in autoantibody-mediated diseases. Several intravenous IgG studies were performed in ME/CFS during the 1980s, with two randomized controlled trials having a positive and two a negative outcome. Preliminary data from an ongoing trial in Norway with cyclophosphamide suggest therapeutic efficacy of this broadly immunosuppressive drug. Immunoadsorption is an apheresis technique in which IgG is specifically removed from plasma, resulting in clinical improvement in various types of autoimmune disease. We performed a pilot trial in 10 patients with ME/CFS and observed first evidence for efficacy. There is compelling evidence that autoimmune mechanisms play a role in ME/CFS. However, clinical heterogeneity in disease onset, the presence of immune-associated symptoms, and divergent immunological alterations point to the existence of subgroups of ME/CFS patients with possibly different pathomechanisms. Therefore, it is important to identify clinically useful diagnostic markers to select patients with autoimmune-mediated disease for clinical trials. The search for autoantibodies is of great importance, enabling the development of potential biomarkers for diagnosis and providing a rationale for therapeutic interventions. Encouraging results from first clinical trials warrant larger studies with rituximab and other strategies targeting autoantibodies. This review is based upon work from the European Network on Myalgic Encephalomyelitis/Chronic Fatigue Syndrome as part of COST Action CA15111 supported by the EU Framework Program Horizon 2020. Website: http://www.cost.eu/COST_Actions/ca/CA15111. FS and CS were responsible for the first draft of the protocol, which was critically reviewed, further developed and approved by all authors. JB reports personal fees from ALBAJUNA THERAPEUTICS, S.L., outside the submitted work; CS has received grant support for clinical trials and research from Fresenius, Shire, Lost Voices, SolveME, MERUK, IBB, and speaking honoraria from Octapharma and Shire. FS, EC, JCM, SS and MM have no conflict of interest to declare.
Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS) is a frequent and severe chronic disease that drastically impairs quality of life. The underlying pathomechanism is not yet completely understood, but there is convincing evidence that, in at least a subset of patients, ME/CFS has an autoimmune etiology. In this review, we discuss current autoimmune aspects of ME/CFS. Immune dysregulation in ME/CFS has been frequently described, including changes in cytokine profiles and immunoglobulin levels, T- and B-cell phenotype, and a decrease in natural killer cell cytotoxicity. Moreover, autoantibodies against various antigens, including neurotransmitter receptors, have recently been identified in ME/CFS individuals by several groups. Consistently, clinical trials from Norway have shown that B-cell depletion with rituximab results in clinical benefit in about half of ME/CFS patients. Furthermore, recent studies have provided evidence for severe metabolic disturbances presumably mediated by serum autoantibodies in ME/CFS. Therefore, further efforts are required to delineate the role of autoantibodies in the onset and pathomechanisms of ME/CFS in order to better understand and properly treat this disease.
to the data mining framework remaining separate from the rest of the system. In this extension to the model, each addition, change, or deletion of a rule in the evidence base is tracked, whether the rule has been manually modified or is the result of an automated data mining algorithm. The prototype of template-based provenance was introduced in its early form in earlier work and successfully demonstrated the feasibility of using provenance templates to support a large, heterogeneous software infrastructure, albeit missing the theoretical foundation and full architecture presented here. A separate effort at Southampton is currently looking into lower-level provenance templates that abstract individual PROV variables rather than larger graph fragments. The instantiation then proceeds by performing cross-products of all variable value spaces, as restricted by constraints. The concept of substructures in provenance graphs has also been researched in the context of SPARQL queries for RDF provenance repositories and generic graph fragment queries that use a graph motif with a set of constraints on that motif. Related efforts have also been made in the area of graph summarisation, which uses the graph structure as a basis for summarising and compressing relational knowledge by detecting patterns and compressing structural knowledge encoded within relational graphs, including repetitive or sequential structures. Finally, a body of work exists on abstracting provenance graphs for security purposes. The ZOOM system uses the concept of user views to abstract nodes that are not of interest to the consumer. The TACLP, ProvAbs, and ProPub approaches use security policy definitions to determine which components of the provenance graph to include and which to abstract. A defining characteristic of the Learning Health System is the trust that must be placed in every aspect of the system. The participants in the LHS must be able to gain insight into its workings if they are to put faith in its actions and entrust it with their data. Furthermore, the system must possess introspective qualities in order to be able to learn about itself and continuously improve, engendering a Virtuous Cycle of Health Improvement. This implies capabilities for data and knowledge sharing between the research and clinical actors, under clear and automatically enforced privacy and security rules. A semantically clear and unambiguous provenance trace provides a mechanism for such sharing. This paper has looked into the reproducibility challenges facing decision support systems, guided by the LHS paradigm, and proposed a solution based on data provenance technologies and abstract provenance template constructs. The semantic complexity of the medical domain was modelled using ontologies annotated onto provenance graphs, and the software architecture used the templates to facilitate provenance capture from the decision support tool. The work was originally prototyped in the diagnostic decision support system developed within the TRANSFoRm project, where it was used to capture data from over four hundred simulated diagnostic patient encounters, and key analytical queries on those data were demonstrated. Ultimately, this work contributes to the efforts in integrating trust into computerised decision support systems, enabling transparency and auditability by creating a basis for implementing validation mechanisms. The complexity of decision support systems offers numerous opportunities for problems to arise, from the quality of data capture and the accuracy of EHR interactions via usability issues to algorithmic errors in rule design. Thus, their increased use puts more and more focus on techniques for ensuring the correctness of the tasks involved. Data provenance offers a mechanism to achieve this, and through the use of provenance templates, we have shown how such an infrastructure can be implemented in the context of decision support systems. In the era of Big Data, deep learning systems such as IBM Watson, and other technologies that often rely on black-box analytical environments, it is of paramount importance to support transparency in computerised systems whose actions may have direct consequences on human lives. A particularly dangerous assumption of some Big Data evangelists is that with sufficiently large data, correlation can replace causality in our analytical models. While this may be perfectly fine for market analysis questions such as investigating customer churn or supermarket shopping baskets, medical research in particular depends on a full understanding of finer points such as bias, data quality, and statistical significance to derive its conclusions. Thankfully, there is an increasing understanding of the fact that we require not just intelligent machines but intelligible machines. Rather than avoiding Big Data technologies, we need to understand which aspects of them are well suited to medical research, and then build research software frameworks that support transparency, auditability, replicability and reproducibility. The version of the DSS provenance infrastructure employed in TRANSFoRm is currently being updated for use by further projects, such as the DSS for the prevention of secondary stroke in patients in South London. The tool, developed as part of the CLAHRC South London programme, has been designed by the team at King's College London with key stakeholders including clinicians, patients, and commissioners, and the provenance module will provide auditability and traceability of decisions made with the tool. With the recent changes in scalability support in Neo4J, we intend to do away with the relational/RDF store and use Neo4J as our main database storage, providing a SPARQL query front end for backwards compatibility. Furthermore, we plan to add more templates that cover non-diagnostic decision support scenarios and implement some more advanced PROV concepts such as hyperedges representing relations between more than two nodes. We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.
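A minimal, library-free sketch of the template idea discussed above: a graph fragment whose terms include named variables is instantiated by substituting concrete identifiers supplied by the client tool. The variable names, identifiers and triple structure here are illustrative assumptions, not the actual template grammar or service interface used in the paper.

```python
# A provenance template: PROV-like triples whose "var:" terms are placeholders.
TEMPLATE = [
    ("var:recommendation", "prov:wasGeneratedBy", "var:decisionActivity"),
    ("var:decisionActivity", "prov:used", "var:patientCue"),
    ("var:decisionActivity", "prov:wasAssociatedWith", "var:dssRule"),
]

def instantiate(template, bindings):
    """Replace every var:* term with the concrete identifier supplied by the client tool."""
    def resolve(term):
        return bindings[term] if term.startswith("var:") else term
    return [(resolve(s), p, resolve(o)) for s, p, o in template]

bindings = {
    "var:recommendation": "ex:dx-suggestion-042",
    "var:decisionActivity": "ex:diagnostic-session-17",
    "var:patientCue": "ex:cue-persistent-cough",
    "var:dssRule": "ex:rule-resp-031",
}

for triple in instantiate(TEMPLATE, bindings):
    print(triple)
```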
Decision support systems are used as a method of promoting consistent, guideline-based diagnosis and supporting clinical reasoning at the point of care. However, despite the availability of numerous commercial products, the wider acceptance of these systems has been hampered by concerns about diagnostic performance and a perceived lack of transparency in the process of generating clinical recommendations. This resonates with the Learning Health System paradigm, which promotes data-driven medicine relying on routine data capture and transformation, and which also stresses the need for trust in an evidence-based system. Data provenance is a way of automatically capturing the trace of a research task and its resulting data, thereby facilitating trust and the principles of reproducible research. While computational domains have started to embrace this technology through provenance-enabled execution middlewares, traditionally non-computational disciplines, such as medical research, that do not rely on a single software platform are still struggling with its adoption. In order to address these issues, we introduce provenance templates – abstract provenance fragments representing meaningful domain actions. Templates can be used to generate a model-driven service interface for domain software tools to routinely capture the provenance of their data and tasks. This paper specifies the requirements for a Decision Support tool based on the Learning Health System, introduces the theoretical model for provenance templates and demonstrates the resulting architecture. Our methods were tested and validated on the provenance infrastructure for a Diagnostic Decision Support System that was developed as part of the EU FP7 TRANSFoRm project.
the ζ-potential of heat-treated powders can be discussed more thoroughly. Fig. 11 presents the ζ-potential of powders heat-treated in argon at 450 °C for 60 min alongside the as-received data for comparison. The purpose of this was to break down any residual polymers on the surface and reveal the ζ-potential of an uncontaminated B4C surface. Looking at Fig. 11, the heat-treated powders have their ζ-potential shifted to more negative values across all pH values. This puts forward evidence that a charged surface species of cationic nature is likely present on the powders at varying degrees of concentration. It was also observed that the natural pH of these slurries was between 3.5 and 4.5, making them considerably less acidic than their as-received counterparts. It is proposed that increasing levels of contamination are present from 4895 through to 4969, to the point that the maximum obtainable basic ζ-potential is restricted. Heat-treating the powder enables the negatively charged surface of B4C to stabilise the suspension electrostatically above pH 4, and the IEP of all the powders to occur below pH 2.5. This results in the powders performing consistently across batches and away from the IEP without the need for pH adjustment. Electroacoustic spectroscopy has been used to determine the ζ-potential of high solids loading B4C suspensions across the pH range. The viscosity of the suspensions at each pH was shown to correlate with the ζ-potential magnitude, as expected, with the highest viscosity being observed at the IEP in each case. These observations provide confidence in the validity of electroacoustic spectroscopy for determining the ζ-potential of B4C particles in water. The work has also shown that B4C powder can vary quite significantly from batch to batch once in suspension, despite nominally being of the same specification. Heat treatment can normalise powders in terms of their ζ-potential, indicating that small quantities of contamination are likely present on as-received powders. Electroacoustic characterisation is a useful tool for providing information on B4C powder and may be of particular use to manufacturers looking to screen powders. It was also found that dispersant must be added in concentrations greater than that at which the maximum ζ-potential is observed in order to obtain the lowest viscosity. This is put down to the steric hindrance effect of the dispersants still having a role to play once electrostatic repulsion has been maximised.
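The IEP values discussed above correspond to the pH at which the ζ-potential crosses zero. The sketch below locates that crossing by linear interpolation on a ζ-potential versus pH curve; the data points are assumed values for illustration, not the measured curves from Fig. 11.

```python
import numpy as np

# Illustrative zeta-potential (mV) versus pH for one powder; values are assumed.
ph   = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
zeta = np.array([18.0, 12.0, 4.0, -3.0, -11.0, -20.0, -27.0, -33.0])

# The isoelectric point is where the zeta-potential crosses zero; interpolate linearly.
i = np.where(np.diff(np.sign(zeta)) != 0)[0][0]
iep = ph[i] - zeta[i] * (ph[i + 1] - ph[i]) / (zeta[i + 1] - zeta[i])
print(f"estimated IEP at pH {iep:.2f}")
```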
The zeta (ζ) potential of moderately concentrated (15 vol%) boron carbide (B4C) suspensions was characterised using electroacoustic spectroscopy. The technique was validated for this application by correlating the ζ-potential with the suspension viscosity (at 30 vol%) across a range of pH values. Zeta potential has been shown to be effective in determining differences between B4C powders reported to be nominally of the same specification in terms of particle size distribution and X-ray diffraction data. The isoelectric points (IEPs) for three different as-received B4C powders were found to be 4, 7 and less than 2.5. The study showed that differences in ζ-potential across the powders can be minimised via heat treatment, which produced suspensions all with an IEP below 2.5. The study also established the effect of an anionic and a cationic dispersant on ζ-potential and rheology, demonstrating that an excess of dispersant, from a ζ-potential perspective, was required to obtain the lowest viscosity. The study concluded that as-received B4C powders most likely contain contaminants of a cationic nature and that electroacoustic spectroscopy is a useful tool in determining their behaviour in aqueous suspensions.
critical metals were generated from publicly available data by a newly developed network metrics methodology. Network metrics were assessed for these platforms and provided information about product complexity, the number of producers, the average and maximum distance of a product platform to all other supply chain actors, and the level of challenge related to securing materials from potentially unreliable sourcing countries. In addition, this methodology can highlight supply chain actors that may act as potential 'hot spots' or 'gatekeepers'. In doing so, the proposed metrics can provide information that would not be easily obtainable by a simple visual inspection of the supply chain. Supply Chain Network Analysis is also shown to be effective in providing insights into potential supply constraints and bottlenecks for supply chains where the data structure illuminates industrial capabilities at a national level. These network metrics could build upon resource criticality assessments, which would provide important information relating to substitutability potentials, environmental implications of production, and limitations of resource availability. Finally, a comprehensive assessment providing a measure of which materials are of more concern than others should, in our view, also incorporate aspects relating to anticipated future metal supply and demand using various scenario storylines. We can thus imagine a "Composite Risk Methodology" for metal supply chains that would consist of Supply Chain Network Analysis, Criticality Assessment, and Scenario Analysis of future metal supply and demand. Applying scenario assessment in risk measures can be particularly effective for defense and security purposes: many current high-technology products with long service lifetimes are designed around the continued availability of particular metals so as to enable full long-term performance with replacement material upgrades over periods of 10–30 years, and new platform designs are dependent on the continuing availability of particular metals during their manufacture, which for large platforms can take years or decades from design to deployment.
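The metrics described above (product complexity, supply chain length, potential gatekeepers) map naturally onto standard graph measures. The sketch below computes them on a toy directed supply chain with networkx; the nodes and edges are invented for illustration and do not represent the actual platform data.

```python
import networkx as nx

# A toy supply chain for one product platform; nodes and edges are illustrative only.
G = nx.DiGraph()
G.add_edges_from([
    ("mine_A", "refiner_A"), ("mine_B", "refiner_A"),
    ("mine_C", "refiner_B"), ("refiner_B", "alloy_maker"),
    ("refiner_A", "alloy_maker"), ("alloy_maker", "magnet_maker"),
    ("magnet_maker", "hd_magnet_platform"),
])
platform = "hd_magnet_platform"

# Product complexity: how many distinct upstream actors ultimately feed the platform.
upstream = nx.ancestors(G, platform)
print("upstream actors:", len(upstream))

# Supply chain length: average and maximum distance from upstream actors to the platform.
dists = [nx.shortest_path_length(G, source=n, target=platform) for n in upstream]
print("avg / max distance:", round(sum(dists) / len(dists), 2), max(dists))

# Potential bottlenecks ('gatekeepers'): actors with high betweenness centrality.
bc = nx.betweenness_centrality(G)
print("top gatekeeper:", max(bc, key=bc.get))
```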
Modern technology makes use of a variety of materials to allow for its proper functioning. To explore in detail the relationships connecting materials to the products that require them, we map supply chains for five product platforms (a cadmium telluride solar cell, a germanium solar cell, a turbine blade, a lead acid battery, and a hard drive (HD) magnet) using a data ontology that specifies the supply chain actors (nodes) and linkages (e.g., material exchange and contractual relationships) among them. We then propose a set of network indicators (product complexity, producer diversity, supply chain length, and potential bottlenecks) to assess the situation for each platform in the overall supply chain networks. Among the results of interest are the following: (1) the turbine blade displays a high product complexity, defined by the material linkages to the platform; (2) the germanium solar cell is produced by only a few manufacturers globally and requires more physical transformation steps than do the other product platforms; (3) including production quantity and sourcing countries in the assessment shows that a large portion of the nodes of the supply chain of the hard-drive magnet are located in potentially unreliable countries. We conclude by discussing how the network analysis of supply chains could be combined with criticality and scenario analyses of abiotic raw materials to comprise a comprehensive picture of product platform risk.
DG lattice is related to the size of its constituent cells. In general, specimens with 9 mm cells, which had only 2 repeating cells in each orthogonal direction, exhibited very different crushing behaviour from those with 3 mm cells, which had 6 cells in each direction. Because of their favourable deformation behaviour compared to the other types, DG lattices with 3 mm cells were chosen to examine the effect of a post-manufacture heat treatment. Following heat treatment, the deformation of DG lattices with 3 mm cells changed significantly. None of the heat-treated lattices exhibited low-strain brittle failure. Their stress–strain curves included the long plateaux more closely resembling the ideal cellular solid deformation depicted by Gibson and Ashby. This transformation to a relatively flat stress–strain curve, from one featuring unpredictable weakening due to localised brittle collapse, is a significant outcome of this work. It suggests a route by which the scaling laws and design rules developed by Gibson and Ashby and others, which generally assume an ideal plastic plateau for the purpose of energy absorption, may be made relevant to cellular structures made by SLM. Also evident in Fig. 6 is the effect of the heat treatment on the crushing or collapse strength of the structures, which was reduced by ∼25% compared to the as-built lattices; the effect is around twice as large as the reduction in UTS of the solid tensile specimens described in Section 2.4. However, the heat-treated stress–strain curve of Fig. 6 also shows some non-ideal behaviour: decreasing then increasing strength above strain levels of around 20%. These features were likely caused by competing mechanisms. First, constrained deformation of the lattices due to friction at the anvil surfaces led to localised buckling, and therefore weakening of the structures. This is evident in the central bulging, or barrelling, of the structures, as seen in Fig. 7. Second, there is the onset of local densification above ∼30% strain, supporting evidence for which can be seen in Fig. 7, where it is clear that cells toward the bottom of the structure have undergone at least partial collapse prior to those above. Our investigation has revealed that cell size plays an important role in determining the failure mechanism of metal AM lattices. To avoid low-strain structural failure due to localised fracture and crack propagation, one should choose a small cell size. In practice, however, this may pose a problem, because the smallest features of the cells, be they struts or continuous walls, as examined here, clearly must not approach the manufacturing resolution of the AM platform in question, otherwise they will be reproduced inaccurately. This design-for-AM problem is undoubtedly worthy of further investigation. We also demonstrated that the deformation process of SLM aluminium lattices can be improved by a readily applied post-manufacture heat treatment. The heat treatment used here prevented the formation of a diagonal shear band in DG lattice structures, giving rise to a plateau in the stress–strain curve usually associated with ideal deformation in cellular solids. The heat-treated lattices absorbed the same amount of energy under compressive deformation as the as-built lattices, but they did so while experiencing a significantly lower peak stress. The benefit of a relatively flat stress plateau can be most easily appreciated by considering the example of PPE such as body armour, where the energy of an incident blast wave or projectile must be absorbed uniformly and predictably to reduce the risk of harm to the user. Therefore, we recommend that for applications involving the absorption of deformation energy below a predefined stress threshold, aluminium SLM lattices should be heat treated as a matter of course. Finally, we found that the specific energy absorbed by heat-treated DG lattices up to 50% compressive strain was nearly three times that absorbed by comparable BCC lattices in a previous study. This makes a strong case for the use of DG lattices, and perhaps other TPMS lattice types too, in lightweight, energy-absorbing applications.
Lattice structures are excellent candidates for lightweight, energy absorbing applications such as personal protective equipment.In this paper we explore several important aspects of lattice design and production by metal additive manufacturing, including the choice of cell size and the application of a post-manufacture heat treatment.Key results include the characterisation of several failure modes in double gyroid lattices made of Al-Si10-Mg, the elimination of brittle fracture and low-strain failure by the application of a heat treatment, and the calculation of specific energy absorption under compressive deformation (16 × 10⁶ J m⁻³ up to 50% strain).These results demonstrate the suitability of double gyroid lattices for energy absorbing applications, and will enable the design and manufacture of more efficient lightweight parts in the future.
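The specific (volumetric) energy absorption quoted above is simply the area under the compressive stress–strain curve up to the chosen strain. A minimal sketch of that calculation on a synthetic plateau-type curve (not the measured data) is:

```python
# Sketch: volumetric energy absorption = area under the stress-strain curve up to 50% strain.
# The curve below is synthetic (hypothetical elastic ramp, plateau, densification).
import numpy as np

strain = np.linspace(0.0, 0.6, 601)                          # engineering strain
stress = np.where(strain < 0.05, strain / 0.05 * 20e6,       # elastic ramp to ~20 MPa
                  np.where(strain < 0.45, 20e6,              # plateau
                           20e6 * (1 + 10 * (strain - 0.45))))  # densification

mask = strain <= 0.50
w_abs = np.trapz(stress[mask], strain[mask])                 # J per m^3 of structure volume
print(f"energy absorbed up to 50% strain: {w_abs:.2e} J/m^3")
```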
success of the supply chain and thus become further empowered as agents in the creation of successful supply chains.The remaining subjective element on which the buyer must pass judgement is the importance of each stakeholder group.The accuracy of stakeholder importance weightings can impact on the overall success of the final decision.Therefore this remains an area for potential improvement in the presented model.There are various weighting and scoring methods available to assist with this stage of the AHP–QFD that could also be integrated into the decision framework.The proposed model has been applied to a problem in the bioenergy industry where it is manifested as a blend or mixing problem.Similar problems exist in the food ingredients supply chain, the agricultural supply chain for grain and feed mixing, and the metal smelting and fuel blending industries.By following similar steps of capturing available information, understanding the stakeholder groups and their requirements, and defining the decision constraints and criteria, each of these industries could make use of the presented general model.By ensuring that the conventional quality criteria can be incorporated into the decision, the presented model does not introduce a compromise between the quality of materials purchased and stakeholder satisfaction.However, a compromise is naturally introduced between optimum price and optimum stakeholder satisfaction.In its presented form the model is only applicable where decisions are made regardless of price.The model could be extended by including some consideration of price; the authors' suggested approach for the presented case would be to create a layer of goal programming code that sets an acceptable total supply price, where the price represents the limit at which the project can remain economically sustainable.It is not recommended to take a similar approach for stakeholder satisfaction: the aim of using this method is to better meet the needs of the supply chain stakeholders, and imposing artificial targets on performance could lead to counter-intuitive solutions from the perspective of a particular stakeholder.The model also does not consider minimum order quantities, batch sizes and inventory management that may be important for the successful allocation of orders at the operational decision level.Methods demonstrating how these constraints can be included can be found in the literature.Further improvement could be made through introducing more sophisticated data management techniques for handling the performance of suppliers over time, and especially for making accurate estimates of the probability distribution function that should be expected from a supplied material for each quality criterion.This could complement the knowledge base part of the biomass-specific model.The presented integrated AHP–QFD chance constrained optimization method has been shown to allow order allocation to be optimized against stakeholder requirements with uncertain supply characteristics and with non-crisp constraints.It can be used to address multi-stakeholder, multi-supplier, multi-criteria stochastic problems.This is the contribution of this work.The presented model has been shown to assist with supplier selection in problems such as that faced when procuring biomass for energy projects.This contributes to the range of support tools available in the literature to the industry.
Integrated supplier selection and order allocation is an important decision for both designing and operating supply chains.This decision is often influenced by the concerned stakeholders, suppliers, plant operators and customers in different tiers.As firms continue to seek competitive advantage through supply chain design and operations they aim to create optimized supply chains.This calls for, on the one hand, consideration of multiple conflicting criteria and, on the other hand, consideration of uncertainties of demand and supply.Although there are studies on supplier selection using advanced mathematical models to cover a stochastic approach, multiple criteria decision making techniques and multiple stakeholder requirements separately, to the authors' knowledge there is no work that integrates these three aspects in a common framework.This paper proposes an integrated method for dealing with such problems using a combined Analytic Hierarchy Process–Quality Function Deployment (AHP–QFD) and chance constrained optimization algorithm approach that selects appropriate suppliers and allocates orders optimally between them.The effectiveness of the proposed decision support system has been demonstrated through application and validation in the bioenergy industry.
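As a rough illustration of the chance-constrained allocation idea (a sketch only, not the authors' full AHP–QFD formulation): if supplier quality is assumed normally distributed and independent, the chance constraint on blended quality can be replaced by its deterministic equivalent and the AHP-weighted satisfaction score maximised. All supplier data, weights and the confidence level below are hypothetical.

```python
# Sketch: allocate order fractions x_i across suppliers to maximise an AHP/QFD-weighted
# stakeholder-satisfaction score subject to a chance constraint on blended quality.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

score = np.array([0.72, 0.55, 0.81])      # AHP-weighted QFD satisfaction score per supplier
mu = np.array([17.0, 18.5, 16.0])         # mean quality (e.g. MJ/kg calorific value)
sigma = np.array([1.0, 0.6, 1.4])         # std dev of that quality, assumed independent
q_min, alpha = 16.5, 0.95                 # required blended quality, confidence level
z = norm.ppf(alpha)

def neg_satisfaction(x):
    return -float(score @ x)

def chance_constraint(x):
    # P(sum x_i q_i >= q_min) >= alpha  <=>  x.mu - z*sqrt(sum x_i^2 sigma_i^2) >= q_min
    return float(x @ mu - z * np.sqrt(x**2 @ sigma**2) - q_min)

cons = [{"type": "eq", "fun": lambda x: x.sum() - 1.0},   # order fractions sum to one
        {"type": "ineq", "fun": chance_constraint}]
res = minimize(neg_satisfaction, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
               constraints=cons, method="SLSQP")
print(res.x, -res.fun)
```

A price ceiling of the kind suggested above could be added in the same way, as one more inequality constraint on the allocation vector.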
Abstract reasoning, particularly in the visual domain, is a complex human ability, but it remains a challenging problem for artificial neural learning systems.In this work we propose MXGNet, a multilayer graph neural network for multi-panel diagrammatic reasoning tasks.MXGNet combines three powerful concepts, namely, object-level representation, graph neural networks and multiplex graphs, for solving visual reasoning tasks.MXGNet first extracts object-level representations for each element in all panels of the diagrams, and then forms a multi-layer multiplex graph capturing multiple relations between objects across different diagram panels.MXGNet summarises the multiple graphs extracted from the diagrams of the task, and uses this summarisation to pick the most probable answer from the given candidates.We have tested MXGNet on two types of diagrammatic reasoning tasks, namely Diagram Syllogisms and Raven Progressive Matrices (RPM).For an Euler Diagram Syllogism task, MXGNet achieves state-of-the-art accuracy of 99.8%. For PGM and RAVEN, two comprehensive datasets for RPM reasoning, MXGNet outperforms the state-of-the-art models by a considerable margin.
MXGNet is a multilayer, multiplex graph based architecture which achieves good performance on various diagrammatic reasoning tasks.
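The published architecture is not reproduced here; the following numpy sketch only illustrates the core multiplex-graph idea, namely that object embeddings are connected by several relation types (one multiplex layer per relation), each with its own message transformation, and that messages from all layers are summed before the node update and graph summarisation. Sizes, relation names and features are illustrative.

```python
# Sketch of one multiplex-graph message-passing step: each relation type ("layer" of the
# multiplex graph) has its own transformation; messages over all layers are summed.
import numpy as np

rng = np.random.default_rng(0)
n_obj, d = 6, 16                                   # objects pooled from several panels
h = rng.normal(size=(n_obj, d))                    # object-level embeddings

# One binary adjacency matrix per relation type (e.g. "same panel", "same row of the RPM").
adj = {"spatial": rng.integers(0, 2, size=(n_obj, n_obj)),
       "cross_panel": rng.integers(0, 2, size=(n_obj, n_obj))}
W_rel = {r: rng.normal(scale=0.1, size=(d, d)) for r in adj}   # per-relation weights
W_self = rng.normal(scale=0.1, size=(d, d))

def relu(x):
    return np.maximum(x, 0.0)

# Aggregate messages layer by layer, then update nodes.
msg = sum(adj[r] @ (h @ W_rel[r]) for r in adj)    # sum over multiplex layers
h_new = relu(h @ W_self + msg)                     # updated object embeddings

graph_summary = h_new.mean(axis=0)                 # summarised graph vector fed to a scorer
print(graph_summary.shape)
```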
This proteomics dataset comprises LC-MS/MS raw files obtained from bottom-up MS analysis of histone H3 and H4 isolated using different procedures from mouse and human tissues, which were either stored as frozen samples or formalin-fixed and paraffin-embedded.The dataset also includes the output files of the database search for common histone post-translational modifications (hPTMs) and formalin-induced modifications.The data correspond to three main experiments: analysis of FFPE and corresponding frozen tissue from mouse spleen or liver, stored for a few weeks up to 6 years, analysis of FFPE and corresponding frozen tissue from breast cancer samples derived from 3 patients, and analysis of 20 FFPE breast cancer tissues belonging to different subtypes.The analysis of breast cancer samples was carried out using a super-SILAC mix of heavy-labeled histones from 4 breast cancer cell lines as internal standard.Spleen tissue was obtained from leukemic mice as previously described and divided into two portions.One portion was washed and homogenized in ice-cold phosphate buffered saline using a Dounce homogenizer to obtain spleen cells that were counted, pelleted by centrifugation, frozen and stored at −80 °C until use.The other half of the spleen was rapidly washed in PBS and incubated for 16 h at room temperature in 4% paraformaldehyde.The fixed spleen was dehydrated with increasing concentrations of ethanol and then embedded in paraffin using a tissue processor.Frozen cells and FFPE samples were prepared similarly from mouse liver.Experimental procedures involving animals complied with the National Institutes of Health guide for the care and use of laboratory animals, and were approved by the Institutional Ethical Committee.Breast cancer specimens were obtained from 23 patients with invasive ductal carcinoma not otherwise specified, who were subjected to mastectomy or breast conserving surgery.The patients provided informed consent and this study was approved by the Ethical Committee of the European Institute of Oncology.Tumor samples were collected and snap frozen or fixed in 4% formalin and embedded in paraffin.The immunoprofile of the samples was determined by immunohistochemistry, using the anti-ER/PgR, the anti-HER2 and the anti-Ki-67 (MIB-1) antibodies.Breast cancer subtypes were defined according to the 2013 St.
Gallen consensus conference recommendations: Luminal A-like: ER+ and/or PgR+, HER2−, Ki67 <20%; Luminal B-like: ER+ and/or PgR+, HER2−, Ki67 ≥20%; Triple Negative: ER−, PgR− and HER2−, irrespective of Ki67 score; HER2-positive: HER2+, irrespective of ER, PgR or Ki67.ER/PgR positivity was defined as ≥1% of immunoreactive neoplastic cells; HER2 positivity was defined as >10% of neoplastic cells with strong and continuous staining of the cell membrane and/or amplified by in situ hybridization techniques.Cells were obtained from fresh mouse spleen and liver as described above and histones were isolated as previously described.Briefly, 30×10⁶ cells were resuspended in lysis buffer and nuclei were isolated by centrifugation on a sucrose cushion.Histones were extracted by overnight incubation in 0.4 N HCl at 4 °C and dialyzed against 100 mM CH3COOH, using dialysis membranes with a 6–8 kDa cutoff.The dialyzed samples were lyophilized and stored at −80 °C.20–70 mg of frozen breast cancer tissue were thawed on ice, cut in small pieces with scissors and then homogenized with a Dounce homogenizer in PBS containing 0.1% Triton X-100 and protease inhibitors.Tissue debris was removed by filtration through a 100 μm cell strainer and nuclei were isolated through a 10 min centrifugation at 2300×g. Nuclei were resuspended in 100–200 μl of the same buffer including 0.1% SDS and incubated with 250 U of benzonase for a few minutes at 37 °C to digest nucleic acids.Histone concentration and purity were assessed using the Bradford protein assay kit and SDS-PAGE.Four 10-μm tissue sections were deparaffinized by repeating five times the addition of 1 ml of hystolemon, vortexing for 30 s and centrifuging at 18,000×g for 3 min.The samples were then rehydrated in decreasing concentrations of ethanol for 3 min at room temperature, followed by a 3−5 min centrifugation at 18,000×g.
Rehydrated FFPE sections were permeabilized in 0.5 ml of Tris-buffered saline containing protease inhibitors and 0.5% Tween20 for 20 min at room temperature on a rotating platform, followed by a 5 min centrifugation at 18,000×g.The samples were resuspended in 200 μL of 20 mM Tris pH 7.4 containing 2% SDS and sonicated in a Branson Digital Sonifier 250 with a 3 mm microtip until all tissue pieces were homogenized.Proteins were extracted and de-crosslinked with a 45 min incubation at 95 °C, followed by a 4 h incubation at 65 °C.After estimating the protein concentration with the Biorad DC protein assay kit, 16–20 μg of proteins were run on a 17% SDS-PAGE gel together with known amounts of recombinant histone H3.1, which was used as a standard to estimate histone concentration.MDA-MB-231, MDA-MB-468, MDA-MB-453 and MDA-MB-361 were grown in SILAC-DMEM supplemented with 2 mM L-glutamine, 146 mg/l of lysine, 84 mg/l L-¹³C₆¹⁵N₄-arginine, 10% dialyzed serum and penicillin/streptomycin for approximately 8 doublings to obtain complete labeling.Histones were isolated using the same procedure described for cells obtained from fresh-frozen mouse tissues.Equal amounts of histones from each cell line were mixed, aliquoted, lyophilized and stored at −80 °C until use.Approximately 4–5 μg of histones per run per sample were separated on a 17% SDS-PAGE gel and bands corresponding to histones H3 and H4 were excised and in-gel digested as previously described.Briefly, gel bands were cut in pieces, destained and in-gel chemically alkylated with D6-acetic anhydride 1:9 in 1 M NH4HCO3, using CH3COONa as catalyzer.After a 3 h incubation at 37 °C, the gel slices were washed with NH4HCO3, followed by ACN at increasing percentages.The in-gel digestion was performed overnight with 100 ng/μL trypsin in
The data described here provide a mass spectrometry-based quantitative analysis of hPTMs from formalin-fixed paraffin-embedded (FFPE) tissues, from which histones were extracted through the recently developed PAT-H-MS method.First, we analyzed FFPE samples from mouse spleen and liver or human breast cancer, stored for up to six years, together with their corresponding fresh frozen tissues.
50 mM NH4HCO3 at 37 °C, in order to obtain an “Arg-C like” in-gel digestion.Digested peptides were extracted using 5% formic acid alternated with 100% ACN.In SILAC experimental set-ups, unlabeled and heavy-labeled histones were mixed in equal amounts prior to gel separation, and then processed as described above.Digested peptides were desalted and concentrated using a combination of reversed-phase C18/C and strong cation exchange (SCX) chromatography on handmade nanocolumns.Digested peptides were then eluted with 80% ACN/0.5% acetic acid and 5% NH4OH/30% methanol from the C18/C and SCX StageTips, respectively.Eluted peptides were lyophilized, resuspended in 1% TFA, pooled and subjected to LC-MS/MS analysis.Peptide mixtures were separated by reversed-phase chromatography on an in-house-made 25 cm column, using an ultra-nanoflow high-performance liquid chromatography system connected online to a Q Exactive instrument through a nanoelectrospray ion source.Solvent A was 0.1% formic acid in ddH2O and solvent B was 80% ACN plus 0.1% FA.Peptides were injected in an aqueous 1% TFA solution at a flow rate of 500 nl/min and were separated with a 100 min linear gradient of 0–40% solvent B, followed by a 5 min gradient of 40–60% and a 5 min gradient of 60–95% at a flow rate of 250 nl/min.The Q Exactive instrument was operated in the data-dependent acquisition mode to automatically switch between full scan MS and MS/MS acquisition.Survey full scan MS spectra were analyzed in the Orbitrap detector with a resolution of 35,000 at m/z 400.The five most intense peptide ions with charge states ≥2 were sequentially isolated to an MS1 target value of 3×10⁶ and fragmented by HCD with a normalized collision energy setting of 25%.The maximum allowed ion accumulation times were 20 ms for full scans and 50 ms for MS/MS, and the target value for MS/MS was set to 1×10⁶.The dynamic exclusion time was set to 20 s and the standard mass spectrometric conditions for all experiments were as follows: spray voltage of 2.4 kV, no sheath and auxiliary gas flow.Acquired RAW data were analyzed by the MaxQuant software v.1.3.0.5 with the Andromeda search engine.The Uniprot MOUSE 1301 and HUMAN 1301 databases were used for peptide identification.Enzyme specificity was set to Arg-C.The estimated false discovery rate of all peptide identifications was set at a maximum of 1%.The mass tolerance was set to 6 ppm for precursor and fragment ions.Three missed cleavages were allowed, and the minimum peptide length was set to 6 amino acids.Variable modifications included lysine D3-acetylation, lysine monomethylation, dimethylation, trimethylation, and lysine acetylation.The raw data were analyzed through multiple parallel MaxQuant jobs, setting different combinations of variable modifications: (1) D3-acetylation, lysine monomethylation with D3-acetylation, dimethylation and lysine acetylation; (2) D3-acetylation, lysine monomethylation with D3-acetylation, dimethylation and trimethylation; (3) D3-acetylation, lysine monomethylation with D3-acetylation, trimethylation and lysine acetylation; (4) D3-acetylation, formylation, methylene adducts and methylol adducts.For SILAC experiments, Arg10 was selected as the heavy label.Raw data corresponding to each experiment are reported in Table S1.
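For quick reference, the database-search settings reported above can be collected into a single structured record; the sketch below is a plain summary of those stated parameters (it is not MaxQuant's own parameter-file format, and the grouping of the variable-modification runs follows the reconstruction given in the text).

```python
# Structured summary of the MaxQuant/Andromeda search settings reported above.
# A plain Python record for reference only, not MaxQuant's parameter-file format.
maxquant_settings = {
    "software": "MaxQuant 1.3.0.5 (Andromeda)",
    "databases": ["Uniprot MOUSE 1301", "Uniprot HUMAN 1301"],
    "enzyme": "Arg-C",
    "peptide_fdr": 0.01,
    "mass_tolerance_ppm": {"precursor": 6, "fragment": 6},
    "max_missed_cleavages": 3,
    "min_peptide_length": 6,
    "silac_heavy_label": "Arg10",
    # Parallel searches, each with a different combination of variable modifications:
    "variable_modification_runs": [
        ["K D3-acetylation", "K monomethylation (D3-acetylated)", "dimethylation", "K acetylation"],
        ["K D3-acetylation", "K monomethylation (D3-acetylated)", "dimethylation", "trimethylation"],
        ["K D3-acetylation", "K monomethylation (D3-acetylated)", "trimethylation", "K acetylation"],
        ["K D3-acetylation", "formylation", "methylene adducts", "methylol adducts"],
    ],
}
```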
Aberrant histone post-translational modifications (hPTMs) have been implicated in various pathologies, including cancer, and may represent useful epigenetic biomarkers.We then combined the PAT-H-MS approach with a histone-focused version of the super-SILAC strategy (using a mix of histones from four breast cancer cell lines as a spike-in standard) to accurately quantify hPTMs from breast cancer specimens belonging to different subtypes.The data, which are associated with a recent publication (Pathology tissue-quantitative mass spectrometry analysis to profile histone post-translational modification patterns in patient samples (Noberini, 2015) [1]), are deposited at the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD002669.
The UK National Ecosystem Assessment defined cultural ecosystem services (CES) as environmental settings or spaces that enhance human wellbeing through activities, capacities, identities and experiences.One of the key aspirations of the ecosystem services research community is to improve environmental decision making by providing information on the benefits of nature conservation.CES are often omitted from cost–benefit analysis and impact assessments because data on CES benefits are unavailable, and there are considerable methodological challenges to measuring them.Omitting CES from impact assessments underestimates the social and economic value of nature to people.In this paper, we present evidence that makes a strong appeal to include CES despite these measurement challenges.We show how conservation features important for a national network of marine protected areas (MPAs) can be translated into CES benefits and be valued using stated preference surveys, thus better accounting for CES in decision-making.This interdisciplinary research project, which was part of the second phase of the UK NEA, had three objectives: to add to the evidence base on marine CES values, to improve understanding about marine use and non-use values, and to provide evidence that can be used in MPA decision-making in the UK.To achieve these objectives, we developed a stated preference valuation method that linked a travel-cost choice experiment (CE) with an attribute-based contingent valuation method (CVM).The CE elicited direct and indirect use values for recreational visits to marine sites.The CVM elicited non-use and option values for protecting marine sites.Attribute-based CVM has been applied in only a few studies and the combination with a travel-cost CE is a novel approach to valuing ES.This paper is also the first to base the monetary valuation of CES on the place-based CES framework developed by the UK NEA.In this paper we report monetary values for divers’ and anglers’ marine site preferences based on CES.The total value of recreation in and designation of proposed UK MPAs is reported elsewhere.The marine environment provides many ES including fish, climate regulation, water circulation, habitats, nutrient cycling, resilience and resistance, waste absorption, detoxification of pollutants, primary production, medicinal and biotechnological products, storm protection, a wide variety of marine spaces for recreational activities such as angling, diving and snorkelling, and generates substantial cultural benefits.Currently, the long-term provision of marine ES is threatened by human activities including industrial fishing, raw material extraction, oil and gas exploration, shipping and terrestrial source pollution.Most marine activities are concentrated around coastlines because of the ease of coastal access and the limitations of accessing deeper parts of the ocean further offshore.The environmental impacts of these activities in shallow water make them a marine conservation focal point.Three important questions for decision makers are: To what extent are marine ES being affected?What are the benefits of protecting marine areas?Could these benefits outweigh the opportunity costs of marine conservation on the marine economy?The Convention on Biological Diversity (CBD) signatories agreed to protect at least 10% of marine habitats by 2020.In 2010, only 1.6% of the oceans were protected.Currently, the UK and Scottish, Welsh and Northern Irish devolved governments are designating conservation areas to protect marine biodiversity in
response to both CBD targets and the EU Marine Strategy Framework Directive 2020.The UK Marine & Coastal Access Act and the Marine Scotland Act empower governmental bodies to designate an ecologically coherent network of MPAs in UK waters, with the aim of progressing towards “clean, healthy, safe, productive and biologically diverse oceans and seas”.The MPA network comprises different types of MPAs including Ramsar sites, sites of special scientific interest, special areas of conservation, special protection areas and two new main types of MPA: Marine Conservation Zones (MCZs) and Scottish MPAs.Biological and geological conservation targets and social and economic factors are taken into account when considering potential MCZ and Scottish MPA sites.In England, stakeholders have recommended 127 MCZs, 27 of which were designated in November 2013, with some further sites likely to be designated in 2015.In Scotland, 33 MPAs were proposed for designation.Wales and Northern Ireland have yet to decide how they will contribute to the UK MPA network.In 2012, there was a public outcry over the Welsh government’s proposal to establish highly protected marine conservation zones.The Welsh government withdrew its plans as a result of the consultation responses, which were “expressing highly divergent and strongly held views”.One of the main reasons for the public upset was the exclusion of all extractive, damaging, and disturbing activities in these areas without consideration of the socio-economic implications for local communities and businesses.The experience clearly illustrates the importance of socio-economic evidence, including CES values, for decision making.While cost data on marine management are relatively easy to obtain, data on the non-market benefits of marine conservation in the UK are scarce.A recent report by Fletcher et al. specifically identified the ES provided by the UK marine habitats and species of conservation importance and highlighted the lack of information on CES values associated with these marine features.There are many potential marine CES benefits to the general public and specific communities associated with history, heritage and identity in relation to the sea.This paper focuses on the use and non-use benefits to two key recreational user groups of potential future MPAs.Most economic valuations of marine CES have been based on market-related values of leisure and recreation.For example, leisure and tourism revenues including users’ expenditures on access fees, equipment, fuel, accommodation costs, etc.For the UK marine environment, these values amounted to £11.77 billion per annum in 2002.Using market-related values mixes ES values with infrastructure and
UK Governments are currently establishing a network of marine protected areas (MPAs) informed by ecological data and socio-economic evidence.Evidence on CES values is needed, but only limited data have been available.The case study is an innovative combination of a travel-cost based choice experiment and an attribute-based contingent valuation method.Following the UK NEA's place-based CES framework, we characterised marine CES as environmental spaces that might be protected, with features including the underwater seascape, and iconic and non-iconic species.
to offshore sites.Site access by boat is considerably more expensive and increases further when boats are chartered.The opportunity cost associated with travel time was also not included.Therefore, the outcomes of the model are likely to be conservative estimates.Comparing our results with other angling or diving studies is not straightforward, as most studies have focused on tourists rather than those recreating in their own country, and there are few stated preference studies for temperate waters.Illustrative case studies for international divers elicited willingness to pay (WTP) between $7 and $63 per diver per year for dive access to MPAs.A Scandinavian survey elicited non-use values ranging from $56 to $140 per angler per year.Compared to terrestrial environments, marine environments are less well understood, both by scientists and the public.Stated preference methods require that participants understand the good described.Participants’ unfamiliarity with attributes can hinder the use of stated preference methods, especially when participants are uncertain about their preferences for the good being valued.Experience with the non-marketed goods under valuation decreases preference uncertainty.A general lack of experience with the marine environment makes it more challenging for survey participants to value certain benefits of marine protection.Study participants were experienced divers or anglers and therefore familiar with marine features.One advantage of our experienced sample was that the number of attributes in the tasks and the detail in their descriptions could be higher than would be possible with an unfamiliar sample, or for a marine environment people are unfamiliar with, such as the deep sea.Potentially, divers were more familiar than anglers with underwater habitats and other non-fish-related survey attributes; however, in focus groups anglers clearly understood the services and benefits provided by particular marine habitats, such as fish nursery sites and food sources.In our CE, anglers valued tide-swept channels – a habitat with a regular supply of food that supports a rich community of marine life including fish.Conducting a similarly detailed survey with the general public is likely to be infeasible because of the cognitive challenge posed to the average respondent by processing the detailed information that our ‘expert’ users understood as a result of their experience with the marine environment.The effect of experience on the WTP of anglers and divers has been shown in the past.Moeller and Engelken found that anglers who put higher importance on the size of fish were, on average, younger, less experienced and less willing to pay for fishing than their counterparts.In a Scottish case study, respondents’ dive experience and exposure to the marine environment increased their WTP for MPAs in deep waters and the non-use benefits of marine conservation.In this study, we provided background information on current uncertainties about management restrictions and about the magnitude of protection.There is still substantial uncertainty around the designation outcomes for the UK MPA network in terms of the number of protected sites, when protection will start, how protected sites will be managed, how restrictions on marine activities will affect recreational users and whether marine users will comply with the restrictions.There is also uncertainty about the scientific evidence base, including the ecological benefits of the MPA network.Thus, survey participants faced substantial uncertainty about the CES delivered
in the future by MPAs.In the CE and CVM studies, elicited WTP is likely to be affected by how participants process and interpret current uncertainties and the risk of irreversible degradation if there is no protection.Possible interpretations of the uncertainty and risk of site degradation might have been: (1) there is a high risk of degrading sites by not insuring sites against future harm; (2) sites will probably be safe without protection; (3) protection may not be effective and there is a risk of paying whilst harm still befalls the sites.Such perceptions will have almost certainly influenced how participants evaluated the hypothetical scenarios, just as would have been the case if participants were asked to pay for conservation measures for a non-hypothetical site.As a consequence of such interpretations, stated WTP in the CVM may again have understated the value of the CES of these marine sites.Including CES values in marine ES assessments is challenging, but not impossible, and is substantially facilitated through the application of a place-based approach and innovative combinations of multiple valuation tools.Clearly, the marine environment delivers substantial CES use and non-use values, and it is crucial that decision-making takes account of this.This study provided new evidence for impact assessment reviews by UK Governments.Providing evidence for decision-making is often an aspiration in ES research, but studies are seldom directly useful to decision making.The marginal values that we estimated for hypothetical dive and angling sites under different management scenarios take ex ante management uncertainties into account, allowing policy makers to adapt the survey results to their needs.CES values of marine sites, in combination with ecological conservation evidence, are likely to provide a stronger argument for protection than conserving biodiversity alone.Further research will be necessary to include other recreational beneficiaries of MPAs, such as surfers and yachters, and other threats to marine habitats and species, to better understand values and trade-offs with commercial fishing and other sectors of the ‘blue economy’.
Recreational users appreciate the UK marine environment for its cultural ecosystem services (CES) and their use and non-use values.An understanding of key stakeholders' CES values can inform a more holistic and sustainable approach to marine management, especially for decisions involving trade-offs between marine protection and opportunity costs of the blue economy.
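In a travel-cost choice experiment of this kind, the marginal willingness to pay for a site attribute is conventionally the negative ratio of the attribute coefficient to the travel-cost coefficient from a conditional logit model. The sketch below shows only that ratio calculation with hypothetical coefficients; it does not use the study's estimates.

```python
# Sketch: marginal willingness to pay (WTP) from conditional-logit coefficients.
# WTP_attribute = -beta_attribute / beta_cost. All coefficient values are hypothetical.
beta_cost = -0.015                 # utility per GBP of travel cost (negative by construction)
beta = {"iconic_species_protected": 0.45,
        "tide_swept_channel": 0.30,
        "no_take_zone": -0.10}

wtp = {attr: -b / beta_cost for attr, b in beta.items()}   # GBP per trip per attribute level
for attr, value in wtp.items():
    print(f"{attr}: £{value:.2f} per trip")
```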
Model-free deep reinforcement learning algorithms have been demonstrated on a range of challenging decision making and control tasks.However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning.Both of these challenges severely limit the applicability of such methods to complex, real-world domains.In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework.In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible.Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods.By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods.Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.
We propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework.
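A minimal numerical sketch of the two quantities at the heart of the method, the entropy-augmented critic target and the actor objective, with toy numpy arrays standing in for the networks and replay buffer (an illustration of the stated objective, not the authors' implementation):

```python
# Sketch of the soft actor-critic targets with toy arrays standing in for networks.
# y = r + gamma * (min_i Q_i(s', a') - alpha * log pi(a'|s'))   (critic regression target)
# actor loss = E[ alpha * log pi(a|s) - min_i Q_i(s, a) ]        (maximise the soft value)
import numpy as np

rng = np.random.default_rng(0)
batch = 4
gamma, alpha = 0.99, 0.2                       # discount and entropy temperature

r = rng.normal(size=batch)                     # rewards sampled from the replay buffer
done = np.array([0, 0, 1, 0])                  # episode-termination flags
q1_next, q2_next = rng.normal(size=(2, batch)) # two target critics at (s', a' ~ pi)
logp_next = rng.normal(size=batch) - 1.0       # log pi(a'|s') of the sampled next actions

# Entropy-augmented Bellman target used to regress both critics.
y = r + gamma * (1 - done) * (np.minimum(q1_next, q2_next) - alpha * logp_next)

# Actor objective on fresh actions a ~ pi(.|s): maximise Q - alpha*log pi (minimise negative).
q1_pi, q2_pi = rng.normal(size=(2, batch))
logp_pi = rng.normal(size=batch) - 1.0
actor_loss = np.mean(alpha * logp_pi - np.minimum(q1_pi, q2_pi))
print(y, actor_loss)
```

Using the minimum of two critics and including the entropy term in both targets is what gives the method its combination of off-policy efficiency and stability.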
analyses.Furthermore, the data were collected using a questionnaire, which is subject to bias.Finally, as this is a cross-sectional study, no statements can be made on causality in the observed relationships.Although research on specific measures to reduce this risk of lower compliance with the MVPA guideline and/or increased SB in business car owners is lacking, results from research on transport policies, on practices to increase PA and on perceived barriers and facilitators for cycling or walking could be informative for developing effective interventions for this risk group.First of all, the possible health risk of reduced compliance with the MVPA guideline and an increase of SB in business car owners could be addressed by informing employers as well as employees.As a result, employers might consider this risk when deciding on travel arrangements for their employees, e.g. by also offering them a free public transport debit card for business trips that can partly or completely be made by train.Besides, increased awareness of the health risk among employees is very important if they are to change their attitude concerning the use of a business car.Furthermore, business car owners might be advised to engage more frequently in sports and fitness activities, also on workdays, to compensate for their lower PA resulting from active transport and work, especially since a recent study showed that high levels of moderate intensity PA can possibly, at least to some extent, attenuate the negative health effects of prolonged sitting.As more than 80% of the business car owners reported having a job where they mainly sit during their working hours, interventions in the organization and culture at the workplace could also be considered, e.g. less use of e-mail to colleagues nearby, alternating between sitting tasks and tasks that require walking or standing for some time, as well as the use of furniture that allows standing for a period to interrupt sitting.Policy makers in transport and in fiscal arrangements involving transport should be informed about the possible enhanced health risk of business car owners as a result of lower compliance with the MVPA guidelines.Finally, car lease companies should also be informed, as they could consider a more differentiated offer which makes it easier for the client to choose, when possible, other modes of transport than solely the car.The results of this Dutch study might be applicable to other countries, regions or cities, as fiscal incentives for business car use are prevalent in other countries as well.They might also be relevant for countries where the prevalence of business cars is low, but where other financial incentives for car ownership and use are present.The findings might also be relevant in regions or cities with a low bike mode share in daily transport, provided that there is a realistic opportunity for cycling.Although the distances and the infrastructure for cycling and walking may not be comparable with the situation in the Netherlands, cities in many countries nowadays develop policies to promote cycling.In regions which are less suited for cycling, a shift from transport by cars to public transport can also be favorable for daily PA, as shown in other research.Before advising comprehensive measures to decrease the availability and use of business cars, more research is needed to confirm the current results, preferably with a longitudinal design and using an objective instrument to measure PA and SB and to assess whether business car ownership is
associated with reduced active transport.In addition, besides the known determinants of the choice for a mode of transport in commuting and other trips, insight into the determinants of PA and SB of business car owners is needed.Qualitative research among business car owners could possibly identify other, more specific determinants.Finally, it would be worthwhile to include questions addressing the following issues both in surveys determining compliance with the MVPA guideline and in transport surveys: the availability of a business car or other company arrangements for car use, the mode of commuting in relation to the availability of private and business cars, and the number of bicycles in the HH.To summarize, the results of this study suggest that business car owners have a significantly higher risk of not complying with the MVPA guideline than owners of a private car, adult co-users of a car and adults with no car in the HH.In addition, they tend to spend more hours sitting during workdays than other adults.This may, based upon evidence from health research on PA and SB, negatively affect the health of business car users.Hence, we should consider informing business car owners, employers and car lease companies about the health risks of reduced compliance with the MVPA guideline associated with business car ownership.
Car ownership is associated with less frequent active transport and less PA. For business car ownership this relation is unknown.From October 2011 to September 2012 questions about use and availability of cars in the household were included in the survey Injuries and Physical Activity in the Netherlands.Multiple linear regression was used to compare six mutually exclusive groups of ownership and availability of (business and/or private) cars in the household.We concluded that owners of a business car in the Netherlands are at higher risk of not complying with the MVPA guideline and tend to spend more hours sitting during workdays than other adults.Further research in this group, e.g. with objective instruments to measure physical activity and sedentary behavior, is recommended.Policy makers on transport and fiscal arrangements, employers, employees, occupational health professionals and car lease companies should be aware of this possible health risk.
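The group comparison described above amounts to regressing the activity or sitting outcome on dummy variables for five of the six car-ownership groups (one group as the reference) plus covariates. A minimal sketch with simulated data, and illustrative variable names and effect sizes, follows:

```python
# Sketch: multiple linear regression of a sitting outcome on dummy-coded car-ownership
# groups (business-car owners as reference) plus an age covariate. Data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 500
groups = ["business_car", "private_car", "co_user", "no_car", "both_cars", "other"]
g = rng.integers(0, len(groups), size=n)             # group membership per respondent
age = rng.normal(45, 12, size=n)
sitting_hours = 9.0 - 0.8 * (g != 0) + 0.02 * (age - 45) + rng.normal(0, 1.5, n)

# Design matrix: intercept, five group dummies (reference = business_car), age.
dummies = np.column_stack([(g == k).astype(float) for k in range(1, len(groups))])
X = np.column_stack([np.ones(n), dummies, age])
beta, *_ = np.linalg.lstsq(X, sitting_hours, rcond=None)

for name, b in zip(["intercept"] + groups[1:] + ["age"], beta):
    print(f"{name}: {b:+.2f}")   # group coefficients = difference vs. business-car owners
```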
Diffuse large B cell lymphoma is the most prevalent, aggressive type of non-Hodgkin’s lymphoma.1–3,While chemo-immunotherapy improved outcome for advanced NHLs, a significant percentage of patients ultimately develops the progressive disease.4,The STAT3 transcription factor is frequently activated in NHL, regulating cell survival and proliferation, and has been associated with poor survival of patients with aggressive lymphoma.5–9,Mutations of upstream STAT3 regulators are common in DLBCL.10,STAT3 activation has been linked to autocrine/paracrine stimulation by interleukin-6 in the tumor microenvironment.11,12,More universally, STAT3 is a central immune checkpoint regulator in cancer cells and tumor-associated immune cells, such as myeloid-derived suppressor cells and tumor-associated macrophages.13–16,While an attractive target for cancer therapy,17–19 direct pharmacological inhibition of STAT3 proved difficult, and despite many attempts, there are still no US Food and Drug Administration-approved small-molecule STAT3 inhibitors.20,21,These challenges underscore the need for alternative strategies, such as oligonucleotide-based STAT3 inhibitors,20,22 and new methods for targeted oligonucleotide delivery.23,To overcome these obstacles, we previously developed a strategy for the delivery of therapeutic oligonucleotides, such as STAT3 small interfering RNA, specifically to Toll-like receptor 9+ immune cells, such as plasmacytoid dendritic cells, B cells,24,25 and cancer cells.19,26–29,TLR9 is also commonly expressed in many hematologic malignancies, including BCL.30,31,Multiple clinical trials in NHL demonstrated safety but only limited efficacy of TLR9 agonists.25,30,These difficulties can be, at least partly, ascribed to more recently described defects in TLR9 signaling in BCLs.Several studies have linked mutations in downstream TLR9 signaling or polymorphism in the TLR9 promoter to the pathogenesis of aggressive NHL10,32 or increased NHL incidence,33 respectively.Based on these observations, we developed dual-function CpG-STAT3 inhibitors to generate growth-inhibitory and immuno-mediated effects against DLBCL.We recently developed a strategy to deliver STAT3 decoy oligodeoxynucleotide inhibitor into human myeloid cells after conjugation to the type-A TLR9 agonist, CpG ODN.34,While CpG-STAT3dODN showed efficacy in targeting a variety of myeloid cell types, it showed moderate internalization by non-malignant B cells and BCLs.To improve the targeting of BCL, we modified the targeting sequence to the well-characterized, B-type CpG7909 that was previously evaluated in clinical trials in NHL patients.35,36,More extensive phosphorothioation of the new CpG-STAT3dODN improved nuclease resistance of this conjugate, which showed an 82-hr half-life in the presence of human serum, compared to the 63-hr half-life previously reported for CpG-STAT3dODN.34,Consistent with our previous study,26 primary human and mouse B cells and myeloid cells quickly and efficiently internalized fluorescently labeled CpG-STAT3dODNCy3, but not STAT3dODNCy3 alone, even at a low 50-nM dose.Furthermore, human activated B cell-like type-DLBCL and mouse A20 lymphoma cells internalized CpG-STAT3dODNCy3 within 1–6 hr of incubation.The uptake of STAT3dODN alone was negligible, with the exception of the OCI-Ly3 cells, which internalized unconjugated decoy DNA, albeit less effectively.Finally, we confirmed cytoplasmic localization of the CpG-STAT3dODN after being internalized by target lymphoma cells, using phase-contrast and 
confocal microscopy.Our results suggested that modified CpG-STAT3dODN can effectively penetrate into immune and lymphoma cells, thereby enabling STAT3 targeting.Binding of the high-affinity decoy molecules to activated STAT3 dimers prevents downstream target gene transactivation.20,We utilized electrophoretic mobility shift assays to assess the effect of CpG-STAT3dODN on STAT3 binding to a STAT3-specific radiolabeled high affinity mutant of the c-Fos sis-inducible element probe.As shown in Figure 2A, CpG-STAT3dODN abrogated almost completely the STAT3 activity in primary mouse splenocytes and also in mouse and human BCL cells, A20 and OCI-Ly3, respectively.In contrast, both control CpG-scrODN and CpG ODN alone increased STAT3 activity, especially in mouse target cells, which is a known effect of TLR9 signaling.The TLR9/nuclear factor κB signaling induces the expression of IL-6 and/or IL-10, which activate STAT3 to restrain immunostimulation as a negative-feedback effect.12,37–39,We further verified that the inhibition of STAT3 activity translates into reduced expression of downstream target proteins, such as BCL-XL and c-MYC, in human and mouse lymphoma cells.40,41,The protein levels of BCL-XL and c-MYC were strongly downregulated by CpG-STAT3dODN but not by the unconjugated STAT3dODN or control CpG-scrODN in A20 and even more pronouncedly in OCI-Ly3 lymphoma.Correspondingly, CpG-STAT3dODN induced dose-dependent cytotoxicity in STAT3-dependent OCI-Ly3 and TMD8 ABC-DLBCL cells in vitro, while it had minimal effect on SU-DHL-6 germinal center-DLBCL or A20 cells.Next, we tested the feasibility of using this strategy for targeting STAT3 in vivo.Mice with established, subcutaneously engrafted A20 lymphoma were treated using repeated daily intratumoral injections of 1 mg/kg CpG-STAT3dODN, control CpG-scrODN, or PBS.Whole tumors were harvested 1 day after the third injection to assess STAT3 activation using EMSAs.While both CpG-STAT3dODN and control CpG-scrODN induced similar NF-κB activity, only the CpG-STAT3dODN reduced STAT3 DNA-binding activity in A20 tumors.The observed STAT3 inhibition correlated with a suppression of target Bcl2l1 and Myc mRNA in A20 cells, as verified by real-time qPCR.We used the human ABC-DLBCL model to verify the therapeutic effect of CpG-STAT3dODN in immunodeficient NSG mice.As performed similarly in A20 lymphoma, three IT injections of CpG-STAT3dODN, but not CpG-scrODN or PBS, inhibited STAT3 DNA binding in human OCI-Ly3 lymphoma, as assessed using the EMSA.These results were further validated using the human gene-specific Nanostring assay on the total RNA isolated from whole OCI-Ly3 tumors treated using—similarly, as before—a short-term treatment with three IT injections of 1 mg/kg CpG-STAT3dODN, CpG-scrODN, or PBS.The heatmap analysis demonstrated clustering of the CpG-STAT3dODN gene expression pattern versus that of both CpG-scrODN and PBS.We also confirmed downregulation of known STAT3 gene targets—e.g., ABCB1, BCL2L1, CD46, or MICB—specifically in the CpG-STAT3dODN group, using Nanostring and qPCR analyses.Changes in the expression of genes regulating proliferation and cell death suggested the onset of growth inhibition and apoptosis.In addition, we observed co-expression of immune mediators, such as interferons
Growing evidence links the aggressiveness of non-Hodgkin's lymphoma, especially the activated B cell-like type diffuse large B cell lymphomas (ABC-DLBCLs), to Toll-like receptor 9 (TLR9)/MyD88 and STAT3 transcription factor signaling.Here, we describe a dual-function molecule consisting of a clinically relevant TLR9 agonist (CpG7909) and a STAT3 inhibitor in the form of a high-affinity decoy oligodeoxynucleotide (dODN).The CpG-STAT3dODN blocked STAT3 DNA binding and activity, thus reducing expression of downstream target genes, such as MYC and BCL2L1, in human and mouse lymphoma cells.Systemic administration of CpG-STAT3dODN inhibited growth of human OCI-Ly3 lymphoma in immunodeficient mice.Zhao et al.
and their targets, as well as proinflammatory cytokines and chemokines and their receptors, by CpG-STAT3dODN-treated lymphoma cells.In immunodeficient NSG mice, these proinflammatory mediators of human origin could have only limited effect on innate, but not on adaptive, antitumor immunity.Nevertheless, these results suggested that the therapeutic effects of TLR9 triggering and STAT3 inhibition against BCL can be two-pronged: directly cytotoxic and immune mediated.To assess the efficacy of systemic CpG-STAT3dODN administration on OCI-Ly3 lymphoma, immunodeficient NSG mice with established, disseminated, luciferase-expressing OCI-Ly3 lymphoma were treated using intravenous injections of 10 mg/kg CpG-STAT3dODN, CpG-scrODN, or only PBS for 10 days.The treatment with CpG-STAT3dODN significantly delayed lymphoma progression in contrast to the CpG-scrODN control.The growth inhibitory effect of CpG-STAT3dODN on OCI-Ly3 lymphoma extended mouse survival versus both control groups.The effect of TLR9 stimulation alone (CpG-scrODN) was relatively weak and failed to significantly affect mouse survival.High nuclease resistance enables the systemic delivery of CpG-STAT3dODN to disseminated BCL cells and non-malignant lymphocytes.We used fluorescently labeled CpG-STAT3dODNCy3 to assess oligonucleotide biodistribution.BALB/c mice with established, disseminated A20 lymphomas received a single intravenous injection of CpG-STAT3dODNCy3.Peripheral blood, lymph nodes, spleen, and bone marrow were collected after 3 hr to determine CpG-STAT3dODN biodistribution using flow cytometry.The internalization by A20 cells in various tissues ranged from 25% to 65%.The uptake by non-malignant B cells was lower at 15%–25% and negligible for T cells in all locations, consistent with the in vitro uptake of CpG-STAT3dODN.Although intravenous injections resulted only in partial penetration of the BCL compartment, based on our previous study,34 such an internalization rate can be sufficient for the induction of systemic antitumor immunity using CpG-STAT3 inhibitors.Thus, we next assessed the antitumor efficacy of CpG-STAT3dODN administered systemically.BALB/c mice with disseminated A20LUC were treated every other day using intravenous injections of 5 mg/kg CpG-STAT3dODN, CpG-scrODN, or CpG alone, or were treated with vehicle only.Mice were treated until day 25 and then monitored without further treatment using bioluminescent imaging and body condition scoring.Within 10 days, CpG-STAT3dODN treatments arrested A20LUC lymphoma progression and then led to complete lymphoma regression in 75% of mice.In contrast, control CpG-scrODN, clinically relevant CpG (CpG7909), or CpG7909 co-injected in equimolar amounts with the unconjugated STAT3dODN had minimal and only transient inhibitory effects on lymphoma progression.Importantly, all CpG-STAT3dODN-treated mice that survived the initial A20 challenge were resistant to rechallenge using the same lymphoma cells, while all naive mice showed A20 engraftment.CpG-STAT3dODN efficacy was also superior when compared to that of atovaquone, a recently described STAT3 inhibitor with activity against human multiple myeloma and acute myeloid leukemia.42,The extended therapeutic effect of the single 2-week treatment cycle using CpG-STAT3dODN and its long-term protective antitumor effect were indicative of immune-mediated antitumor responses.To assess the contribution of directly cytotoxic and immune-dependent antitumor effects, we engrafted A20LUC lymphoma into the immunodeficient NSG mice.Following the
engraftment as verified by bioluminescent imaging, mice were treated for 2 weeks with daily injections of CpG-STAT3dODN, CpG-scrODN, or PBS alone.The CpG-STAT3dODN failed to eliminate tumors in NSG mice and only weakly improved their survival, similar to CpG-scrODN.These results likely indicate the limited effect of TLR9 stimulation on innate immune cells, such as granulocytes/neutrophils still active in NSG mice,19 while underscoring the crucial role of adaptive immune responses in generating durable lymphoma regression.We further verified the contribution of CD8+ and CD4+ T cells to these therapeutic effects, using antibody-mediated depletion.The CD8+ T cell neutralization had the strongest negative impact on the CpG-STAT3dODN effect against A20 lymphoma.In the absence of CD8+ T cells, lymphoma progression was accelerated compared to that of other treatment groups, and mouse survival did not differ from that of the control PBS-treated mice.The CD4-specific cell depletion completely abrogated the late onset of CpG-STAT3dODN-induced antitumor immunity at 2 weeks after treatment initiation.However, it did not prevent the early delay in A20 lymphoma progression, which resulted in minimally extended animal survival.In contrast, over half of the CpG-STAT3dODN-treated reference mice (CpG-STAT3dODN/IgG) survived lymphoma challenge and remained tumor free for >80 days.After confirming the essential role of T cell immunity in mediating CpG-STAT3dODN effects in vivo, we assessed whether the therapeutic efficacy of this strategy can be enhanced by blocking PD1/PD-L1 immune checkpoint regulation, which often results in the reduced activity of exhausted T cells.As shown in Figure 5C, systemic administration of CpG-STAT3dODN or anti-PD1 antibodies as single agents showed comparable effects against A20 lymphoma, with ∼50% mouse survival.Importantly, the combination of both strategies strongly improved therapeutic outcome, with 90% survival of mice.These results suggest that the presence and activity of T cells are critical for the overall antitumor activity of CpG-STAT3dODN against BCL.To gain insights into potential molecular mechanisms underlying the antitumor effect of CpG-STAT3dODN in vivo, we performed Nanostring gene expression analysis on A20 BCLs.Mice with s.c.-engrafted A20 tumors were treated using three IT injections of 1 mg/kg CpG-STAT3dODN, CpG-scrODN, or PBS every other day.We then examined gene expression profiles in whole A20 tumors from the different treatment groups.Among the 770 immune-regulation-related genes, 465 genes were significantly altered, with 203 genes significantly upregulated more than 2-fold.The initial analysis revealed a clearly distinct transcription signature of CpG-STAT3dODN-treated tumors compared to that of both vehicle- and CpG-scrODN-treated groups, which clustered together, indicating a limited outcome of TLR9 triggering alone.In contrast, CpG-STAT3dODN resulted in the significant upregulation of signaling pathways involved in immunoregulation.Further analysis indicated that TLR9 triggering and STAT3 inhibition elevated genes involved in antigen processing/presentation, such as major histocompatibility complex class I-related genes and class II-related genes, and modulated the essential regulators of T cell activation
Moreover, systemic CpG-STAT3dODN administration induced complete regression of the syngeneic A20 lymphoma, resulting in long-term survival of immunocompetent mice.Both TLR9 stimulation and concurrent STAT3 inhibition were critical for the immune-mediated therapeutic effects, since neither CpG7909 alone nor CpG7909 co-injected with unconjugated STAT3dODN extended mouse survival.The CpG-STAT3dODN induced expression of genes critical to antigen processing/presentation and Th1 cell activation while suppressing survival signaling.These effects resulted in the generation of lymphoma cell-specific, CD8/CD4-dependent T cell immunity protecting mice from tumor rechallenge.They demonstrate that combining the TLR9 agonist (CpG) with the STAT3 decoy inhibitor into a single oligodeoxynucleotide conjugate generates a two-pronged therapeutic effect, acting through direct and T cell-mediated antitumor activity against B cell lymphoma cells in vivo.
cell-based approaches, offers a novel strategy for safer and more effective immunotherapy of BCL and, potentially, other hematologic malignancies.PBMCs from anonymous healthy donors were collected in accordance with the Declaration of Helsinki under the institutional review board protocol 13378.19,Cell viability was >90%, as confirmed using flow cytometry.Mouse A20 and human OCI-Ly3 cells were from ATCC or DSMZ, respectively, and TMD8 and U2932 ABC-DLBCL cells were kindly provided by Dr. G. Inghirami.Cells were cultured in RPMI 1640/10% fetal bovine serum media.To generate A20LUC and OCI-Ly3LUC cells, parental cells were transduced with luciferase/mCherry using a lentiviral vector.53,All animal experiments followed established institutional guidance and approved protocols from the institutional animal care and use committee.BALB/c mice were purchased from the National Cancer Institute.NOD/SCID/IL-2RγKO (NSG) mice, originally from the Jackson Laboratory, were maintained at the COH.Mice were injected intravenously or s.c. with 5 × 10⁶ OCI-Ly3LUC or A20LUC cells in PBS, and lymphoma engraftment/progression was monitored using BLI on the AmiX.All oligonucleotides were synthesized in the DNA/RNA Synthesis Core by linking CpG7909 to STAT3dODN in a manner similar to one previously described.26,The resulting conjugates are shown below:For internalization/biodistribution studies, oligonucleotides were 3′-labeled using the Cy3 fluorochrome.EMSAs to detect STAT3 DNA-binding activity were performed as described previously.26,Briefly, 10 μg of nuclear extracts were incubated with ³²P-labeled hSIE oligonucleotide probes specific to STAT3,54 with anti-STAT3 antibody used for supershift control.Western blotting to detect STAT3, pSTAT3, BCL-XL, c-Myc, and β-actin expression was performed as described earlier.19,10⁴ DLBCL cells were seeded in 100 μL of 1% FBS Iscove’s Modified Dulbecco’s Medium in a 96-well plate and incubated in the presence of the designated reagent for 3 days.10 μL of Vita-Orange Cell Viability Reagent was added into each well, and the mixture was incubated at 37°C with 5% CO2 for 4 hr.Optical density at 450 nm was detected using Citation3, with OD at 690 nm as reference.For extra-/intracellular staining, fluorochrome-labeled antibodies and previously described staining protocols were used.17,29,Data collected on the BD-Accuri C6 and BD-Fortessa were analyzed using FlowJo software.The ELISPOT assay was performed following the manufacturer’s protocol, as described previously.28,Immunohistochemistry using anti-CD8 antibody was performed as described previously19 and was analyzed on the Observer II microscope.For confocal microscopy, cells were fixed in 2% paraformaldehyde; then, nuclei were stained using DAPI, and slides were analyzed using an inverted confocal microscope and the LSM Image Browser.27,Total RNA was extracted from whole OCI-Ly3 or A20 tumors using the mirVana miRNA Isolation Kit.RNA quality was verified using the Bioanalyzer-2100.Gene expression was analyzed using the PanCancer Immune Profiling panel for human or mouse, XT-CSO-HIP1-12 or XT-CSO-MIP1-12, respectively, on the nCounter system following the manufacturer’s recommendations.Results were analyzed using nSolver 3.0 software and automated normalization of raw data.An unpaired t test was used to calculate the two-tailed p value to estimate the statistical significance of differences between two experimental groups.A one-way ANOVA with Bonferroni post-test was applied to assess the statistical significance of differences between
multiple treatment groups. The relationship between two groups was assessed using correlation and linear regression. The p and r² values are indicated in the figures with asterisks: *p < 0.05; **p < 0.01; ***p < 0.001. Data were analyzed using Prism v.6.03 software. Study design: M.K. and X.Z.; Conducting experiments: X.Z., Z.Z., D.M., Y.-L.S., H.W., and T.A.; Data analysis/interpretation: M.K., S.F., L.K., X.Z., Z.Z., H.H.Y., and R.K.P.; Providing reagents: Y.L., Z.D., P.S., and L.K.; Writing manuscript: Z.Z., X.Z., and M.K.
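The statistical tests named above (unpaired two-tailed t tests for two-group comparisons, one-way ANOVA with a Bonferroni post-test for multiple groups) were run in Prism; the sketch below shows an equivalent workflow in Python purely for illustration. The group names and values are placeholders, not study data.

```python
# Minimal sketch of the reported statistics (Prism was used originally).
# Group names and numbers below are illustrative placeholders, not study data.
from itertools import combinations
import numpy as np
from scipy import stats

groups = {
    "PBS":           np.array([120.0, 135.0, 150.0, 142.0]),
    "CpG7909":       np.array([110.0, 118.0, 125.0, 121.0]),
    "CpG-STAT3dODN": np.array([45.0, 52.0, 38.0, 49.0]),
}

# Two experimental groups: unpaired, two-tailed t test.
t, p_two_group = stats.ttest_ind(groups["CpG7909"], groups["CpG-STAT3dODN"])
print(f"t test p (two-tailed): {p_two_group:.4f}")

# More than two groups: one-way ANOVA followed by Bonferroni-corrected
# pairwise t tests (a simple stand-in for Prism's Bonferroni post-test).
f, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA p: {p_anova:.4f}")

pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p * len(pairs), 1.0)   # Bonferroni correction
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```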
describe a synthetic oligonucleotide-based strategy for therapy of disseminated B cell lymphomas.
been improved significantly, more attention must be paid to the engineering of interfaces such as semiconductor/metal and semiconductor/insulator. How to realize efficient carrier injection from metal electrodes into semiconductors, and how to reduce trap states at the semiconductor/insulator interfaces, still remain open questions. There are also many choices of printing methods, such as inkjet printing, screen printing, gravure printing and reverse-offset printing. Since they have different advantages and disadvantages in spatial resolution, the range of film thickness, the range of ink viscosity, and printing speed, it is necessary to choose different methods for the respective layers of electronic devices. The performance of printed OTFT devices, not only mobility but also stability and reproducibility, has been improved significantly in the past decade. After the optimization of materials and process conditions, printed devices are not necessarily inferior to devices fabricated by vacuum deposition. Solution processes of organic semiconductors also have advantages in producing large crystalline domains and a clean semiconductor/insulator interface as a result of phase separation from a mixed solution. There have been many reports on digital circuits based on OTFT devices, since they are more resistant to variations in device parameters than analog circuits. The recent improvement in the stability and reproducibility of OTFT devices is paving the way for printed analog circuits, which may be utilized in the Internet of Things (IoT) society. One of the most important building blocks for sensor applications is the operational amplifier, which possesses high versatility in signal processing. Wearable smart sensors are one of the most promising applications of printed and flexible OTFT devices. Sensor applications based on printed OTFT devices have been demonstrated successfully for ion and lactate sensing. We believe that further research and development on functional materials, processes, devices, and circuit systems will open up a new flexible and printed electronics industry.
This article reports on the recent progress in the research and development of flexible and printed organic thin-film transistor (OTFT) devices, including organic materials, fabrication processes, electronic devices, and integrated circuits, and highlights their application to healthcare sensors.The fabrication process flow for printed OTFT devices is described with various printing methods such as inkjet printing, dispensing, reverse-offset printing, and spin-coating.Silver nanoparticle inks are commonly used to form interconnect and electrode layers with inkjet printing and reverse-offset printing.Highly crystalline small-molecule organic semiconductors are patterned on the printed source and drain electrodes.Various integrated circuits such as inverters, D flip-flops (D-FFs), and operational amplifiers (OPAs) are fabricated based on the printed OTFT devices, and their electrical performance is discussed in detail.The potential applications of printed integrated circuits to biosensors are successfully demonstrated on plastic film substrates, enabling various flexible and printed sensor systems for potential use in the Internet of things (IoT) society.
consisted of a rapid ramp to the desired setpoint and a 30 s dwell time. The lamps were then turned off and the sample was allowed to cool to room temperature naturally. Once the optimum setpoint had been determined, a second sample was treated at this temperature. The electrical performance of the CdTe devices was measured before and after successive RTP treatments under simulated AM1.5G light using a commercial solar simulator which was calibrated against a certified standard silicon cell. Five to seven individual devices were measured each time. Transmission electron microscopy (TEM) was used to investigate the cell microstructure. High-resolution TEM cross sections were obtained using an FEI Technai F20 field emission gun transmission electron microscope. The TEM samples were prepared by a standard in-situ lift-out method using a dual-beam FEI Nova 600 Nanolab scanning electron microscope. The ZnTe:Cu film thickness was found to be proportional to sputter time, with an observed deposition rate of 0.33 nm/s at 220 kHz. As a result, a 300 nm thick ZnTe:Cu layer was grown in only 15 min. The deposition rate can be further increased either by increasing the power applied to the pulsed-DC gun or by decreasing the pulse frequency. Pulse frequency was shown to have a small and non-linear impact on the deposition rate. The bright-field TEM image of the full CdTe device with the as-deposited ZnTe:Cu layer is shown in Fig. 3, and Fig. 4 shows a detailed view of the back contact region. All the thin film layers are well defined. The nanostructured ZnTe:Cu layer is uniformly deposited on top of the CdTe grains. To optimise the RTP process, a number of successive treatments were performed at progressively higher temperatures, starting at 260 °C, increasing in 20 °C increments and finishing at 400 °C once device performance started to degrade. RTP treatment resulted in increased efficiency compared to as-deposited devices, and the optimum temperature was found to be 380 °C. A second sample was exposed to a refined RTP treatment, whereby the temperature was increased in 10 °C increments around the optimum found previously. The devices exposed to treatment at 380–390 °C exhibited the best efficiency. This value is slightly higher than that found in the literature for a co-evaporated ZnTe:Cu layer. Fig. 7 compares J-V curves of devices in the as-deposited state, after an optimal RTP treatment, and from an under- and an overheated sample. Their main performance indicators are summarised in Table 1. All devices show similar carrier collection, with the main improvement coming from the increase in VOC and FF. After the optimum RTP treatment, VOC increased by more than 200 mV, reaching 727.3 mV, and FF by more than 13%, up to 70.32%. This is consistent with the elimination of the barriers at the back contact. The overall efficiency increased from 6.44% to 11.25%. At slightly higher than optimum RTP temperatures, the FF started to decrease and a roll-over appeared on the J-V curve. The current also starts to decrease slightly, which can be attributed to excess copper migrating towards the CdTe/CdS junction, creating recombination centres and causing shunting. This needs to be confirmed by further investigation of copper behavior after RTP of CdTe solar cells containing a sputtered ZnTe:Cu interface layer. In order to explain the device performance, an attempt was made to measure the carrier density using the capacitance-voltage (C-V) technique. These measurements gave unreliable and unrepeatable results, however, rendering them unusable. C-V measurement of CdTe solar cells is often complicated by the presence of deep-level defects, back contact barriers, finite absorber thickness and non-uniform carrier density. All of these can contribute to erroneous carrier density readings. This study demonstrates that inclusion of a ZnTe:Cu buffer layer at the back contact of CdS/CdTe solar cells improves device performance, dramatically increasing both VOC and FF. The ZnTe:Cu layer was successfully deposited by high-rate pulsed-DC magnetron sputtering, giving devices which performed well overall. The rapidity of the process could give a substantial time and cost saving in CdTe manufacturing. Device performance is, however, sensitive to the post-deposition annealing temperature. Highly effective and versatile RTP separates the deposition and activation of the ZnTe:Cu layer and perhaps reduces Cu migration into the CdTe absorber, which could be a step towards improved device stability.
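The reported metrics can be cross-checked with the standard one-sun relation η = VOC × JSC × FF / Pin (Pin = 100 mW/cm² under AM1.5G). The short-circuit current density is not quoted in the text, so the JSC used below (≈22 mA/cm²) is an assumption chosen to be consistent with the reported VOC, FF and 11.25% efficiency.

```python
# Back-of-the-envelope check of the reported device metrics.
# J_SC is assumed (~22 mA/cm^2); V_OC and FF are taken from the text.
P_IN = 100.0        # mW/cm^2, AM1.5G one-sun illumination

def efficiency(voc_mv: float, jsc_ma_cm2: float, ff_percent: float) -> float:
    """Return power conversion efficiency in percent."""
    return (voc_mv / 1000.0) * jsc_ma_cm2 * (ff_percent / 100.0) / P_IN * 100.0

eta_optimal = efficiency(voc_mv=727.3, jsc_ma_cm2=22.0, ff_percent=70.32)
print(f"estimated efficiency after optimal RTP: {eta_optimal:.2f} %")   # ~11.25 %
```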
Copper is commonly used to lower the back contact barrier in CdTe solar cells, but an excessive amount of copper diffusing through the cell is harmful for the device performance and stability.In this work a copper-doped ZnTe (ZnTe:Cu) buffer layer was incorporated in between CdTe and gold metal contact by high-rate pulsed DC magnetron sputtering.The back contact was then activated by rapid thermal processing (RTP) resulting in spectacular improvement in key device performance indicators, open circuit voltage (VOC) and fill factor (FF).
was indeed validated by studying various genotypes of maize.Maize ‘FL’ genotypes with few but long LRs performed better under low-N than ‘MS’ genotypes with many but short LRs .LR density was found to be negatively correlated with the root depth which was positively correlated with plant N content, shoot growth and grain yield under low N conditions.One possible explanation is that allocation of resources required for root growth such as sugars is more efficiently relocated in FL genotypes, thus decreasing the cost of root growth while improving exploration of the soil for nitrate.Similarly, rice genotypes that showed less inhibition of root length exhibited higher nitrate uptake, nitrogen use efficiency and yield under low N conditions .In the case of P deficiency, a topsoil-foraging root system is considered to be better than a deep root system as phosphate often accumulates in topsoil layers .Indeed, in maize genotypes higher LR branching density was correlated with higher yield on P deficient soils .Evidence for the importance of total root length and root surface area for P acquisition can also be found in phenotypes related to natural and transgenic alleles of PHOSPHORUS STARVATION TOLERANCE 1, a gene underlying a major QTL of P-deficiency tolerance in rice .Furthermore, overexpression of expansins, for example, GmEXPB2 in soybean or TaEXPB23 in tobacco, increased LR number and improved phosphorus use efficiency .Root hair elongation also contributes to P acquisition under P deficient conditions as shown for the root hair-less Arabidopsis line, NR23 .Using inbred lines of common bean differing for shallow basal root angle and root hair length and density, it was found that the traits synergistically improved P acquisition under phosphate deprivation .While these studies strongly support the importance of RSA for nutrient uptake and yield, they are mostly correlative and often limited by poor control of the nutrient profile in the soil and lack of knowledge on depletion kinetics in the rhizosphere.Furthermore, some of the observed differences may be based on differences in overall root size rather than differences in the spatial arrangement of individual root parts.To obtain a better understanding of the exact consequences, costs and constraints of different RSAs in different root environments it will be necessary to develop an experimental system that allows precise spatiotemporal control and monitoring of nutrient concentrations in the rhizosphere while enabling continuous measurement of RSA features and physiological parameters.Nutrient sensing and signalling continues to be an important and active field of research, and quantification of root system architecture as a phenotypic output has provided excellent opportunities to identify crucial signalling components.Over recent years novelty has particular arisen from firstly, the integration of transcriptional, translational, redox and cell-wall based regulatory processes, secondly, the discovery of peptide signals in local and systemic nutrient signalling, thirdly, the identification of molecular hubs underpinning interactive effects of different nutrients, and finally, the correlation of RSA with nutrient use efficiency and yield in crop genotypes.A concerted effort is now required to better control and monitor the nutrient profile around the root and to systematically manipulate several nutrients as well as other environmental factors that determine growth and development.Combined with genetics, particularly the exploration of 
natural variation, such a systemic approach could generate the necessary knowledge to generate models that can predict RSA for any genotype for any given combination of nutrients.Linking RSA to nutrient uptake, growth and yield would be the next step.It is possible that nutrient-use efficiency under a moderate fertilizer regime could be increased by de-sensitizing the plant for nutrient-deficiency signals.Exploration of this and other crop improvement strategies requires a precise understanding of how nutrient-signalling networks are hard-wired.Papers of particular interest, published within the period of review, have been highlighted as:• of special interest,•• of outstanding interest
The spatial arrangement of the plant root system (root system architecture, RSA) is very sensitive to edaphic and endogenous signals that report on the nutrient status of soil and plant.Signalling pathways underpinning RSA responses to individual nutrients, particularly nitrate and phosphate, have been unravelled.RSA is potentially an important trait for sustainable and/or marginal agriculture.It is generally assumed that RSA responses are adaptive and optimise nutrient uptake in a given environment, but hard evidence for this paradigm is still sparse.
The data of this study were collected from an in vivo animal study aiming at quantitative assessment of epileptogenesis in a rapid kindling model in rats. Considering the unique features of EEG for seizure prediction, this article presents the raw data of the spectral analyses of the field potentials recorded during the progression of amygdala kindling in rats, used to determine the quantitative features of the main phases of kindling acquisition. In this paper, stages 1 and 2 of kindling were considered initial seizure stages, stage 3 the localized seizure stage, and stages 4 and 5 generalized seizure stages. Tables 1–3 present the spectral powers of the different sub bands of the EEGs in the ISSs, LSSs, and GSSs of the kindling process, respectively. Moreover, Table 4 presents the percentage of the different sub band powers in the control group. Adult male rats weighing 200±10 g were housed individually under standard conditions (12-h light:12-h dark cycle). Rats were randomly divided into two groups and anesthetized by intraperitoneal injection of a ketamine and xylazine mixture. One tripolar stainless steel electrode was implanted in the amygdala using Paxinos and Watson atlas coordinates (anteroposterior: -2.5 mm; lateral: 4.8 mm; vertical: 7.2 and 0.2 mm below the skull). Three holes were drilled: one for positioning a monopolar electrode attached to a screw located near the frontal lobe as ground and reference, and the other two for anchor screws. Electrodes and screws were fixed using acrylic dental cement and attached to a socket. The protocol of this study was approved by the local ethics committee of Ahvaz Jundishapur University of Medical Sciences and was in complete compliance with the National Institutes of Health guide for the care and use of laboratory animals. Following a 10-day recovery period after surgery, the threshold intensity was determined using a 3 s monophasic square wave of 50 Hz, initially applied at 30 µA and increased in steps of 15 µA at 15 min intervals until at least 6 s of afterdischarges emerged. All rats in the kindled group were subjected to daily stimulation using a 3 s train of 50 Hz monophasic pulses of 1 ms duration at threshold intensity, applied 12 times daily with 5 min intervals, whereas sham animals only experienced the stimulation conditions and received placebo stimulation. Therefore, the EEG of the sham animals can be considered a baseline. Behavioral development of kindling acquisition was scored according to Racine stages. This process was continued until stage 5 of kindling emerged. EEG signals were recorded from the implanted electrode in the amygdala and monitored with an electro module system which was connected to a computer using e-probe software. During kindling acquisition, the starting and ending time of each stage of kindling was saved as a text file, which was used for extracting each stage. Data were digitized at a sampling rate of 10 kHz. Moreover, the electro module automatically applied a filter at the 50 Hz frequency to remove its effect from the signals. Recorded EEG signals were saved as binary files. These binary files were then imported into EEGLAB software for the pre-processing stage. Moreover, a band-pass filter between 0.5 and 60 Hz was applied to remove the effect of other frequencies. In EEGLAB, we separated the EEG signals of each stage, and the obtained signals were saved as dataset files which can be imported into MATLAB. These signals were then transferred into the frequency domain by Fast Fourier Transform, and MATLAB 2013b was used to calculate their power spectrum and the power of each sub band, including delta, theta, alpha, beta, and gamma.
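A minimal sketch of the sub-band power computation described above, assuming Welch's method as a stand-in for the MATLAB FFT-based power spectrum and a synthetic trace in place of the recorded field potentials; the band limits follow the delta/theta/alpha/beta/gamma definitions used for this dataset.

```python
# Sketch of the sub-band power analysis (originally done in MATLAB 2013b).
# Welch's method is used here as a convenient PSD estimator; the EEG trace
# is synthetic noise standing in for an amygdala field-potential recording.
import numpy as np
from scipy import signal
from scipy.integrate import trapezoid

FS = 10_000                       # sampling rate (Hz), as in the recordings
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 28), "gamma": (28, 40)}

rng = np.random.default_rng(0)
eeg = rng.standard_normal(10 * FS)          # 10 s placeholder signal

# Band-pass filter between 0.5 and 60 Hz, as applied in EEGLAB.
b, a = signal.butter(4, [0.5, 60], btype="band", fs=FS)
eeg_filt = signal.filtfilt(b, a, eeg)

# Power spectral density and absolute power of each sub band.
freqs, psd = signal.welch(eeg_filt, fs=FS, nperseg=4 * FS)
band_power = {}
for name, (lo, hi) in BANDS.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power[name] = trapezoid(psd[mask], freqs[mask])

total = sum(band_power.values())
for name, p in band_power.items():
    print(f"{name}: {100 * p / total:.1f} % of 1-40 Hz power")
```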
The data represented here are in relation with the manuscript “Quantitative assessments of extracellular EEG to classify specific features of main phases of seizure acquisition based on kindling model in Rat” (Jalilifar et al., 2017) [1] which quantitatively classified different main stages of the kindling process based on their electrophysiological characteristics using EEG signal processing.The data in the graphical form reported the contribution of different sub bands of EEG in different stages of kindling- induced epileptogenesis.Only EEG signals related to stages 1–2 (initial seizure stages (ISSs)), 3 (localized seizure stage (LSS)), and 4–5 (generalized seizure stages (GSSs) were transferred into frequency function by Fast Fourier Transform (FFT) and their power spectrum and power of each sub bands including delta (1–4 Hz), Theta (4–8 Hz), alpha (8–12 Hz), beta (12–28 Hz), gamma (28–40 Hz) were calculated with MATLAB 2013b.Accordingly, all results were obtained quantitatively which can contribute to reduce the errors in the behavioral assessments.
other constituents of the buffer solution and other environmental factors. This experimental study investigates the decay pattern of insulin in Hepes buffer solution when diluted to concentrations down to 35 pM. Experimental measurements were spread over three days. On comparing the results for each day, it is observed that on the first day there is no consistent pattern, whereas on the second day consistency was observed in the measurements of the varying insulin concentrations. Therefore, it can be concluded that the solution takes 24 h to stabilize. It has been reported that 24–37 h are required for homogenization at 4°C after diluting a protein solution. The results obtained here show the same behaviour. The experimental data for the third day, with the same insulin samples, start showing random results with smaller variation in the measured values compared to the second-day data for all insulin concentrations. These results indicate that the insulin solution became electrically less active at all concentrations and that degradation had started. This finding is consistent with the study presented by Putz et al., in which the biological activity of a protein directly depends on its dipole moment, and it is a known fact that the dipole moment is linked with permittivity and relaxation time. From the results presented here for the second day, it was observed that permittivity and relaxation time increased with increasing insulin concentration. This indicates that the results are now in agreement with Basey et al., who found that for lower protein concentrations in solution the ratio of the change in permittivity to the change in relaxation time remains constant, i.e. the two are directly proportional. Furthermore, these results indicate that on the second day the insulin solutions were in an active state according to Putz et al. The analysis presented in this work identifies the activity of the insulin solution over three days. Data are analysed by studying the change in permittivity, relaxation time and time delay at 17 GHz. From these analysed results it was observed that when an insulin solution of pM concentration is prepared in buffer solution, it requires 24 h to saturate and stabilize. After 48 h it starts decaying even if the sample is maintained at 5°C. This shows that permittivity, relaxation time, and time delay can be indicative parameters for studying the activity of biomolecules in different media. This study is relevant to drug design and delivery, and the method can prove to be a quick way to understand the behaviour of biomolecules under different conditions.
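The stated link between dipole moment, permittivity and relaxation time is commonly captured by the single-relaxation Debye model, ε*(ω) = ε∞ + (εs − ε∞)/(1 + jωτ). The sketch below evaluates this model at the 17 GHz measurement frequency; the static permittivity, high-frequency permittivity and relaxation time are illustrative values, not the measured insulin data.

```python
# Debye single-relaxation model evaluated at the 17 GHz measurement frequency.
# eps_s, eps_inf and tau below are illustrative, not the measured values.
import numpy as np

def debye(freq_hz: float, eps_s: float, eps_inf: float, tau_s: float) -> complex:
    """Complex relative permittivity eps' - j*eps'' of a Debye dielectric."""
    omega = 2 * np.pi * freq_hz
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau_s)

f_meas = 17e9                       # WGM resonator measurement frequency
eps = debye(f_meas, eps_s=74.0, eps_inf=5.0, tau_s=9e-12)   # water-like guess
print(f"eps' = {eps.real:.1f}, eps'' = {-eps.imag:.1f} at 17 GHz")
```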
Bio-molecule when isolated from its natural ecological condition is subjected to rapid decay.This decay leads to change in polarization and permittivity of molecule.This study presents an experimental analysis of the decay pattern of pM concentration of insulin using whispering gallery mode (WGM) dielectric resonator (DR) method.Analysis is carried out by comparing the permittivity, relaxation time and time delay for three days.It is observed that different pM concentrations of insulin solutions start to decay after 24 h at 5°C.Salient features of the present method are: .This method presents time dependent analysis to determine the activity of protein solution by measurement of permittivity, relaxation time and time delay..In the present paper activity of pM concentration of Insulin in buffer solution is tested for three days..This method is a general method and can be a fundamental basis to test the activity of bio-molecules in solution.
.More specifically, by incorporating RNA modules into the gRNA molecules, researcher can now locate specific protein complexes, catalytic RNAs or aptamers to target genes.The vast majority of lncRNAs exhibit a nuclear-specific localization .Recent evidences point towards a role for these molecules in nuclear architecture, formation of higher order chromosomal structures and therefore control of gene expression .Only now we begin to understand the 3D principle behind such exquisite organization and we can predict a future application in genetic regulation by simply targeting specific lncRNAs to elicit controlled structural changes in the assembly of chromatin domains that in turn will profoundly affect the expression of desired targets.With the recent advances in tools to explore RNA biology it has become evident the importance of lncRNAs in the regulation of multiple physiological and pathological pathways.We expect an exponential increase in the discovery rate and functional characterization of novel transcripts involved in the control of several cellular pathways.The use of lncRNAs as pharmacological targets and therapeutic tools is still in its infancy.For the past five decades molecular biology and biochemistry have focused their efforts on protein coding genes and their products.Unlike their protein counterparts, we still do not possess a thorough understanding for the rules of engagement of lncRNAs and complete picture of their mechanisms of action.Given the versatility of these molecules in terms of targets and mechanisms, only by cracking and deciphering the lncRNA genetic code of life will allow scientists to fully take advantage of a new set of toolkits for therapeutic approaches.
Long non-coding RNAs constitute the major transcriptional output of human genome.They are conveniently defined as RNA molecules longer than 200 nucleotides and not encoding proteins.Recent studies have highlighted the central role played by these transcripts in several physiological and pathological processes.Here we highlight how long non-coding RNAs have been used for diagnostic and therapeutic approaches and discuss how in the near future we can envision their application as pharmacological agents.
commitment, and cognitive complexity."Sixth, we used a modified version of Chung and Ding's Sales Locus of Control Scale. "The adapted Chung and Ding scale, which is in turn based on Levenson's Locus of Control Scale, needs further empirical validation in the CEO context.While our adapted scale was well received by the five CEOs in the pilot study and as Appendix A shows acceptable loading, reliability, and confirmatory factor analysis tests, we encourage future studies to further validate our scale and also develop and apply alternative measures for CEO locus of control."For example, Hodgkinson's Strategic Locus of Control Scale could be preferable when the interest is locus of control beliefs in relation to issues of strategic management.Seventh, it is possible that burnout may be associated with contingent compensation that may or may not be fully associated with CEO effort.Systematic and unsystematic risk may influence firm performance, so CEO compensation may be significantly influenced by factors beyond felt and actual discretion.As the nature of contingent compensation can vary from one industry to another, we call on future studies to assess the effect of variations in contingent compensation within and between industries.Further, while the CEO pay gap is not significant in Sweden, compensation can also be a buffer mechanism such that significantly higher pay could counter perceptions and the realization of burnout.However, individual characteristics play a key role in explaining these relationships.We call on future studies to assess the role of personal characteristics in explaining the relationship between cognitive and monetary buffers in reducing burnout.In a similar vein, we encourage future studies to evaluate whether block shareholding—that is, having a large share of stock ownership held by a small number of investors—influences CEO burnout.Finally, our findings do not cover the effects of job-person fit on the relationship between burnout and firm performance.In their theoretical model, Maslach and Leiter proposed that the greater the mismatch between the person and his or her job, the greater the likelihood of burnout.We encourage further CEO burnout studies to cover the job-person fit paradigm along with a broader and more complex conceptualization of the person situated in the job context.While the recent practitioner literature has increasingly highlighted that CEO burnout is a critical problem, prior research has not concentrated on CEO burnout, its effects on firm performance, or ways to manage it.This article drew upon upper echelons theory and existing research on employee burnout in organizations and suggested and confirmed that CEO burnout has a negative association with firm performance.The results of our study show that of the managerial discretion components related to CEO structural power, CEO duality ameliorates the negative relationship between CEO burnout and firm performance.Further, of the internal organizational forces shaping CEO discretion, firm size and resource availability influence the CEO burnout–firm performance relationship.More specifically, performance is significantly reduced in larger and resource-constrained organizations whose CEOs report higher burnout."Interestingly, our study did not find support for the ameliorating role of CEOs' internal locus of control.The current research suggests that burnout should be managed if it cannot be avoided and that CEO duality, firm size, and resource availability play critical roles in this outcome.
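The moderation effects discussed above (CEO duality, firm size and resource availability moderating the burnout–performance link) correspond to interaction terms in a regression model. The sketch below shows one way such a specification could look in statsmodels; the variable names, synthetic data and coefficients are hypothetical and do not reproduce the authors' model.

```python
# Illustrative moderated-regression specification (not the authors' exact model).
# Column names and the generated data are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 156                                     # sample size matching the study's n
df = pd.DataFrame({
    "burnout":   rng.normal(3.0, 1.0, n),
    "duality":   rng.integers(0, 2, n),     # 1 = CEO is also board chair
    "firm_size": rng.normal(5.0, 1.0, n),   # e.g. log(employees), hypothetical
    "resources": rng.normal(0.3, 0.1, n),
})
# Placeholder outcome with a negative burnout effect softened by duality.
df["performance"] = (-0.3 * df["burnout"]
                     + 0.2 * df["burnout"] * df["duality"]
                     + rng.normal(0, 0.5, n))

# Interaction terms capture the moderating roles of duality, size and resources.
model = smf.ols(
    "performance ~ burnout * duality + burnout * firm_size + burnout * resources",
    data=df,
).fit()
print(model.params)
```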
Despite the possibility of burnout resulting from dynamics in firms' upper echelons, little if any work has focused on chief executive officer's (CEO's) burnout and firm performance.Drawing on managerial discretion theory, this article analyzes the influence of CEO burnout on firm performance and the moderating roles of the individual (CEO locus of control), structural power (CEO duality and CEO tenure), and organizational characteristics (size, age, and resource availability) related to managerial discretion.Using a sample of 156 CEOs in Swedish firms, we find a negative association between CEOs who report higher burnout and firm performance.Our results confirm that CEO duality and resource availability ameliorate and firm size exacerbates the negative association between CEO burnout and firm performance.Contrary to our expectations, CEO locus of control, CEO tenure, and firm age do not influence this relationship.We discuss the implications of our research for upper echelons theory and strategic leadership theory.
been reported to strongly influence the binding of neutralising antigenic site 2 mAbs in serotype O FMDV .Similarly VP2 191 has been recently shown to be linked to serotype O and A antigenic site 2 using a reverse genetics approach .VP1 45 involving antigenic site 3 and VP1 137–141, 197 involving antigenic site 1 have been reported to be critical in serotype O mar-mutant studies .In addition neutralising antigenic site 2 has been reported to be immunodominant in the polyclonal response of serotype O FMD-vaccinated animals followed by site 1 .It therefore, appears that the changes in these residue positions could be tolerated as it has not altered the antigenicity of these viruses greatly.However continuous monitoring of the field isolates needs to be carried out to identify emergence of antigenic variants in future.In summary, the serotype O vaccine strains used in this study are a good match with the circulating field isolates in East Africa.This indicates that serotype O vaccine strains are broadly cross-reactive and, therefore could be used as vaccines to control the disease in the region.In addition to the locally available vaccine, O/KEN/77/78, internationally available commercial vaccines like O/PanAsia-2 and O/Manisa could also cater to the needs of the region.There is always a risk of introduction of new topotypes/lineages of the viruses to the African countries as exemplified by the introduction of A-Iran-05 viruses and O-Ind-2001d viruses in Libya in 2009 and 2013, respectively, and their subsequent spread to other North African countries.In addition, most of the countries in the region are preparing to enter and/or entering to the FMD Progressive Control Pathway as described by OIE/FAO, and would require a robust matching vaccine in the initial stage of the program.Therefore, close monitoring of the outbreak strains in the region along with regular vaccine matching studies is crucial to identify emergence of antigenic variants and also to evaluate the suitability of v/s for use in FMD control programmes in the region.The need to develop a new v/s should also be identified in a timely fashion to prevent future outbreaks.
Foot-and-mouth disease (FMD) is endemic in Eastern Africa with circulation of multiple serotypes of the virus in the region.Most of the outbreaks are caused by serotype O followed by serotype A.The lack of concerted FMD control programmes in Africa has provided little incentive for vaccine producers to select vaccines that are tailored to circulating regional isolates creating further negative feedback to deter the introduction of vaccine-based control schemes.In this study a total of 80 serotype O FMD viruses (FMDV) isolated from 1993 to 2012 from East and North Africa were characterized by virus neutralisation tests using bovine antisera to three existing (O/KEN/77/78, O/Manisa and O/PanAsia-2) and three putative (O/EA/2002, O/EA/2009 and O/EA/2010) vaccine strains and by capsid sequencing.Genetically, these viruses were grouped as either of East African origin with subdivision into four topotypes (EA-1, 2, 3 and 4) or of Middle-East South Asian (ME-SA) topotype.The ME-SA topotype viruses were mainly detected in Egypt and Libya reflecting the trade links with the Middle East countries.There was good serological cross-reactivity between the vaccine strains and most of the field isolates analysed, indicating that vaccine selection should not be a major constraint for control of serotype O FMD by vaccination, and that both local and internationally available commercial vaccines could be used.The O/KEN/77/78 vaccine, commonly used in the region, exhibited comparatively lower percent in vitro match against the predominant topotypes (EA-2 and EA-3) circulating in the region whereas O/PanAsia-2 and O/Manisa vaccines revealed broader protection against East African serotype O viruses, even though they genetically belong to the ME-SA topotype.
a holistic simulation of the energy and resource use within a factory. Based on the literature reviewed, Table 5.2 illustrates the following: there is currently no evidence in the reviewed literature of any of the software tools considered having the functionality to model across all production facility layers and link those models together; AnyLogic, IBPT (including the adapted IBPT), IDA ICE, IES VE, Microsoft Excel, the Modelica Buildings Library, SIMFLEX/3D, Simulink (including MATLAB), TRNSYS and WITNESS all offer the potential functionality to link the simulation of different production facility layers, although in all cases further research is required to fully confirm this hypothesis. Any of the software tools identified could be utilised, if deemed appropriate, in a co-simulation manner to simulate specific aspects or an individual production facility layer. The identification of open-source software tools is useful for future research, as those only available through commercial licensing arrangements may not easily allow for development towards a holistic production facility energy use simulation. Energy use by industry is composed of several different main end-uses that traditionally utilise separate and segregated simulation methods for energy prediction. A holistic factory simulation would enable energy savings to be identified within elements across the entire operating spectrum and is therefore more likely to achieve the greatest energy efficiency savings or reduction in energy use. Previous work contained within this paper identified the efforts taken to date in order to achieve this goal. These efforts can be categorised into two types of holistic simulation: co-simulation, which utilises multiple “best in discipline” software platforms and couples them to share data between simulation iterations; and hybrid simulation, which utilises a single software platform capable of modelling all entities, including interdependencies, to achieve a holistic factory simulation. Prior to any simulation efforts, the scale of simulation required should be assessed. For an SME it may be more appropriate to use energy metering and numerical approaches to identify potential energy savings or efficiency improvements, whereas a holistic facility approach could be better suited to heavier, more complex industries such as the manufacture of coke, refined petroleum and chemical products, as well as the automobile manufacturing industry. Some useful objective functions have been identified within the literature review to aid in identifying changes to the manufacturing environment that are more beneficial than other potential options. These include minimising energy use, total production time, total energy used and total water consumed, as well as maximising energy efficiency and throughput. These objective functions can be extended as required to include the minimisation of any resource used within a manufacturing facility. Modelling buildings and associated manufacturing processes from scratch can be time consuming and costly, which is a disincentive to many companies that may have already developed a BIM or manufacturing production model for the building under assessment. As such, the use of existing models/simulations, or a rapid method of determining building geometry using site measurements or existing BIM data within a VE, would be extremely beneficial. Some newer buildings will have an associated BIM; however, this contains a lot of information not required for energy modelling and would most likely need converting into
appropriate formats depending on the software selected.Combining the benefits of BEM and MPS to achieve a holistic manufacturing facility or factory simulation requires a unique simulation approach.This paper reviewed the modelling and simulation tools available, or developed as part of previous research, that combine elements of BEM and MPS.In doing so the challenges of combining BEM and MPS were highlighted.This paper has only focused on modelling and simulation tools that have been developed or applied to a manufacturing facility ignoring the tools available solely for BEM of residential and commercial buildings or MPS that do not consider the use of energy.Consideration was given to the different challenges associated with modelling and simulating existing as-built facilities or future building designs.The software commercially available that the existing modelling and simulation tools are embedded within or are “bolt-ons” to have been discussed.An emphasis was placed on any existing tools that offer a holistic simulation of a manufacturing facility in terms of energy use.This paper excluded research on the calibration, sensitivity and validation of the building and process energy modelling and simulation tools described within this paper as these are extensive topics by themselves.All existing modelling and simulation tools discussed were assumed to be appropriately calibrated and validated in the existing literature.In addition, methods of renewable energy generation for a factory were also excluded with the focus being placed on increasing energy efficiency or reducing energy use.This paper has highlighted the challenges of BEM in manufacturing through a review of existing literature.The review identified that progress has been made in attempting to simulate the energy use across different system levels within a manufacturing facility including interdependencies; machines, process lines, TBS and building shell.However, the progress to date has generally been simplistic and “proof of concept” in nature resulting in possible solutions towards a holistic energy simulation but requiring further development to obtain a comprehensive simulator.This paper has reviewed the developed modelling approaches and the tools available for use in future research.Requirements have been identified for the development of a holistic energy simulation tool for use in a manufacturing facility, that is capable of simulating interdependencies between different building layers and systems, and a rapid method of 3D building geometry generation from site data or existing BIM in an appropriate format for energy simulations of existing factory buildings,In addressing these research areas, industry will be empowered to make
Manufacturing is a competitive global market and efforts to mitigate climate change are at the forefront of public perception.Current trends in manufacturing aim to reduce costs and increase sustainability without negatively affecting the yield of finished products, thus maintaining or improving profits.Effective use of energy within a manufacturing environment can help in this regard by lowering overhead costs.Significant benefit can be gained by utilising simulations in order to predict energy demand allowing companies to make effective retrofit decisions based on energy as well as other metrics such as resource use, throughput and overhead costs.Traditionally, Building Energy Modelling (BEM) and Manufacturing Process Simulation (MPS) have been used extensively in their respective fields but they remain separate and segregated which limits the simulation window used to identify energy improvements.This review details modelling approaches and the simulation tools that have been used, or are available, in an attempt to combine BEM and MPS, or elements from each, into a holistic approach.Such an approach would be able to simulate the interdependencies of multiple layers contained within a factory from production machines, process lines and Technical Building Services (TBS) to the building shell.Thus achieving a greater perspective for identifying energy improvement measures across the entire operating spectrum and multiple, if not all, manufacturing industries.In doing so the challenges associated with incorporating BEM in manufacturing simulation are highlighted as well as gaps within the research for exploitation through future research.This paper identified requirements for the development of a holistic energy simulation tool for use in a manufacturing facility, that is capable of simulating interdependencies between different building layers and systems, and a rapid method of 3D building geometry generation from site data or existing BIM in an appropriate format for energy simulations of existing factory buildings.
Point clouds, as a form of Lagrangian representation, allow for powerful and flexible applications in a large number of computational disciplines.We propose a novel deep-learning method to learn stable and temporally coherent feature spaces for points clouds that change over time.We identify a set of inherent problems with these approaches: without knowledge of the time dimension, the inferred solutions can exhibit strong flickering, and easy solutions to suppress this flickering can result in undesirable local minima that manifest themselves as halo structures.We propose a novel temporal loss function that takes into account higher time derivatives of the point positions, and encourages mingling, i.e., to prevent the aforementioned halos.We combine these techniques in a super-resolution method with a truncation approach to flexibly adapt the size of the generated positions.We show that our method works for large, deforming point sets from different sources to demonstrate the flexibility of our approach.
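The temporal loss is described only at a high level here (penalising higher time derivatives of the generated point positions). The sketch below is a simplified finite-difference reading of that idea, assuming point correspondence across frames; the weights and data are illustrative and do not reproduce the paper's formulation.

```python
# Simplified finite-difference reading of a temporal-coherence penalty for
# point sets: discourage large frame-to-frame velocity and acceleration of the
# generated positions. Assumes the N points correspond across frames; weights
# and shapes are illustrative, not the paper's actual loss.
import numpy as np

def temporal_loss(points: np.ndarray, w_vel: float = 1.0, w_acc: float = 1.0) -> float:
    """points has shape (T, N, 3): T frames of N generated 3D positions."""
    vel = np.diff(points, n=1, axis=0)      # first time derivative, (T-1, N, 3)
    acc = np.diff(points, n=2, axis=0)      # second time derivative, (T-2, N, 3)
    return (w_vel * np.mean(np.sum(vel ** 2, axis=-1))
            + w_acc * np.mean(np.sum(acc ** 2, axis=-1)))

frames = np.cumsum(np.random.randn(8, 256, 3) * 0.01, axis=0)  # toy trajectory
print(f"temporal penalty: {temporal_loss(frames):.5f}")
```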
We propose a generative neural network approach for temporally coherent point clouds.
of a rule in their sequential order.To gain better insights into the rules distribution in relation to their occurrence and length, KAMAS provides two different histograms, each combined with a range slider which can be used as filtering option.R2 Visual representation: In general, the decision for an interface similar to programming IDEs was well received by the participants.It was easy for them to understand the handling and to work with it."Additionally, the participants and the focus group members appreciated the prototype's wide range of coordinated features, which they regarded as useful for data exploration while not being overloaded.A particularly interesting outcome of the tests from a visualization design perspective is that the arc-diagrams did not provide the benefits we expected.One participant realized that something interesting was in the data, but he could not pinpoint the meaning.Yet, the simple connection line between the rule overview table and the rule detail table, which originally was considered “nice to have” by designers, turned out to be a much appreciated and valuable feature.Thus, the connection line supports the analysts in finding the connection between these two representations.R3 Workflow: All included filter methods were very well received.Additionally, the dynamic query concept and its fast response was understood very well by the participants.In general, they described the relationships of the filters as intuitive and the usage of the KDB by drag and drop actions as easy to use.By subsequent focus group meetings, further improvements were integrated, such as the colored background of the filters, used to emphasize visualization elements and connections.One of the major inputs of the industry focus group was that not only malignant rules, but also benign rules are important for good analysis.Therefore, participants suggested changing the three-step red color scale to a five-step scale with a neutral color in the center.Based on the insights we gained from the tests we found that for the participants, the visualization of the expert knowledge and also the handling of the KDB was easy to understand and use.R4 Expert knowledge: As previously mentioned, the KDB tree was well received by the participants and focus groups members.Another improvement added as a result of the user study was the addition of brackets with numbers of included rules at the end of each node.Additionally, we added a counter for each type of represented knowledge in the interface.The industry focus group members noted that the newly added numbers were helpful for getting a better overview of the loaded data.Reflecting on the insights we gained in the performed tests, we found out that the analysts appreciated and could benefit from the externalized expert knowledge by sharing and activating or deactivating KDB elements during the analysis process.Categorization of KAMAS: If we categorize KAMAS along the Malware Visualization Taxonomy, we can see that KAMAS can be categorized on the one hand as a Malware Forensics tool which regards to the analysis of malicious software execution traces.On the other hand, KAMAS can also be categorized as a Malware Classification tool for malware comparison in relation to the automated analysis.This automated analysis works on the included call sequences contained in a loaded file and is based on the explicit knowledge stored in the KDB.Lessons learned: During this design study, we learned that explicit knowledge opens the possibility to close the gap 
between different categories of malware analysis systems.Thus, it combines features for Malware Forensics and Malware Classification.In contrast to other malware analysis systems which build their visual metaphors directly on the output of the data providers, KAMAS uses an input grammar generated by a combination of Malheur and Sequitur for cluster and data classification.Therefore, we use analytical and visual representation methods to provide a scalable and problem-tailored visualization solution following the visual analytics agenda.For keeping up with the large number and dynamic evolution of malware families, malware analysts need to continuously adapt the settings of their visualization systems, whereby interactivity is a key strength of visualization systems.Malware analysis in particular profits from extensive interaction and annotation features as it is a very knowledge-intensive job."By providing knowledge-oriented interactions, externalized knowledge can subsequently be used in the analysis process to improve analysts' performance.Transferability: The knowledge generation loop can be generalized for other domains taking into account domain-specific data structures and patterns of interest.On a general level, the workflow for knowledge generation and extraction is mostly similar and always includes the system user as an integral part of the loop.Focusing on n stepped colored highlighting and easy to understand summarization techniques it is faster and more effective to find similarities in the data.In future work, we will extend our research to a different problem domain in order to generalize our results."Therefore, we have to adapt and extend the knowledge-assisted visualization methods and the interface as necessary in relation to the users' needs.
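The occurrence and length histograms with range sliders act as dynamic-query filters over the extracted rules. A minimal sketch of that filtering step is shown below; the rule structure and field names are assumptions for illustration and not KAMAS's actual data model.

```python
# Minimal dynamic-query style filter over extracted call-sequence rules.
# The Rule fields below are assumed for illustration only.
from dataclasses import dataclass

@dataclass
class Rule:
    calls: tuple[str, ...]     # system/API call sequence covered by the rule
    occurrences: int           # how often the rule occurs in the loaded traces

def filter_rules(rules, occ_range=(0, 10**9), len_range=(0, 10**9)):
    """Keep rules whose occurrence count and length fall inside both ranges."""
    return [r for r in rules
            if occ_range[0] <= r.occurrences <= occ_range[1]
            and len_range[0] <= len(r.calls) <= len_range[1]]

rules = [Rule(("open", "read", "close"), 42),
         Rule(("connect", "send"), 7),
         Rule(("open", "write", "write", "close"), 3)]
print(filter_rules(rules, occ_range=(5, 100), len_range=(2, 3)))
```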
IT-security experts engage in behavior-based malware analysis in order to learn about previously unknown samples of malicious software (malware) or malware families.For this, they need to find and categorize suspicious patterns from large collections of execution traces.Currently available systems do not meet the analysts’ needs which are described as: visual access suitable for complex data structures, visual representations appropriate for IT-security experts, provision of workflow-specific interaction techniques, and the ability to externalize knowledge in the form of rules to ease the analysis process and to share with colleagues.To close this gap, we designed and developed KAMAS, a knowledge-assisted visualization system for behavior-based malware analysis.This paper is a design study that describes the design, implementation, and evaluation of the prototype.We report on the validation of KAMAS with expert reviews, a user study with domain experts and focus group meetings with analysts from industry.Additionally, we reflect on the acquired insights of the design study and discuss the advantages and disadvantages of the applied visualization methods.An interesting finding is that the arc-diagram was one of the preferred visualization techniques during the design phase but did not provide the expected benefits for finding patterns.In contrast, the seemingly simple looking connection line was described as supportive in finding the link between the rule overview table and the rule detail table which are playing a central role for the analysis in KAMAS.
adjuvant effect when measuring anti-TTCF specific antibodies.This also suggests the oligomerisation domain of hC4BP does not provide CD4+ T cell help for any anti-TTCF B cells in the mouse.Furthermore, no advantage of oligo-antigen valency over monomeric antigen was realized, although this was likely compounded by the lack of interaction between hC3d and m hCR2.The full testing of these oligomeric hC3d antigen compounds in the recently generated hCR2 BAC mice crossed onto the C3−/− background would be possible given time.However, these experiments will be hampered by the loss of endogenous C3 to drive and support the immune response and therefore give a true idea of how that will translate to the clinical setting that these pre-clinical studies are supposed to emulate.We concede that only construction of mC3d fused to mC4BP constructs will allow the adjuvant capabilities of similar oligomeric C3d compounds to be fully tested in mouse models.This remains an important line of investigation as there are still many unknowns regarding the mode of action of C3d as an adjuvant.The linear trimer construct of C3d has been shown to be an effective stimulator of B cell responses but monomeric C3d-Ag is considered immunosuppressive.Thus, would an arrayed C3d structure function as a group of monomers in this context or act similar to the linear trimer?,Potentially, C3d-containing multimers may also result in reduced immune response due to excessive CR2 cross-linking that has been reported with the use of high doses of C3dg-streptavidin complexes.In summary, these experiments clearly demonstrate that hC3d does not bind to mCR2 as previously reported, while mC3d does bind hCR2 with an increased affinity to that measured for hC3d.Importantly we also demonstrate for the first time that even following the replacement of mCR2 with the hCR2 homologue in transgenic animals no hC3/hCR2 binding is seen in the present of mC3.These 2 new findings severely hamper the testing of novel strategies to improve the efficiencies of hC3d-containing recombinant adjuvants in mice.Importantly these studies thus highlight the importance of extensive testing of candidate C3-based vaccines and that, even when attempts are made to maximize species conservation in animal models, considerable care needs to be taken when analyzing cross species receptor/ligand interactions.There are no conflicts of interest.
The use of C3d, the final degradation product of complement protein C3, as a “natural” adjuvant has been widely examined since the initial documentation of its immunogenicity-enhancing properties as a consequence of binding to complement receptor 2.Subsequently it was demonstrated that these effects are most evident when oligomeric, rather than when monomeric forms of C3d, are linked to various test protein antigens.In this study, we examined the feasibility of enhancing the adjuvant properties of human C3d further by utilizing C4b-binding protein (C4BP) to provide an oligomeric arrayed scaffold fused to the model antigen, tetanus toxin C fragment (TTCF).High molecular weight, C3d-containing oligomeric vaccines were successfully expressed, purified from mammalian cells and used to immunize groups of mice.Surprisingly, anti-TTCF antibody responses measured in these mice were poor.Subsequently we established by in vitro and in vivo analysis that, in the presence of mouse C3, human C3d does not interact with either mouse or even human complement receptor 2.These data confirm the requirement to develop murine versions of C3d based adjuvant compounds to test in mice or that mice would need to be developed that express both human C3 and human CR2 to allow the testing of human C3d based adjuvants in mouse in any capacity.
that are more relevant to vegetation dynamics, land-atmosphere interaction, and energy utilization processes.Regardless of the lumped nature of calibration, we found that the M3 noticeably increased the accuracy of SWAT’s ET simulation compared to M1 and M2.This configuration also resulted in improved streamflow simulations during model verification, indicating a transition of the model state towards increased predictability and reduced equifinality.Despite the seemingly improved water and energy balance, streamflow calibration performance in M3 was distinctively lower than M1 and M2 configurations.We argued that this sub-optimality problem is not specific to a model, remotely sensed dataset or optimization algorithm.Instead, it is an implementation problem due to the aggregation of too many objectives, which imparts an “averaging effect” on the optimized solution.This may end up producing no net improvement of the model in large spatial scales.Our results confirmed the superiority of M4 over M3 for overall model accuracy, although both configurations used the same total volume of remotely sensed data.Further, a one-to-one comparison between the two extreme configurations indicated that an M4-type setup may exhibit reduced uncertainty in the simulation of landscape’s water yield, with distinctly different temporal and spatial variability of streamflow and more accurate representations of vegetation growth with respect to MODIS LAI data.We therefore conclude that modifying certain process descriptions of a hydrologic model may not sufficiently improve overall partitioning of water and energy in a watershed unless the model is calibrated with remotely sensed ET data, and that including biophysical parameters.The substantial disparity between the two extreme configurations bears immense practical implications.The choice of model configurations can impact how scientists and managers quantify and interpret the hydrological and biogeochemical influence of surface depressions under different climate, land use, and potential depression loss/restoration scenarios.Using M1 for modeling landscape hydrological and nutrient hotspots/hot moments may lead to inefficient watershed management decisions.Therefore, we suggest that the spatially distributed calibration approach using remotely sensed ET data and including important biophysical parameters will lead to a more accurately calibrated model that does improve the representation of a watershed’s water and energy balance.While our methodological insights benchmarked efficient utilization of remotely sensed big data, the findings further provided scientific evidence on the evolution of a hydrologic model in the continuum of equifinality.
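The "averaging effect" argued above can be illustrated with a toy comparison between a lumped objective (one aggregated score across all sub-basins) and a spatially explicit one (a score per sub-basin). The NSE-style metric and all numbers below are illustrative only.

```python
# Toy illustration of the "averaging effect" of lumped multi-objective
# calibration: one aggregated score can hide a poorly simulated sub-basin.
# ET values are made up; the metric is a simple Nash-Sutcliffe efficiency.
import numpy as np

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs_et = {  # monthly MODIS-like ET per sub-basin (mm), placeholders
    "sub1": np.array([20., 35., 60., 80., 70., 40.]),
    "sub2": np.array([15., 30., 55., 75., 65., 35.]),
    "sub3": np.array([10., 25., 50., 70., 60., 30.]),
}
sim_et = {
    "sub1": np.array([21., 34., 58., 82., 69., 41.]),   # good fit
    "sub2": np.array([16., 31., 54., 76., 66., 34.]),   # good fit
    "sub3": np.array([40., 40., 40., 40., 40., 40.]),   # poor fit
}

per_subbasin = {k: nse(obs_et[k], sim_et[k]) for k in obs_et}
lumped = np.mean(list(per_subbasin.values()))

print("spatially explicit:", {k: round(v, 2) for k, v in per_subbasin.items()})
print("lumped average NSE:", round(lumped, 2))   # masks the failing sub-basin
```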
A hydrologic model, calibrated using only streamflow data, can produce acceptable streamflow simulation at the watershed outlet yet unrealistic representations of water balance across the landscape.Recent studies have demonstrated the potential of multi-objective calibration using remotely sensed evapotranspiration (ET) and gaged streamflow data to spatially improve the water balance.However, methodological clarity on how to “best” integrate ET data and model parameters in multi-objective model calibration to improve simulations is lacking.To address these limitations, we assessed how a spatially explicit, distributed calibration approach that uses (1) remotely sensed ET data from the Moderate Resolution Imaging Spectroradiometer (MODIS) and (2) frequently overlooked biophysical parameters can improve the overall predictability of two key components of the water balance: streamflow and ET at different locations throughout the watershed.We used the Soil and Water Assessment Tool (SWAT), previously modified to represent hydrologic transport and filling-spilling of landscape depressions, in a large watershed of the Prairie Pothole Region, United States.We employed a novel stepwise series of calibration experiments to isolate the effects (on streamflow and simulated ET) of integrating biophysical parameters and spatially explicit remotely sensed ET data into model calibration.Results suggest that the inclusion of biophysical parameters involving vegetation dynamics and energy utilization mechanisms tend to increase model accuracy.Furthermore, we found that using a lumped, versus a spatially explicit, approach for integrating ET into model calibration produces a sub-optimal model state with no potential improvement in model performance across large spatial scales.However, when we utilized the same MODIS ET datasets but calibrated each sub-basin in the spatially explicit approach, water yield prediction uncertainty decreased, including a distinct improvement in the temporal and spatial accuracy of simulated ET and streamflow.This further resulted in a more realistic simulation of vegetation growth when compared to MODIS Leaf-Area Index data.These findings afford critical insights into the efficient integration of remotely sensed “big data” into hydrologic modeling and associated watershed management decisions.Our approach can be generalized and potentially replicated using other hydrologic models and remotely sensed data resources – and in different geophysical settings of the globe.
way. According to QNP staff and community leaders, the mobility of the populations inside the QNP, associated with the discovery of new lands, created opportunities for deforestation and for the slaughter of animal species through the hunting of medium-sized animals for commerce and food. According to the results presented, the classification was statistically significant. According to Landis & Koch, agreement can be categorized by Kappa value: Kappa ≤ 20%; 20% < Kappa ≤ 40%; 40% < Kappa ≤ 60%; 60% < Kappa ≤ 80%; and Kappa > 80%. In this study, the Kappa coefficient was used as an accuracy measure of the LULC map of the Quirimbas National Park. Lunetta and Lyon note that Kappa allows one to verify and test whether a given land use and land cover map generated from remote sensing data and analyses is significant or not. Our results (1979: Kappa = 71.84%, Overall Accuracy = 86.55%; 1989: Kappa = 83.49%, Overall Accuracy = 93.01%; 1999: Kappa = 85.03%, Overall Accuracy = 90.07%; 2009: Kappa = 79.57%, Overall Accuracy = 86.42%; 2017: Kappa = 80.24%, Overall Accuracy = 86.95%) reveal that the classification accuracy lies between very good and excellent. It is our understanding that a better classification could be achieved. However, a set of additional procedures would need to be introduced, such as increasing the number of samples collected for classification and considering image quality and resolution, image size and shape, and additional information about the sensors. In this research, three land cover change categories were analysed over the last 38 years. In 1979, only one category was detected. We understand that the non-detection of the other two categories was associated with the fact that vegetated lands dominate and are distributed uniformly throughout the Quirimbas National Park, making it impossible to identify and collect sufficient ground samples for classification; image quality and spatial resolution were also decisive factors, since they did not allow us to distinguish the different object types on the image accurately. Image quality and resolution are discussed by other authors, and findings clearly show that the differences between high-resolution and low-resolution images are remarkable and significant, with an impact on the accuracy of land use classification. There was no record with clear evidence of the emergence of human settlements and non-vegetated lands before 1979. The research showed that the community's notable mobility and the appearance of human settlements and non-vegetated lands in the Quirimbas National Park may have begun after 1979. It is likely that this mobility, until 1995, was motivated by a combination of several factors: historical and cultural factors associated with the colonial and civil wars, as well as traditional nomadism and socioeconomic needs. The figures illustrate combinations of different forms and types of land use and landscape change. The rapid growth of human settlements and non-vegetated land since 1989 gives evidence of how rural population growth and local needs are closely related to the exploitation of land and natural resources. However, it is not conclusive that the exponential population growth in the Quirimbas National Park, associated with the community's direct dependence on local natural resources, is the only promoter of the changes that occurred from 1979 to 2017. Similar studies conducted in Africa confirm that resource exploitation, agriculture, population growth and built-up expansion are considered the main drivers of landscape change. However, we
also attribute the observed changes to governmental fragility in the enforcement of biodiversity conservation and natural resource exploitation laws, weak control and monitoring, and the non-implementation of territorial planning and human settlement programs. Vegetation was reduced by about 41.67% in 38 years. Meanwhile, the population continues to grow year after year. This trend of human population growth and disorderly mobility inside the Quirimbas National Park tends to put pressure on this ecosystem and push the QNP to unsustainable levels, with irreversible consequences. Reports elucidating similar cases in Africa are still incipient. On this issue, it was not possible to identify any report from conservation and protected areas that pointed to a drastic vegetation reduction of more than 40% over their history. Similar cases have been widely reported by research conducted outside protected areas and urban centers. It is true that studies of land use and land cover carried out in conservation areas identify problems of deforestation, illegal land occupation and community encroachment, especially for agricultural practice. However, when the impacts are quantified, those studies conclude that forests are being reduced to a worrying state, but still at sustainable levels and with lower deforestation rates. We understand that we are likely facing an emerging case on the African continent, which will require a new paradigm of natural resource management, conflict resolution, community involvement and protected area management. According to our analysis and understanding, the QNP has lost its status as a Category II “National Park” according to the overall classification of protected areas proposed by IUCN. Dudley notes that National Parks are generally large-scale areas whose core goal is to protect natural biodiversity, together with the underlying ecological structure and environmental processes, and to promote environmental education, science, tourism, cultural development and process regulation. These areas are not intended to implement management systems that promote and guarantee the sustainable use and exploitation of natural resources by the communities. The presence of human populations and settlements is discouraged. Buildings, public infrastructure construction and people's movements with visible impact are not allowed, except for tourism and visitation purposes. The presence of large numbers of villages and the accentuated fragmentation of the territory in the Quirimbas National Park do not guarantee that the currently implemented model or strategy can succeed in continuing to protect biodiversity, landscape, and habitat. In this section, we intend to discuss
The results show that the overall map classification obtained was between very good and excellent: 1979 - Kappa 71.84%, Overall Accuracy 86.55%; 1989 - Kappa 83.49%, Overall Accuracy 93.01%; 1999 - Kappa 85.03%, Overall Accuracy 90.07%; 2009 - Kappa 79.57%, Overall Accuracy 86.42%; 2017 - Kappa 80.24%, Overall Accuracy 86.95%.
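As a brief, self-contained illustration of how the overall accuracy and Kappa figures above are obtained, the following Python sketch computes both from a classification error (confusion) matrix; the matrix used here is hypothetical, not one of the QNP error matrices.

```python
# Minimal sketch of deriving overall accuracy and the Kappa coefficient from a
# classification error (confusion) matrix. The matrix below is purely
# illustrative; it is not one of the QNP error matrices.
import numpy as np

# Rows = reference (ground) classes, columns = classified (map) classes.
confusion = np.array([
    [120,   6,   4],   # vegetated lands
    [  8,  45,   3],   # non-vegetated lands
    [  2,   5,  37],   # human settlements
])

n = confusion.sum()
observed_agreement = np.trace(confusion) / n                     # overall accuracy
expected_agreement = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)

print(f"Overall accuracy: {100 * observed_agreement:.2f}%")
print(f"Kappa coefficient: {100 * kappa:.2f}%")
```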
II should be the responsibility of the government, while level III should be fully managed by local communities. Government action will serve to regulate processes and oversee the implementation of programs and policies. According to the objectives and methodology proposed, this research provided a historical analysis of land use and land cover of the Quirimbas National Park that can serve as a basis for future LULC analyses, as well as guide the preparation of concrete local plans for the use and exploitation of resources, area sustainability and the protection of forests and animals. The Quirimbas National Park has experienced unsustainable levels of threats and destruction of the territory, which hinder the management of natural resources and biodiversity conservation. On average, the results indicate that the Quirimbas National Park has lost around 10.42% of its vegetated lands every 10 years, corresponding to 74,431.1 ha, and the vegetation has not recovered over the years. Over 38 years, the QNP has lost about 301,761.7 ha of vegetated lands. The main land change promoters are associated with the expansion of uncontrolled and disordered settlements, intensive agriculture, the forest resources trade, uncontrolled fires, private and public infrastructure, and the trade and artisanal exploitation of mining resources. The impacts range from fragmentation of the territory and isolation of habitats to reduction of native forest, death of animal species and conflicts. The recovery of the Quirimbas National Park and the reduction of deforestation levels will require a new paradigm for the management of natural resources, based on the re-dimensioning and re-qualification of the Park, the creation of a compensation area or natural corridors, as well as the inclusive and community-centered management of natural resources.
National parks are established with the aim of guaranteeing and protecting natural biodiversity and ecosystems through a multi-level and integrated approach. The Quirimbas National Park (QNP) has been suffering from severe and constant threats originating from different sources and from changes in land use and land cover. These changes, in the context of global climate change, pose permanent challenges to the managers of this conservation area of Mozambique. The research aimed to analyze the historical and recent LULC over the last 38 years, to provide consistent and scientific information for decision making on biodiversity conservation approaches, and to identify the main changes and their impacts on the ecosystem in order to implement/develop appropriate mitigation strategies. A combined and integrated methodological approach was developed from satellite imagery analyses of Landsat 2 and 5 MSS (Multispectral Scanner), Landsat 5 TM (Thematic Mapper) and Landsat 8 OLI (Operational Land Imager), and fieldwork (field observation and meetings with communities and QNP staff). Over 38 years, the QNP lost about 301,761.7 ha, corresponding to 41.67% of the total QNP land cover. The main causes are associated with intensive agriculture, human settlements, population growth, illegal exploitation of forest resources and mining inside the Quirimbas National Park. The impact extends from territory reduction and fragmentation to vegetation and animal biodiversity loss, human-wildlife conflicts, habitat connectivity loss, species isolation and scaring, and scarcity of basic resources for the community's livelihoods.
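As a rough cross-check of the reported loss rates, the short sketch below prorates the quoted 38-year totals both linearly and per observation interval (the four intervals between the five Landsat epochs). The per-interval average reproduces the ~10.42% per-decade figure quoted in the conclusions; any small differences in the hectare figure may reflect rounding or interval-specific areas, and this is only one plausible way the per-decade average may have been derived.

```python
# Illustrative cross-check (not the authors' exact calculation) of the reported
# vegetated-land loss rates in the QNP, using only figures quoted in the text.
total_loss_ha = 301_761.7        # vegetated land lost, 1979-2017
total_loss_pct = 41.67           # as a share of the QNP land cover
years = 38                       # 1979-2017
n_intervals = 4                  # intervals between the epochs 1979, 1989, 1999, 2009, 2017

# Simple linear proration per decade.
print(f"Linear rate: {total_loss_pct / years * 10:.2f}% per decade "
      f"({total_loss_ha / years * 10:,.0f} ha per decade)")

# Average loss per observation interval, treating each interval as roughly a decade.
print(f"Per-interval average: {total_loss_pct / n_intervals:.2f}% "
      f"({total_loss_ha / n_intervals:,.0f} ha)")
```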
earthquake, with maximum power between 4 and 5 mHz, and the second around 0.90 h with maximum power at a slightly lower frequency of about 3 mHz.These components of the signal are present in the UA plus SMF source, where the simulation produces the two distinct high-frequency peaks, with localization in both time and frequency agreeing well with the recorded data, although they are each shifted to slightly lower frequency.They are absent, however, in the simulation based on the UA earthquake source alone.Similar results are also seen at the Central and South Iwate buoys, where a single dominant high-frequency peak persists for about 0.2 h.The main elements of the recorded GPS buoy data, therefore, are simulated by the UA plus SMF source, but not by the UA earthquake source alone, for which the corresponding high-frequency signal content is absent.In the data recorded at DART buoy #21418 there is a prominent ridge of high-frequency energy, which gradually increases from 3 to 5 mHz, between 0.55 to 0.65 h after the start of the earthquake.This high-frequency wave signature indicates the presence of a highly dispersive wave train in the signal that cannot be explained by the UA earthquake seismic source alone, where the energy is concentrated at much lower frequencies.Only the dual UA plus SMF source captures the structure of this high-frequency dispersive tail, although it is shifted slightly later in time.Overall, it is clear from this analysis of the buoy data that the considerable amount of energy at frequencies above 3 mHz observed in the recorded data is inconsistent with the UA earthquake source alone.In contrast, the high-frequency signal is simulated relatively well with the addition of the proposed SMF source.For the earthquake UA source the simulated surface elevations uniformly lack high-frequency content above 3 mHz.The absence of this high-frequency content is notable in simulations for GPS buoys facing the Sanriku coast and DART buoy #21418.In contrast, simulated GPS/DART buoy wave elevations for the dual source result in time-frequency signatures which match the observed high-frequency content above 3 mHz and which localize this content correctly in space and time.We compare both our synthetic waveforms for the dual source and those computed by four other studies to observed data at the three Iwate GPS buoys and DART buoy #21418.The four other studies are Iinuma et al., whose model is based on the seafloor displacement field calculated from the estimated seismic interplate slip distribution; Gusman et al. whose model is based on a joint inversion using a combination of tsunami waveforms, GPS data and seafloor crustal deformation data; Romano et al. who use a joint inversion of earthquake-only rupture from static on- and offshore geodetic data in combination with on- and offshore tsunami waveform data; and Satake et al., whose model was obtained by inversion of onshore GPS data and tsunami waveforms at offshore buoys.As we already illustrated in panels d–f and j of Fig. 14, the synthetics for our dual source model fit both the initial pulse and the later high-frequency arrivals at these four stations.By contrast, from Fig. 19 it can be seen the synthetics of Iinuma et al. 
do not fit the first pulse at any of the three GPS buoys; they fit the first pulse at the DART buoy to some extent, but not the later high-frequency arrivals.The synthetics for the other three studies have large components of very high-frequency noise at the three GPS stations, but these are not present in the observed data.The red/brown trace and purple trace fit the first two cycles of the data well in panel d of Fig. 19, but the green trace does not.
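For readers unfamiliar with the time-frequency analysis referred to above, the following Python sketch shows the general kind of computation: a spectrogram of a surface-elevation record, with the 3–5 mHz band of interest summed over time. The record is synthetic (a low-frequency pulse plus a delayed 4 mHz wave train), the 15 s sampling interval is an assumption, and this is not the authors' actual wavelet or spectral method applied to the GPS/DART data.

```python
# Minimal sketch of a time-frequency analysis of a surface-elevation record,
# highlighting the 3-5 mHz band where dispersive, SMF-related energy appears.
# The record is synthetic and the sampling interval is assumed.
import numpy as np
from scipy.signal import spectrogram

dt = 15.0                                    # assumed sampling interval [s]
t = np.arange(0, 2 * 3600, dt)               # two hours of record
eta = (0.5 * np.exp(-((t - 1800) / 600) ** 2) * np.sin(2 * np.pi * 0.001 * t) +
       0.2 * np.exp(-((t - 3600) / 400) ** 2) * np.sin(2 * np.pi * 0.004 * t))

f, tau, Sxx = spectrogram(eta, fs=1.0 / dt, nperseg=128, noverlap=96)

band = (f >= 0.003) & (f <= 0.005)           # 3-5 mHz band of interest
band_energy = Sxx[band].sum(axis=0)
print("time of peak 3-5 mHz energy: %.2f h" % (tau[band_energy.argmax()] / 3600))
```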
Many studies have modeled the Tohoku tsunami of March 11, 2011 as being due entirely to slip on an earthquake fault, but the following discrepancies suggest that further research is warranted. (1) Published models of tsunami propagation and coastal impact underpredict the observed runup heights of up to 40 m measured along the coast of the Sanriku district in the northeast part of Honshu Island. (2) Published models cannot reproduce the timing and high-frequency content of tsunami waves recorded at three nearshore buoys off Sanriku, nor the timing and dispersion properties of the waveforms at offshore DART buoy #21418. (3) The rupture centroids obtained by tsunami inversions are biased about 60 km NNE of that obtained by the Global CMT Project. Based on an analysis of seismic and geodetic data, together with recorded tsunami waveforms, we propose that, while the primary source of the tsunami was the vertical displacement of the seafloor due to the earthquake, an additional tsunami source is also required. We infer the location of the proposed additional source based on an analysis of the travel times of higher-frequency tsunami waves observed at nearshore buoys. We further propose that the most likely additional tsunami source was a submarine mass failure (SMF, i.e., a submarine landslide). A comparison of pre- and post-tsunami bathymetric surveys reveals tens of meters of vertical seafloor movement at the proposed SMF location, and a slope stability analysis confirms that the horizontal acceleration from the earthquake was sufficient to trigger an SMF. Forward modeling of the tsunami generated by a combination of the earthquake and the SMF reproduces the recorded on-, near- and offshore tsunami observations well, particularly the high-frequency component of the tsunami waves off Sanriku, which were not well simulated by previous models. The conclusion that a significant part of the 2011 Tohoku tsunami was generated by an SMF source has important implications for estimates of tsunami hazard in the Tohoku region as well as in other tectonically similar regions.
Samples were collected within the Russian Arctic and sub-Arctic territories and the locations are indicated in Fig. 1.Fish species were selected with the guidance of a food-intake questionnaire administered during May 2017 to July 2018.Details about the average quantities of fish species consumed based on the questionnaire results are summarized in Table 1, while the relative contributions of various fish species to the total consumption are provided in a pie-chart format in Fig. 2.The raw data used to generate Table 1 and Fig. 2 are provided as Supplementary Material, as well as an English template of the questionnaire in Russian used.The raw elemental data measured in fish and examined in our recent article are tabulated in Table 2.As these are to be updated later and due to the extent of the data, a Mendeley Data repository was created .The data set will remain publicly available to local populations and authorities/agencies and is to be complemented by future field and analytical activities.It includes the following information: the age and weight of the fish, sampling dates, geographic coordinates and concentrations of Hg, As, Se, Cd, Pb, Co, Ni, Cu and Zn measured in muscle tissues.The moisture content of each sample was determined during the freeze-drying step and this permitted the expression of the elemental concentrations in μg/kg or mg/kg wet-weight.Table 2 also features data for the fish species that were not included in the companion paper due to the small number of fish samples.Three villages with a combined total population of 3059 and of whom ∼65% identified themselves as Nenets constituted the study sites.These villages are located on the shore of the Barents Sea, and the latter constitutes their primary food source.Based on our questionnaire information, the average total fish consumption by the study population was approximately 57 kg/year.Generally speaking, fish are caught predominantly at near-shore locations and by the indigenous people themselves.Fish samples collected for analysis were bought from local fishermen on the same day they were caught.Sample collection spanned the period May 2017 to July 2018.The sampling sites for the fish species analysed are depicted in Fig. 1 and are also specified in Table 2."The names and geographic locations of the sampling sites and subsites are indicated in the Specifications Table above; see the project's data repository for additional information .The coordinates for the sampling collection sites were noted and provided by the fisherman.The most common fish species consumed were identified by the responses to the mentioned questionnaire.The participants were drawn from the villages of Krasnoe, Indiga and Nelmin-Nos and the mentioned questionnaire was administered by the researcher to obtain pertinent information about what type of fish species and quantities they consumed every month.The data on the amount and type of fish commonly eaten by the participants are presented in a pie-chart in Fig. 
2. To calculate the annual average fish consumption for each participant interviewed, the total monthly intake by the entire study cohort was first calculated. The latter was subsequently divided by the number of participants and then multiplied by 12. For the analyses, 0.25 g of homogenized/freeze-dried fish muscle sample was treated with 5 ml concentrated nitric acid in 50 ml PP tubes, and subsequently diluted to 25 ml and analysed by ICP-MS. The limits of quantification were estimated for Hg, As, Se, Cd, Pb, Co, Ni and Cu in μg/kg, and for Zn in mg/kg, of wet weight. Full details of the sample preparation procedures, fish age determination and ICP-MS analyses have been provided in the companion paper.
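Two of the calculations described above are simple enough to show directly. The Python sketch below implements the per-capita annual consumption formula (cohort monthly total divided by the number of participants, times 12) and the standard dry-to-wet weight conversion using the moisture content determined during freeze-drying; all values and variable names are hypothetical, not figures from the questionnaire or the dataset.

```python
# Minimal sketch of two calculations described above. Numbers and variable
# names are illustrative, not values from the questionnaire or the dataset.

# (1) Per-capita annual fish consumption: total monthly intake of the whole
#     cohort, divided by the number of participants, multiplied by 12.
monthly_intake_kg = [4.0, 6.5, 3.2, 5.8]          # hypothetical monthly totals per participant
n_participants = len(monthly_intake_kg)
annual_per_capita_kg = sum(monthly_intake_kg) / n_participants * 12
print(f"Annual average fish consumption: {annual_per_capita_kg:.1f} kg/year")

# (2) Converting a dry-weight concentration to wet weight using the moisture
#     content determined during freeze-drying: C_wet = C_dry * (1 - moisture).
c_dry_ug_per_kg = 250.0                           # hypothetical concentration, dry weight
moisture_fraction = 0.78                          # hypothetical moisture content
c_wet_ug_per_kg = c_dry_ug_per_kg * (1 - moisture_fraction)
print(f"Wet-weight concentration: {c_wet_ug_per_kg:.1f} ug/kg")
```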
The raw concentration data for the research article entitled “Essential and non-essential trace elements in fish consumed by indigenous peoples of the European Russian Arctic” (Sobolev et al., 2019) [1] are herein presented.Fifteen fish species were collected in the Nenets Autonomous and Arkhangelsk Regions of the Russian Federation and were analysed for 9 elements (As, Cd, Co, Cu, Hg, Ni, Pb, Se and Zn).The sampling sites were located in the European parts of the Russian Arctic and sub-Arctic territories.Within these territories, Nenets indigenous peoples commonly catch and consume local fish.Based on questionnaire data, local fish sources constituted ∼ 90% of the total fish consumed by endemic individuals living in these regions.The data summarized in this publication fill a gap in knowledge.
Diabetes is an important cause of mortality, morbidity, and health-system costs in the world.1,2,Therefore, there is an urgent need to implement population-based interventions that prevent diabetes, enhance its early detection, and use lifestyle and pharmacological interventions to prevent or delay its progression to complications.To motivate such actions, one of the global targets set after the 2011 UN High-Level Meeting on Non-Communicable Diseases is to halt, by 2025, the rise in the age-standardised adult prevalence of diabetes at its 2010 levels.3,Valid and consistent estimates of diabetes prevalence over time are needed to evaluate the effect of interventions, compare trends in different countries, and measure progress towards the agreed target.A previous study estimated trends in mean fasting plasma glucose from 1980 to 2008 and reported diabetes prevalence, but only as a secondary outcome and estimated based on mean fasting plasma glucose.4,The International Diabetes Federation periodically reports diabetes prevalence,5,6 but does not analyse trends; uses some sources that are based solely on self-reported diabetes; and does not fully account for differences in diabetes definitions in different data sources,7 even though diabetes prevalence varies depending on whether it is defined based on fasting plasma glucose, 2 h plasma glucose in an oral glucose tolerance test, or haemoglobin A1c.8,Furthermore, it is not known how trends in prevalence, together with population growth and ageing, have affected the number of adults with diabetes.Our aim was to estimate worldwide trends in the prevalence and number of adults with diabetes.We also estimated the probability of achieving the global diabetes target.Evidence before this study,We searched MEDLINE for articles published between Jan 1, 1950, and Dec 11, 2013, with the search terms AND.Articles were screened according to the inclusion and exclusion criteria described in the appendix.A few studies have reported diabetes trends in one or a few countries.A previous study reported diabetes prevalence trends to 2008 as a secondary outcome, which was estimated from mean fasting plasma glucose.This study was done before the global target on diabetes was agreed, hence there are no recent data.The International Diabetes Federation periodically reports diabetes prevalence but does not analyse trends, uses some sources that are only based on self-reported diabetes, and does not fully account for differences in diabetes definitions in different data sources.Added value of this study,This study provides the lengthiest and most complete estimates of trends in adult diabetes prevalence worldwide.We achieved this level of detail by reanalysing and pooling hundreds of population-based sources with actual measurements of at least one diabetes biomarker and systematically converting all data sources to a common definition of diabetes.We also systematically projected recent trends into the future, and assessed the probability of achieving the global diabetes target.Implications of all the available evidence,Since 1980, age-standardised diabetes prevalence in adults increased or at best remained unchanged in every country.The burden of diabetes, in terms of both prevalence and number of adults affected, has increased faster in low-income and middle-income countries than in high-income countries.If post-2000 trends continue, the probability of meeting the global diabetes target is lower than 1% for men and is 1% for women worldwide.We estimated trends in 
diabetes prevalence from 1980 to 2014, in 200 countries and territories organised into 21 regions, mostly on the basis of geography and national income. The exception was a region consisting of high-income English-speaking countries because cardiometabolic risk factors, especially body-mass index, an important risk factor for diabetes, have similar trends in these countries, which can be distinct from other countries in their geographical region. As the primary outcome, diabetes was defined as fasting plasma glucose of 7·0 mmol/L or higher, history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs. This definition of diabetes is used in the Global Monitoring Framework for NCDs;3 it also relies more directly on data from population-based health-examination surveys, which, for logistical reasons, are more likely to measure fasting plasma glucose than 2hOGTT. Our analysis covered men and women aged 18 years or older, consistent with the Global Monitoring Framework for NCDs.3 Our study had three stages, each described in detail below and in the appendix. First, we identified, accessed, and reanalysed population-based health-examination surveys that had measured at least one diabetes biomarker. We then converted diabetes prevalence in sources that had defined diabetes through 2hOGTT or HbA1c or used a cutoff other than 7·0 mmol/L for fasting plasma glucose, to a corresponding prevalence based on the primary outcome as defined above. Finally, we applied a statistical model to the pooled data to estimate trends for all countries and years. We included data sources that were representative of a national, subnational, or community population and that had measured at least one of the following diabetes biomarkers: fasting plasma glucose, 2hOGTT, and HbA1c. We did not use data from sources that relied entirely on self-reported history of diagnosis because this approach would miss undiagnosed diabetes, which forms a substantial share of all people with diabetes, especially in communities with little access to health care.9–11 Our methods for identifying and accessing data sources are described in the appendix. History of diabetes diagnosis was established with survey-specific questions, such as “have you ever been told by a doctor or other health professional that you have diabetes?”, or the combination of “do you now have, or have you ever had diabetes?” and “were you told by a doctor that you had diabetes?”. Similarly, the use of diabetes drugs was established with survey-specific questions, such as “are you currently taking medication for diabetes or high blood sugar?”, or the combination of
BACKGROUND: One of the global targets for non-communicable diseases is to halt, by 2025, the rise in the age-standardised adult prevalence of diabetes at its 2010 levels. METHODS: We pooled data from population-based studies that had collected data on diabetes through measurement of its biomarkers. INTERPRETATION: Since 1980, age-standardised diabetes prevalence in adults has increased, or at best remained unchanged, in every country. FUNDING: Wellcome Trust.
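As a concrete illustration of the primary case definition used in the methods above, the following Python sketch flags diabetes in individual survey records when fasting plasma glucose is 7.0 mmol/L or higher, or there is a history of diagnosis, or glucose-lowering medication is used; the column names and values are hypothetical, not those of any particular survey.

```python
# Minimal sketch of applying the primary diabetes definition to survey records:
# FPG >= 7.0 mmol/L, or diagnosed diabetes, or use of insulin/oral
# hypoglycaemic drugs. Column names are hypothetical.
import pandas as pd

survey = pd.DataFrame({
    "fpg_mmol_l": [5.4, 7.3, 6.1, None],   # missing FPG compares as False below
    "diagnosed":  [False, False, True, False],
    "on_meds":    [False, True, False, False],
})

survey["diabetes"] = (
    (survey["fpg_mmol_l"] >= 7.0)   # NaN >= 7.0 evaluates to False
    | survey["diagnosed"]
    | survey["on_meds"]
)
print(survey)
print(f"Crude prevalence: {100 * survey['diabetes'].mean():.1f}%")
```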
372 000 participants aged 18 years or older.The studies covered 146 of the 200 countries and territories for which estimates were made."These 146 countries contained 90% of the world's adult population in 2014.Regionally, the average number of data sources per country ranged from less than one in central Africa to 24 in the high-income Asia Pacific region.21 of the 54 countries without data were in sub-Saharan Africa, 11 in the Caribbean, seven in central Europe, four in central Asia, and the remaining 11 in other regions.Nearly a third of surveys and studies were from years before 2000, and the other two-thirds for 2000 and later.From 1980 to 2014, worldwide age-standardised adult diabetes prevalence increased from 4·3% to 9·0% in men and from 5·0% to 7·9% in women; the posterior probabilities that these were true increases were 0·994 and 0·954, respectively.Over these years, crude adult prevalence increased from 3·6% to 8·8% in men, and from 4·7% to 8·2% in women.Age-standardised diabetes prevalence in women in 2014 was lowest in northwestern and southwestern Europe, which each had a prevalence of less than 5%.The lowest prevalence in adult men was also in northwestern Europe, at 5·8%.Crude adult prevalence in northwestern Europe was 5·9% for women and 7·9% for men in 2014.At the other extreme, age-standardised diabetes prevalence was higher than 20% in adult men and women in Polynesia and Micronesia, and around 15% in Melanesia and in the Middle East and north Africa.Over the 35 years of analysis, there was almost no change in age-standardised diabetes prevalence in women in northwestern and southwestern Europe, and only a small non-significant increase in central and eastern Europe.Adult men in northwestern Europe also had a smaller rise in prevalence than did other regions.By contrast, age-standardised prevalence in Polynesia and Micronesia rose by 15·0 percentage points in adult men and by 14·9 percentage points in adult women.Crude adult prevalence increased more than age-standardised prevalence in regions that had substantial ageing—eg, in high-income regions.In 1980, age-standardised adult diabetes prevalence was lower than 3% in men in 32 countries and in women in 23 countries.In the same year, age-standardised prevalence was higher than 12% in adult men and women in a few islands in Polynesia and Micronesia and women in Kuwait, reaching 25% in men and women in Nauru.By 2014, women in only one country had an age-standardised adult prevalence lower than 3% and women in only nine countries had one lower than 4%, with the lowest prevalence estimated in some countries in northwestern Europe such as Switzerland, Austria, Denmark, Belgium, and the Netherlands.In the same year, age-standardised prevalence in adult men was higher than 4% in every country; the lowest estimated prevalences were in the same northwestern European countries as those for women and in a few countries in east Africa and southeast Asia.At the other extreme, age-standardised adult diabetes prevalence in 2014 was 31% in men and 33% in women in American Samoa, and was also higher than 25% in men and women in some other islands in Polynesia and Micronesia.No country had a statistically significant decrease in diabetes prevalence from 1980 to 2014, although the relative increase over these 35 years was lower than 20% in nine countries for men, mostly in northwestern Europe, and in 39 countries for women.Over the same period, age-standardised adult prevalence of diabetes at least doubled for men in 120 countries and 
for women in 87 countries, with a posterior probability of 0·887 or higher.The largest absolute increases in age-standardised adult prevalence were in Oceania, exceeding 15 percentage points in some countries, followed by the Middle East and north Africa.Worldwide, if post-2000 trends continue, the probability of meeting the global diabetes target for men is lower than 1%; for women it is 1%.Only nine countries, mostly in northwestern Europe, had a 50% or higher probability of meeting the global target for men, as did 29 countries for women.Rather, if post-2000 trends continue, age-standardised prevalence of diabetes in 2025 will be 12·8% in men and 10·4% in women.The number of adults with diabetes will surpass 700 million.The number of adults with diabetes in the world increased from 108 million in 1980, to 422 million in 2014.East Asia and south Asia had the largest rises of absolute numbers, and had the largest number of people with diabetes in 2014: 106 million and 86 million, respectively.39·7% of the rise in the number of people with diabetes was due to population growth and ageing, 28·5% due to the rise in age-specific prevalences, and the remaining 31·8% due to the interaction of the two—ie, an older and larger population with higher age-specific prevalences.Half of adults with diabetes in 2014 lived in five countries: China, India, the USA, Brazil, and Indonesia."These countries also accounted for one half of the world's adult population in 2014.Although the top three countries on this list remained unchanged from 1980 to 2014, the global share of adults with diabetes who live in China and India increased, by contrast with the USA, where the share decreased.The changes in the share of adults with diabetes from India and the USA might be partly because of the changes in their shares.However, the share of the adult population of China remained virtually unchanged, while its share of the adult population with diabetes increased.Low-income and middle-income countries, including Indonesia, Pakistan, Mexico, and Egypt, replaced European countries, including Germany, Ukraine, Italy, and the UK, on the list of
Global age-standardised diabetes prevalence increased from 4.3% (95% credible interval 2.4-7.0) in 1980 to 9.0% (7.2-11.1) in 2014 in men, and from 5.0% (2.9-7.9) to 7.9% (6.4-9.7) in women.The number of adults with diabetes in the world increased from 108 million in 1980 to 422 million in 2014 (28.5% due to the rise in prevalence, 39.7% due to population growth and ageing, and 31.8% due to interaction of these two factors).Age-standardised adult diabetes prevalence in 2014 was lowest in northwestern Europe, and highest in Polynesia and Micronesia, at nearly 25%, followed by Melanesia and the Middle East and north Africa.Between 1980 and 2014 there was little change in age-standardised diabetes prevalence in adult women in continental western Europe, although crude prevalence rose because of ageing of the population.By contrast, age-standardised adult prevalence rose by 15 percentage points in men and women in Polynesia and Micronesia.In 2014, American Samoa had the highest national prevalence of diabetes (>30% in both sexes), with age-standardised adult prevalence also higher than 25% in some other islands in Polynesia and Micronesia.Only nine countries for men and 29 countries for women, mostly in western Europe, have a 50% or higher probability of meeting the global target.
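The attribution of the rise in case numbers to prevalence, demography, and their interaction can be illustrated with a standard counterfactual decomposition. The sketch below uses hypothetical age-group populations and prevalences, and is not necessarily the paper's exact method; it only shows how the three components reported above can be defined so that they sum to the total change.

```python
# Minimal sketch (hypothetical numbers, not necessarily the paper's exact
# method) of decomposing a change in the number of adults with diabetes into
# (a) population growth/ageing, (b) rising age-specific prevalence, and
# (c) their interaction. The three parts sum exactly to the total change.
import numpy as np

pop_1980  = np.array([1.6e9, 0.9e9, 0.4e9])   # hypothetical adult population by age group
pop_2014  = np.array([2.4e9, 1.5e9, 0.9e9])
prev_1980 = np.array([0.01, 0.05, 0.11])      # hypothetical age-specific prevalence
prev_2014 = np.array([0.02, 0.09, 0.18])

total_change = (pop_2014 * prev_2014).sum() - (pop_1980 * prev_1980).sum()

demography  = ((pop_2014 - pop_1980) * prev_1980).sum()           # more/older people, 1980 rates
prevalence  = (pop_1980 * (prev_2014 - prev_1980)).sum()          # higher rates, 1980 population
interaction = ((pop_2014 - pop_1980) * (prev_2014 - prev_1980)).sum()

for name, part in [("demography", demography), ("prevalence", prevalence),
                   ("interaction", interaction)]:
    print(f"{name}: {100 * part / total_change:.1f}% of the increase")
```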
had good predictive accuracy.The share of studies that used a portable device for measuring diabetes biomarkers has increased over time.We do not expect the rise in the use of portable devices to affect the estimated levels and trends because their higher use in population-based research is partly due to increasing similarity between their measurements and those in laboratory-based tests,47,48 facilitated by more advanced technologies and better standardisation.Further, although our primary outcome is consistent with the NCD Global Monitoring Framework, diabetes prevalence based on fasting plasma glucose alone is lower than that based on the combination of fasting plasma glucose and 2hOGTT.8,Age-standardised adult diabetes prevalence would be 10·0% for men and 8·8% for women, worldwide, if we applied the cross-walking regression to convert our estimates to prevalence of diabetes defined as fasting plasma glucose of 7·0 mmol/L or higher, or 2hOGTT of 11·1 mmol/L or higher, or history of diagnosis with diabetes or use of insulin or oral hypoglycaemic drugs.Finally, the survey data did not separate type 1 and type 2 diabetes because distinguishing between these disorders is difficult in adults.49–51,However, most cases of diabetes in adults are type 2,50,52 so the observed rise in diabetes prevalence in adults is quite likely due to increases in type 2 diabetes.Diabetes and its macrovascular and microvascular complications account for more than 2 million deaths every year,1 and are the seventh leading cause of disability worldwide.53,Diabetes is also a risk factor for tuberculosis, another disease with large burden in low-income and middle-income countries.54,Diabetes and its complications impose substantial economic costs on patients, their families, health systems, and national economies because of direct costs of treatment and loss of work and wages.2,On the basis of estimates for the number of people with diabetes in 2014 in this study, and cost estimates from a systematic review,2 the direct annual cost of diabetes in the world is Intl$825 billion, with China, the USA, India, and Japan having the largest costs.Nearly 60% of the global costs are borne by low-income and middle-income countries, where substantial parts of treatment costs are paid out-of-pocket,2 which affects treatment utilisation and adherence and leads to financial hardship for patients and their families.Glucose reduction with lifestyle modification and drugs in people with diabetes, especially if started early, can delay progression to microvascular complications.55–57,Although evidence is mixed from trials on the macrovascular benefits of intensive glucose lowering,55,58–60 long-term glycaemic control and lowering blood pressure and serum cholesterol also reduce the risk of adverse cardiovascular outcomes.61,62,However, the effectiveness of these interventions at the population level has been slight, both because many diabetes cases remain undiagnosed9–11 and because adherence to treatment is typically lower in general populations than in those enrolled in clinical trials.63–65,Efforts to reduce the global health and economic burden of diabetes should emphasise prevention of diabetes or delaying its onset, through enhancing healthy behaviours and diets at the population level, and early detection and management of high-risk individuals.There has been little success in preventing obesity,16 the most important risk factor for diabetes, at the population level although the global target on obesity could engender new 
efforts and policy innovations.As these policies are implemented, identifying people at high risk of diabetes—especially those with impaired glucose tolerance—through the primary care system, using advice and support to induce and maintain lifestyle change, possibly together with drugs such as metformin, might be the only short-term approach for global diabetes prevention.44,64,66,67,Such programmes have been implemented in a few high-income and middle-income countries,45,68,69 but their success elsewhere requires a financially accessible primary care system that prioritises diabetes prevention and management and is staffed and resourced to support lifestyle change and improve access to and adherence to medication.10,43,45,64,68,70
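The "cross-walking regression" mentioned above, which converts prevalence under the fasting-glucose-only definition to prevalence under the broader FPG-or-2hOGTT definition, can be illustrated in its simplest form as fitting a relationship in sources that report both definitions and then applying it to estimates available only under the narrower one. The sketch below uses entirely synthetic data and a plain linear fit; it does not reproduce the study's actual cross-walk model.

```python
# Minimal illustration (synthetic data; not the study's actual cross-walk model)
# of converting prevalence under a narrow definition to a broader one via a
# regression fitted on sources that report both.
import numpy as np

rng = np.random.default_rng(1)
prev_fpg_only = rng.uniform(0.03, 0.20, size=40)                       # synthetic source-level prevalences
prev_broad = prev_fpg_only * 1.15 + 0.005 + rng.normal(0, 0.005, 40)   # synthetic broader-definition values

slope, intercept = np.polyfit(prev_fpg_only, prev_broad, deg=1)        # simple linear cross-walk

estimate_fpg_only = 0.09                                               # e.g. 9.0% under the FPG-only definition
converted = slope * estimate_fpg_only + intercept
print(f"Converted prevalence under the broader definition: {100 * converted:.1f}%")
```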
If post-2000 trends continue, the probability of meeting the global target of halting the rise in the prevalence of diabetes by 2025 at the 2010 level worldwide is lower than 1% for men and is 1% for women.Together with population growth and ageing, this rise has led to a near quadrupling of the number of adults with diabetes worldwide.The burden of diabetes, both in terms of prevalence and number of adults affected, has increased faster in low-income and middle-income countries than in high-income countries.We used a Bayesian hierarchical model to estimate trends in diabetes prevalence-defined as fasting plasma glucose of 7.0 mmol/L or higher, or history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs-in 200 countries and territories in 21 regions, by sex and from 1980 to 2014.We aimed to estimate worldwide trends in diabetes, how likely it is for countries to achieve the global target, and how changes in prevalence, together with population growth and ageing, are affecting the number of adults with diabetes.
carried out on the 3 clusters showing increased activation for the contrast moving versus static stimuli in the within-group comparison in both groups. While 2 of the 3 ROIs from the within-group comparison showed an increased response to moving stimuli, neither the clusters located in the right middle occipital gyrus nor the cluster located in the middle temporal gyrus showed a main effect of group or an interaction of group and condition. Similar to previous research, this ROI could only be identified reliably in the right hemisphere of 11/19 younger and 11/18 older participants. The statistical analysis showed no main effect of age group, no main effect of condition, and no interaction of these 2 factors. This ROI could be identified in the right hemisphere of 14/17 younger and 12/18 older participants and in the left hemisphere of 13/17 younger and 10/18 older participants. In order to analyze a larger sample, analyses were restricted to the right hemisphere. The statistical analysis showed no main effect of age group, no main effect of condition, and no interaction of these 2 factors. This ROI could be identified in the right hemisphere of 13/17 younger and 11/18 older participants and in the left hemisphere of 14/17 younger and 12/18 older participants. In order to analyze a larger sample, analyses were restricted to the left hemisphere. The statistical analysis showed a main effect of condition (F = 6.78, p = 0.005), with significantly higher activation during the normal compared to the random condition (t = 2.90, p = 0.008) as well as during the scrambled compared to the random condition (t = 3.65, p = 0.001). There was no main effect of age group, and no interaction of these 2 factors. Correlating d' with peak ROI activations during the low-level and the biological motion tasks yielded no significant results. This is to our knowledge the first fMRI study investigating the neural correlates of motion perception in healthy human aging. The effect of aging on low-level motion processing and biological motion processing was investigated in a group of younger and older adults with a mean age difference of 45.5 years. We found significant age-related differences in the processing of simple radial motion. Although peak activation was similar for both age groups, the second-level between-group analysis showed that older participants recruited a more extensive area of middle/superior temporal gyrus for radial motion. In addition, older participants also showed activation in an area of right inferior frontal gyrus, which was inactive in younger adults. Increased activation in this area was found before in a sample of younger adults viewing biological motion stimuli with reduced motion information. This was interpreted as possibly reflecting increased effort caused by the more complex nature of the unusual motion stimuli, and a similar explanation might be appropriate for our finding. As this is a cross-sectional study, it is possible that the younger group's higher familiarity with computer-generated low-level motion allowed them to focus on the attentional task more readily, thus expending less overall effort than the older group. However, studies on face processing also report increased frontal activation in older compared to younger adults. While Gunning-Dixon et al. interpreted this increased frontal activation as possibly reflecting increased task-related effort, Lee et al.
used an adaptive behavioral task and found a positive association of frontal recruitment and better performance in the older group.Furthermore, Goh et al. reported overall increased frontal activation in an older compared to a younger group during a face adaptation task in the absence of behavioral performance differences."In line with these findings, older adults' increased frontal activation in our study might reflect compensatory activation for an age-related decline in overall cognitive functioning—specifically affecting low-level motion processing—as detailed in the scaffolding theory.Future studies of motion processing might therefore benefit from employing additional analysis methods such as dynamic causal modeling to identify functional links during task completion.Even though we found age-related differences in hMT+ activation for low-level motion, this area did not show age-related differences for biological motion as tested with scrambled, normal, and random-position point-light walkers.For both age groups, we replicated previous imaging results of increased sensitivity of hMT+ to low-level motion independent of the global stimulus form Grossman et al.In addition, areas traditionally associated with static face processing showed increased activation for biological motion in both age groups.The OFA showed increased activation to normal and scrambled, but not to random point-light walkers.Findings for FFA and for STS are less clear.Results nominally indicate larger activation for normal and for normal and random stimuli than for scrambled biological motion.However, these differences did not reach statistical significance, which might be due to the reduced sample size in the ROI analyses.Thus, future studies should investigate larger samples to obtain higher power for these difficult to define higher-level regions of interest.While STS can be difficult to locate in some participants, detection of FFA and OFA was more arduous in the older group.This is in line with a previous study on face processing in healthy aging which reports a dedifferentiation in the activation of face-specific regions, presumably making it more difficult to functionally locate these areas with the predefined contrasts.Future studies might thus consider employing different thresholds or different contrasts for the functional ROI definition in younger and older participants, although this would come at the high cost of reduced consistency in the analyses across groups.In addition, it might be worthwhile to include a localizer paradigm for body selective regions in future studies.While previous studies reported increased sensitivity of FFA/OFA to biological motion there is also some indication
This functional magnetic resonance imaging (fMRI) study investigated the neural correlates of low-level radial motion processing and biological motion processing in 19 healthy older adults (age range 62–78 years) and in 19 younger adults (age range 20–30 years).Whole-brain comparisons showed increased temporal and frontal activation in the older group for low-level motion but no differences for biological motion.
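For readers less familiar with the behavioural measure used in the correlation analyses above, the sketch below shows the standard computation of sensitivity d' from hit and false-alarm rates and a Pearson correlation with peak ROI activation. All values are hypothetical and the snippet does not use the study's data.

```python
# Minimal sketch (hypothetical values, not the study's data) of computing
# behavioural sensitivity d' from hit and false-alarm rates and correlating it
# with peak ROI activation, as described for the target-detection task.
import numpy as np
from scipy.stats import norm, pearsonr

def d_prime(hit_rate, fa_rate, eps=1e-3):
    """d' = Z(hit rate) - Z(false-alarm rate), with rates clipped away from 0/1."""
    hit = np.clip(hit_rate, eps, 1 - eps)
    fa = np.clip(fa_rate, eps, 1 - eps)
    return norm.ppf(hit) - norm.ppf(fa)

rng = np.random.default_rng(2)
hits = rng.uniform(0.6, 0.8, size=19)            # hypothetical hit rates (~70%)
fas = rng.uniform(0.05, 0.2, size=19)            # hypothetical false-alarm rates
peak_beta = rng.normal(1.0, 0.3, size=19)        # hypothetical peak ROI activations

r, p = pearsonr(d_prime(hits, fas), peak_beta)
print(f"r = {r:.2f}, p = {p:.3f}")
```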
of increased extrastriate body area sensitivity, as well as of the FFA sensitivity being driven by an overlap of FFA and the fusiform body area.Alternatively, future studies could strive to develop a localizer contrasting point-light walkers with face and/or body stimuli to identify areas which are particularly sensitive to the point-light stimuli.Our findings are in line with the results of previous psychophysiological studies: Age-related decline has been shown to strongly affect tasks of low-level motion, which in turn are strongly related to activation in dorsal area hMT+.In contrast, the perception of biological motion has been found to remain relatively intact in healthy aging, possibly because there is an increasing reliance on form perception over motion processing with increasing age.It seems somewhat surprising that the results did not show increased STS activation for the normal compared to the scrambled condition as this has previously been reported in the literature.There are a few potential explanations for this: first and foremost, it has to be noted that the behavioral task in our study was stimulus-unrelated.This served to avoid potentially confounding effects of attentional allocation to the stimuli in the different conditions and also across age groups.Such attention-related differences would have confounded with any differences in neural activation between age groups or conditions.In line with the rationale guiding this design, no correlations between behavioral task performance and stimulus-related activation were found for any of the 2 groups.Thus, alternative interpretations related to distraction suppression and interference control can be excluded with reasonable confidence.However, tasks in the behavioral studies mentioned above were mostly stimulus-related, and activation in STS for normal compared to scrambled conditions was mostly found in studies where attention was allocated to the stimulus."Grossman et al., for example, asked participants to perform a 1-back task that directed participants' attentional focus on the presented stimuli, and Michels et al. 
asked participants to perform a stimulus-related forced choice discrimination task.Therefore, focusing attention away from the stimuli in our study might have attenuated potentially existing differences between these 2 stimulus categories.It is therefore possible that an increased sample size might have been required to obtain the same results as previous studies, which focused attention directly on the presented stimuli.Second, the addition of random point-light walkers besides the usually tested normal and scrambled point-light walkers might have influenced the results.The only other fMRI study that previously investigated random point-light walkers did not use scrambled walkers, and as the ROIs in this study were defined based on anatomical landmarks, these results are very difficult to compare to the results obtained here.While previous studies on dynamic facial motion still found the hypothesized effects despite a very difficult task that was unrelated to the presented stimuli, effects were more pronounced when the behavioral task increased attentional focus on the presented stimuli.Although future studies might benefit from focusing the behavioral task on the presented stimuli while still taking care to avoid attentional biases in the different conditions, finding such a task that furthermore causes no major performance differences between the two age groups might not be possible.It further has to be noted that the behavioral task employed here was a simple target detection task with participants showing hit rates of around 70%.In line with attentional load theory as well as corresponding previous research, it should therefore be assumed that participants were easily able to focus on the task at hand while also processing the simultaneously presented task-irrelevant motion stimuli.As previous research points to an age-related decline in the processing and tracking of multiple objects, it might indeed be preferable to present a very simple central task while investigating stimuli presented “in the background”.However, as psychophysiological studies of biological motion perception point to a possibly age-related decrease in direction discrimination with increasing stimulus complexity, it might also be promising to employ stimulus material of increased complexity to test if neural processing under these conditions still remains unchanged with increasing age.This first fMRI study of motion perception in healthy aging supports the notion of an age-related change in low-level motion processing.Our results indicate that this change in visual area hMT+ necessitates the recruitment of additional regions in the temporal and frontal cortex for processing low-level motion.In contrast, there is little activation difference between younger and older adults in areas related to biological motion processing.This supports the hypothesis that the processing of biological motion is less affected by healthy aging than the processing of low-level motion.The authors have no actual or potential conflicts of interest.
Behavioral studies have found a striking decline in the processing of low-level motion in healthy aging, whereas the processing of more relevant and familiar biological motion is relatively preserved. Brain regions related to both types of motion stimuli were evaluated, and the magnitude and time courses of activation in those regions of interest were calculated. Time-course analyses in regions of interest known to be involved in both types of motion processing likewise did not reveal any age differences for biological motion. Our results show that low-level motion processing in healthy aging requires the recruitment of additional resources, whereas areas related to the processing of biological motion seem to be relatively preserved.
from simulation runs with different combinations of parameter estimates need to be compared to real data.Yet, the number of simulation runs required for such an analysis increases exponentially with the number of parameters; hence, calibration remains a modelling process that is often neglected, but which should be incorporated in individual-based modelling to generate more accurate predictions.Different approaches have been used to optimize model calibration, such as manual calibration, statistical calibration techniques using maximum likelihood or Bayesian approaches, or inverse modelling.For instance, Wiegand et al. use a Monte Carlo Filtering method, in which the different cost measurements are successively applied as filters.We used a different approach: we used the highest cost out of the four costs for home range characteristics, hourly displacements, depth preference, and group size distribution to represent the maximum cost per parameter combination.Using this method, the worst fit of the model to the empirical data represents a parameter combination’s cost, which will result in similar results as the method proposed by Wiegand et al.To improve the current model, more survey data is needed, especially on fish distributions.In our model, we assumed a homogeneous distribution of prey, as data on fish distributions is lacking.Yet, it is widely known that fish are not distributed evenly across the ocean, but are rather clustered in patches of high nutritional value.Dolphins subsequently search for such dense aggregations, a behaviour which is currently not incorporated in our model.Information on fish distributions would much improve the current Maui dolphin movement model.Distribution maps generated with this model will improve the current spatial distribution estimates of the Maui dolphin population and provide key input for comparison of different conservation measures.By overlaying the estimated Maui dolphin distribution map with a map indicating current fishing activities, we can pinpoint the areas where human-dolphin interactions are most likely to cause casualties.Furthermore, for different future scenarios of protection zones, we will be able to estimate the impact on the Maui dolphin population.Calculation of the overlap between Maui dolphin habitat and fishing activity as well as predictions of future scenarios will be important future directions of research.
The current anthropogenic impacts on nature necessitate more research for nature conservation and restoration purposes.To answer ecological and conservation questions concerning endangered species, individual-based modelling is an obvious choice.Individual-based models can provide reliable results that may be used to predict the effects of different future conservation strategies, once calibrated correctly.Here, we calibrate an individual-based model of Maui dolphin movement, which generates Maui dolphin probability distribution maps.We used sighting data for calibration of the chosen parameter combinations; for each simulation run, collected simulated data was compared to the empirical survey data, resulting in cost (Badness-of-Fit) estimates.Using costs of four different aspects of dolphin behaviour, we estimated the most likely parameter combinations.With optimized parameter values, Maui dolphin probability distribution maps were created, resulting in distributions that fall well outside of the current protection zones where either gillnets or trawling or both are prohibited.With these results, protected areas can be properly adjusted to the estimated distribution of this critically endangered species and so aid in their conservation.
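The max-of-costs calibration filter described above lends itself to a short illustration. The Python sketch below assigns each parameter combination four hypothetical Badness-of-Fit costs (home range, hourly displacement, depth preference, group-size distribution), takes the worst of the four as the combination's overall cost, and ranks combinations by that value; the parameter names and cost values are invented for the example, not taken from the model.

```python
# Minimal sketch of the calibration filter described above: each parameter
# combination gets four Badness-of-Fit costs; its overall cost is the worst
# (maximum) of the four, and combinations with the lowest maximum cost are
# preferred. Parameter names and cost values are hypothetical.
param_combinations = {
    # (step_length, turning_angle_sd): [home range, displacement, depth, group size]
    (0.5, 20): [0.30, 0.25, 0.40, 0.35],
    (0.5, 40): [0.20, 0.45, 0.30, 0.25],
    (1.0, 20): [0.15, 0.20, 0.25, 0.22],
    (1.0, 40): [0.50, 0.10, 0.15, 0.20],
}

overall_cost = {params: max(costs) for params, costs in param_combinations.items()}
ranked = sorted(overall_cost, key=overall_cost.get)

for params in ranked:
    print(params, "max cost =", overall_cost[params])
print("best parameter combination:", ranked[0])
```

Because the overall cost is the worst of the four fits, a parameter combination is only retained if it reproduces all four aspects of dolphin behaviour reasonably well, which is the rationale given above for preferring this over averaging the costs.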
The AS04-adjuvanted human papillomavirus HPV-16/18 vaccine, was first authorized in 2007 and is now licensed in more than 130 countries.The vaccine is indicated from 9 years of age for the prevention of premalignant cervical lesions and cervical cancer causally related to certain oncogenic HPV types.The target population for AS04-HPV-16/18 includes young women of child-bearing age in whom pregnancy is frequent.Thus, determining pregnancy outcomes following inadvertent exposure to AS04-HPV-16/18 during pregnancy is important in this population.As for most vaccines, pre-licensure clinical studies were not designed to evaluate the safety of AS04-HPV-16/18 in pregnant women, and data at the time of licensure were insufficient to recommend vaccination during pregnancy.In the clinical program, inadvertent exposures during pregnancy occurred despite precautionary measures to prevent pregnancy.In a pooled analysis of pre-licensure clinical trial data, there were 1737 pregnancies reported over the follow-up period .Pregnancy outcomes were very similar between groups exposed to AS04-HPV-16/18 or to control vaccines, except for an imbalance in the number of spontaneous abortions in women 15–25 years of age who became pregnant around the time of AS04-HPV-16/18 vaccination .The Center for Biologics Evaluation and Research in the United States requested to further investigate the pregnancy outcome of SA.To this end, the Pharmacovigilance plan was designed to explore initiatives to generate more data on outcomes of exposed pregnancies.This included close monitoring of exposed pregnancies in ongoing/active clinical trials, an observational epidemiological study, passive surveillance through spontaneous reporting and establishment of a pregnancy exposure registry.The requested post-licensure epidemiological study investigated the rate of SA among women vaccinated with AS04-HPV-16/18 around pregnancy onset using the Clinical Practice Research Datalink in the United Kingdom.The UK was selected because of high uptake of AS04-HPV-16/18 following a national school-based immunization campaign .The study assessed the frequency of SA among 15-to-25-year-old women who had received AS04-HPV-16/18 between 30 days before and 45 days of gestation, compared to a non-exposed control group .The study showed no evidence of an increased risk of SA in young women inadvertently vaccinated with AS04-HPV-16/18 during the defined exposure period.The frequency of SA was 11.6% among 207 exposed women, and 9.0% among 632 non-exposed women .A second pooled analysis of clinical safety data evaluated more than 57,000 women of whom more than 33,000 received AS04-HPV-16/18 in clinical trials and follow-up studies .Of the 10,476 reported pregnancies, 935 were exposed to vaccination within 60 days prior to pregnancy onset until delivery.Among the 935 exposed pregnancies, congenital anomalies were reported in 12 cases in AS04-HPV-16/18 recipients, and in 11 cases in women who had received control vaccines.No concerns were raised with regards to pregnancy outcomes including SA, stillbirth, elective termination and congenital anomalies, or other indicators such as weight and gestational age at delivery .The pregnancy exposure registry was established in the UK and the US.The goal of the registry was to evaluate the risks of adverse pregnancy outcomes, including major teratogenic effects, in the offspring of women inadvertently exposed to AS04-HPV-16/18 during pregnancy.The registry was closed on 17 November 2015 and the results communicated 
to the European Medicines Agency. Here we report the findings from GSK's pregnancy registry, which completes the pharmacovigilance plan to assess outcomes in pregnancies exposed to AS04-HPV-16/18. Congenital anomaly: any morphological, functional and/or biochemical developmental disturbance in the embryo or fetus, whether detected at birth or not. The term congenital anomaly is broad and includes congenital abnormalities, fetopathies, genetic diseases with early onset, developmental delay and others. Congenital anomalies were defined following the Centers for Disease Control and Prevention Metropolitan Atlanta Congenital Defects Program. Structural defects were classified as minor or major. Exposure: vaccination with AS04-HPV-16/18 from 60 days before the estimated conception date until delivery. Prospective report: where the outcome of the pregnancy was not known at the time of reporting. Retrospective report: where the outcome of the pregnancy was known at the time of reporting. Spontaneous abortion: intrauterine death up to 22 weeks of gestation; includes miscarriage and missed abortion. Elective termination: elective/therapeutic termination or induced abortion. Stillbirth: intrauterine death occurring after week 22 of gestation. First trimester: from last menstrual period through week 13 of gestation. Second trimester: weeks 14–27 of gestation. Third trimester: week 28 through term. In September 2008, universal immunization using the AS04-HPV-16/18 vaccine was initiated in the UK for 12-to-13-year-old girls, with a catch-up program among 14-to-18-year-olds. Coverage of the 3-dose regimen was 81% in 2010. As part of an enhanced safety surveillance established by the Medicines and Healthcare products Regulatory Agency, GSK set up a Pregnancy Exposure registry for AS04-HPV-16/18 in the UK in collaboration with Public Health England. A registry managed entirely by GSK was established after the registration of AS04-HPV-16/18 in the US in September 2009. Registration to either registry was always voluntary and could be prospective or retrospective. In each country, reporting of adverse events and vaccine-exposed pregnancies to the registry was made by telephone through GSK's 'Call in' system. Members of the public had access to the registry through the GSK registry website. Enrolment to the UK Registry: UK healthcare providers were informed through the Green Book, HPV vaccine campaign materials and the PHE website to report all cases of pregnant women who received HPV vaccines to PHE. On receipt of initial reports made by telephone or via the website, PHE sent out a registry enrolment form. Each case was followed up until approximately 10 weeks after the estimated date of delivery, at which time a second form was sent to the healthcare provider. At each step, two follow-up requests and one phone call were made to encourage completion of
Exposure was defined as vaccination with AS04-HPV-16/18 between 60 days before the estimated conception date and delivery. These registry data complement other data from clinical trials and post-marketing surveillance showing no evidence that vaccination with AS04-HPV-16/18 during the defined exposure period (from 60 days before conception until delivery) increases the risk of teratogenicity.
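As a reading aid (not part of the registry protocol), the outcome, trimester and exposure definitions quoted above can be captured in a short sketch; the function names are illustrative and the week boundaries follow the definitions given in the text.

# Sketch of the registry's outcome, trimester and exposure definitions (illustrative only).

def classify_fetal_loss(gestation_weeks: float) -> str:
    """Intrauterine death up to week 22 is a spontaneous abortion; after week 22, a stillbirth."""
    return "spontaneous abortion" if gestation_weeks <= 22 else "stillbirth"

def trimester(gestation_weeks: float) -> int:
    """First trimester: LMP through week 13; second: weeks 14-27; third: week 28 to term."""
    if gestation_weeks <= 13:
        return 1
    return 2 if gestation_weeks <= 27 else 3

def exposed(vaccination_day_relative_to_conception: int, days_to_delivery: int) -> bool:
    """Exposure: vaccination from 60 days before the estimated conception date until delivery."""
    return -60 <= vaccination_day_relative_to_conception <= days_to_delivery

print(classify_fetal_loss(20), trimester(20), exposed(-30, 280))  # spontaneous abortion 2 True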
had structural anomalies and two had non-structural defects. The nine minor structural defects affected multiple organs and included benign conditions that are usually self-limiting or require no treatment. The rate of ankyloglossia is within the range of 0.02–4.8% reported in an unexposed population. In seven out of the nine cases of minor structural defect, the exposure to AS04-HPV-16/18 occurred outside the sensitive period for organogenesis. There were two cases in which a temporal association with vaccination could not be excluded. The first was a male infant born with mild hypospadias whose mother was exposed to AS04-HPV-16/18 at 12 weeks of gestation. Hypospadias results from a ventral fusion defect of the urethra during embryonic development and occurs between 8 and 12 weeks of gestation in humans, indicating that exposure to AS04-HPV-16/18 occurred at the very end of this developmental window. No surgical intervention was required to repair the defect. The same infant was diagnosed with unspecified pyloric stenosis after birth and underwent a successful pyloroplasty. The second case was an infant born with a pilonidal cyst. The mother was exposed to AS04-HPV-16/18 20 days after the estimated pregnancy onset. The development of the neural tube starts around the third week of gestation, and a temporal association with vaccination could not be excluded. Pilonidal cyst is a common anomaly, with an incidence of approximately 0.26 per 1,000 in the total population. The rate of major structural defects among exposed pregnancies resulting in live births was 4.5%, which is higher than the expected rate of 2–3% reported in the US and the UK, but which may have resulted from reporting bias. In three cases, the mother was exposed to AS04-HPV-16/18 outside the sensitive period of organogenesis. In three cases, the temporal association with vaccination was either unassessable due to limited data or unlikely. The remaining case had a temporal association with the second dose of vaccination: unilateral congenital cystic kidney disease occurred in a male child born to a 17-year-old mother who was exposed to two doses of AS04-HPV-16/18, the first dose prior to conception and the second dose during week 6 of gestation. The mother also received prochlorperazine during the first and second trimester of her pregnancy. There were 49 reports with exposure to at least two doses of AS04-HPV-16/18 during the defined risk period. Of these, four infants had a congenital anomaly, three pregnancies ended in SA with no apparent congenital anomalies, and no patterns were observed. For 11/18 live births with congenital anomalies, maternal vaccination occurred either before the estimated pregnancy onset or within the first 14 days after conception. Exposure to hazardous substances during this peri-conceptional period usually causes embryonic death rather than injury. The available data in this registry are insufficient to assess the likelihood of a causal relationship between AS04-HPV-16/18 during pregnancy and the risk of congenital anomalies. However, it is important to note that the congenital anomalies do not appear to concentrate in a single organ system, and no clustering was observed with respect to the timing of exposure and the sensitive period of organogenesis. In addition, all reported congenital anomalies are known and do not constitute a new syndrome. These findings are consistent with published data. Potential limitations in the data include the risk of reporting bias; because reporting was voluntary, prospectively reported pregnancies may have led to reporting bias towards high-risk
pregnancies.The number of reports compiled in the registry over the 8-year reporting period was low, reflecting the age-group targeted for the national immunization program in the UK.The number of exposed pregnancies is likely to be under-reported.Finally, normal outcomes are less likely to be reported than abnormal outcomes, and there may have been more loss of follow-up among pregnancies with normal outcomes.The proportion of reports that were lost-to-follow-up in this registry was approximately 24%.Our results are aligned with those reported from a pregnancy registry established in Canada, France and the US to assess exposure to the quadrivalent HPV vaccine .In this larger registry of 1752 evaluable reports, the rate of SA was approximately 6%, and 2.5% of prospectively reported exposed pregnancies had a major congenital anomaly .As part of the pharmacovigilance plan to investigate the effects of exposure to AS04-HPV-16/18 during pregnancy, pregnancy outcomes have been collected and evaluated in clinical trials, spontaneous worldwide sources, a large post-authorization observational study, and in the pregnancy registry .The data from the available sources are not suggestive that vaccination with AS04-HPV-16/18 during pregnancy or around pregnancy onset increases the risk of abnormal pregnancy outcomes, in particular congenital anomalies.Cervarix is a trademark of the GSK group of companies.
Objective: To assess pregnancy outcomes after exposure to AS04-HPV-16/18 vaccine (Cervarix, GSK, Belgium) prior to, or during pregnancy, as reported to a pregnancy registry.Methods: A pregnancy exposure registry was established to collect data in the United Kingdom and the United States.Reporting was voluntary.There was no clustering of outcomes with respect to the timing of exposure.There were 18 infants born with a congenital anomaly of which nine were minor structural defects, seven were major structural defects, one was a hereditary disorder and one was likely the result of a congenital infection.In three cases of structural defect (two minor and one major), there was a temporal association to vaccination during the critical developmental period of gestation.There was no cluster or constellation of congenital anomalies suggestive of possible teratogenesis.Conclusion: The pharmacovigilance plan to investigate the effects of inadvertent exposure to AS04-HPV-16/18 vaccine during pregnancy included assessment of pregnancy outcomes among women enrolled in clinical trials, evaluation of pregnancy exposure reports from all countries as part of routine passive safety surveillance, a large, well conducted post-authorization observational study, and the pregnancy registry.
efficiencies are displayed in Fig. 6a. The coulombic efficiency remains constant at around 82% over the cycling test, as does the voltage efficiency, which is one of the highest reported and is ascribed to the fast kinetics and remarkably low overpotentials exhibited by the battery. On account of this, the averaged energy efficiency of the battery is 67%, which is comparable to, and even higher than, that obtained in conventional neutral aqueous RFB. Compared to our previous work based on an IL-based ABS, the PEG-based ABS battery reported here is more environmentally friendly, less expensive, contains a significantly higher concentration of active species, demonstrates stable performance over longer cycling and exhibits good electrochemical performance. The stability was also confirmed by the potential profiles of the individual electrolytes and the interface, which remain stable over cycling. The excellent long-term stability of the battery is likely attributable to the absence of crossover during battery operation. As mentioned before, the crossover/cross-contamination of the active species in this battery concept was appraised using the determined partition coefficients. Once equilibrium is established, there is no significant diffusive migration of the active species between the phases through the interface. Thus, the study of partition affinity offers the possibility of controlling the composition and avoiding crossover during operation, which ensures stable and reliable cycling performance. In this work we demonstrate that Aqueous Biphasic Systems can act as a Total Aqueous Membrane-Free Battery, using organic redox compounds with appropriate redox potentials and suitable partition coefficients. Through the study of the partitioning and electrochemical behaviour of several organic active molecules in PEG-based ABS, TEMPO and MV were found to be the most suitable for the catholyte and anolyte, respectively. The different compositions of the phases/electrolytes were shown to have a significant influence on battery performance, since the polymer-rich phase showed lower conductivity and a smaller diffusion coefficient. However, both electrolytes, as well as the interface, displayed low overpotentials during the charge-discharge tests, comparing favourably with conventional RFB, where the membrane contributes greatly to the internal resistance. The open circuit voltage of the battery coincides with the theoretical one, and the capacity utilization was as high as 68% at high current density. The maximum power density was as high as 23 mW cm−2 at 30 mA cm−2, more than 12 times higher than that of our pioneering example of a Membrane-Free Battery. In addition, the battery showed excellent long-term cycling, with a capacity retention of 99.9% over 550 cycles and an exceptional round-trip efficiency. This excellent long-term performance of the battery confirms that the thermodynamic study of the partitioning of the active species is a good strategy to avoid crossover during battery operation. These results highlight the potential of Membrane-Free Batteries based on ABS as a new energy storage technology, overcoming some technical hurdles of conventional RFB related to membrane issues, corrosive electrolytes or expensive and limited metallic reactants. Besides the complete removal of the membrane, this battery offers additional advantages, since the employed PEG-based ABS is inexpensive, non-corrosive, non-flammable and environmentally friendly. For these reasons, this membrane-free
battery can be suitable for future applications in non-industrialized areas, where contamination with toxic and corrosive vanadium electrolytes can be devastating. As an inherent aspect of Membrane-Free battery technology, we anticipate that self-discharge will be one of the most important challenges that this new technology must address in the near future. Some aspects such as cell design, operating conditions and fluid dynamics, and their effect on battery performance, including self-discharge processes, are currently being investigated in our laboratory.
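As a quick consistency check on the efficiencies reported above, and assuming the conventional decomposition in which round-trip energy efficiency is the product of coulombic and voltage efficiency, the quoted figures agree:

\eta_{E} = \eta_{C}\,\eta_{V} \;\Rightarrow\; \eta_{V} \approx \frac{0.67}{0.82} \approx 0.82

That is, a voltage efficiency of roughly 82% is implied under this assumption, in line with the low overpotentials reported for both electrolytes and the interface.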
Redox Flow Batteries (RFB) stand out as a promising energy storage technology to mitigate the irregular energy generation from renewable sources.Recently, we presented a revolutionary Membrane-Free Battery based on organic aqueous/nonaqueous immiscible electrolytes that eludes both separators and vanadium compounds.Here, we demonstrate the feasible application of this archetype in Aqueous Biphasic Systems (ABS) acting as an unprecedented Total Aqueous Membrane-Free Battery.After evaluating several organic molecules, methylviologen (MV) and 2,2,6,6-Tetramethyl-1-piperidinyloxy (TEMPO) were selected as active species due to their optimum electrochemical behavior and selective partitioning between the phases.When connected electrically, this redox-active ABS becomes a Membrane-Free Battery with an open circuit voltage (OCV) of 1.23 V, high peak power density (23 mWcm−2) and excellent long-cycling performance (99.99% capacity retention over 550 cycles).Moreover, essential aspects of this technology such as the crossover, controlled here by partition coefficients, and the inherent self-discharge phenomena were addressed for the first time.These results point out the potential of this pioneering Total Aqueous Membrane-Free Battery as a new energy storage technology.
Intensive insulin therapy is the standard of care in the management of type 1 diabetes.1,Although modern insulin therapy has led to a reduction in the frequency of severe hypoglycaemic events,2 tight glycaemic control remains a predisposing factor to hypoglycaemia and its effect is amplified by duration of the disease.3,Recurrent exposure to hypoglycaemia might lead to attenuated counter-regulatory response to subsequent hypoglycaemic events and, ultimately, impaired hypoglycaemia awareness.4,Frequent hypoglycaemic episodes might have a profound effect on behaviour and diabetes self-management, adversely affecting quality of life.5,The advent of continuous glucose monitoring has led to improved glycaemic control and reduced exposure to hypoglycaemia, including severe hypoglycaemia.6,7,The benefits of hypoglycaemia reduction are enhanced in hypoglycaemia-prone individuals when CGM is integrated with the threshold suspend feature of insulin pumps, which allows insulin delivery to be suspended automatically for up to 2 h when the pre-set glucose threshold is reached8 or predicted.9,Although these technologies have been shown to reduce the incidence of severe hypoglycaemic events, including those leading to hypoglycaemic seizure or coma,10 they do not address the issue of variability in insulin requirements,11 which remains an unmet need in patients with type 1 diabetes.Evidence before this study,We searched PubMed from database inception until Oct 24, 2016, using the search terms “type 1 diabetes” AND AND, for reports of randomised controlled trials published in English only.We identified 14 randomised trials assessing the use of closed-loop insulin delivery outside hospital settings.In two randomised home studies in participants with mean HbA1c above 7·5%, long-term use of closed-loop insulin delivery led to a significant decrease in HbA1c and improvement in mean glucose and proportion of time spent within, below, and above the glucose target range.However, no studies have so far assessed closed-loop use in non-pregnant adults with HbA1c below 7·5%.Added value of this study,To our knowledge, this study is the first randomised controlled study in free-living adults with type 1 diabetes whose HbA1c is below 7·5%.We showed that, compared with usual insulin pump therapy, day-and-night closed-loop insulin delivery significantly improved overall glycaemic control while reducing the burden of hypoglycaemia.Beneficial effects on glycaemic outcomes included increased time with glucose concentration in target range, reduced time with glucose concentration below and above target range, and decreased mean glucose concentration and glycaemic variability.The findings of increased time in target range and reduced overall hypoglycaemia risks and sensor glucose variability were similarly observed and consistent during night-time and daytime periods of closed-loop use.These outcomes were achieved without change in total insulin delivered.Closed-loop application thereby provides a novel therapeutic approach to optimise glycaemic control in hypoglycaemia-prone adults with HbA1c below 7·5%.Closed-loop application was well tolerated in this population with advanced self-management skills, and might provide clinically significant benefits to their overall diabetes care.Implications of all the available evidence,The use of day-and-night closed-loop insulin delivery might further improve glycaemic control while reducing the risk and burden of hypoglycaemia in adults with type 1 diabetes whose HbA1c is below 
7·5%.Results from our study, together with those from previous studies in different target groups, support the benefits of closed-loop insulin delivery in a broad population with type 1 diabetes.Closed-loop insulin delivery—also known as the artificial pancreas—is a therapeutic approach that is progressing quickly.Closed-loop delivery differs from conventional pump therapy and threshold suspend technology; it has a control algorithm that autonomously increases and decreases subcutaneous insulin delivery in response to real-time sensor glucose levels.12,Results from randomised trials13–16 of day-and-night closed-loop use during unsupervised free-living conditions in children, adolescents, and adults have shown improved glycaemic outcomes, reduced risk of non-severe hypoglycaemic events, and positive user experience.However, outside of pregnancy,15 none of the studies have focused specifically on patients with well controlled diabetes who might have a heightened, but masked, risk of hypoglycaemia and glucose variability.In this study, we aimed to investigate whether day-and-night hybrid closed-loop insulin delivery—in which manual administration of prandial bolus was implemented by the user—under free-living conditions in adults with HbA1c below 7·5% can improve glucose control while alleviating the risk of hypoglycaemia, thus informing whether the use and reimbursement of closed-loop systems is justified in this particular population.In this two-centre, open-label, randomised, crossover trial, we showed that, in adults with type 1 diabetes and HbA1c below 7·5%, day-and-night hybrid closed-loop insulin delivery significantly improved overall glucose control while reducing hypoglycaemia progressively by 50–75% at lower glucose thresholds compared with usual insulin pump therapy.Beneficial effects on glycaemic outcomes included increased time spent with glucose concentration in target range, reduced time with glucose concentration above and below the target range, and decreased mean glucose concentration and glycaemic variability.The findings of increased time spent in the glucose concentration target range, reduced hypoglycaemia, and decreased glycaemic variability were similarly observed during night-time and daytime periods.These outcomes were achieved without change in total insulin delivery.Hypoglycaemia is associated with increased morbidity and mortality in patients with type 1 diabetes.25,A reduction of at least 30% in risk of hypoglycaemia, as observed in our study, is considered clinically relevant.26,Threshold and predictive low-glucose suspend insulin delivery systems8–10 cannot step-up insulin delivery and thus do not address the issue of hyperglycaemia.The advantage of a closed-loop system such as ours is the responsive, graduated modulation of insulin delivery, both below and above the pre-set pump regimen.This notion is supported by findings from our study, which showed that reduction in mean glucose concentration was accompanied by significant reduction in all the measured hypoglycaemia parameters.The multiplicity of beneficial outcomes—including increased time with optimum glucose control and reduced time
We aimed to investigate whether day-and-night hybrid closed-loop insulin delivery can improve glucose control while alleviating the risk of hypoglycaemia in adults with HbA1c below 7.5% (58 mmol/mol).Interpretation Use of day-and-night hybrid closed-loop insulin delivery under unsupervised, free-living conditions for 4 weeks in adults with type 1 diabetes and HbA1c below 7.5% is safe and well tolerated, improves glucose control, and reduces hypoglycaemia burden.
might have increased the participants' device burden and negatively affected some aspects of user feedback.The heterogeneity of sensor use in the control period might have confounded the reported glycaemic outcomes.However, the individualised therapy approaches used in the control period reflect present clinical strategies adopted by this population to achieve their baseline HbA1c, and do not diminish the incremental effects of closed-loop use.20,To conclude, day-and-night closed-loop insulin delivery in adults with type 1 diabetes and HbA1c below 7·5% significantly improved glycaemic control while reducing the risk of hypoglycaemia.Thus, in adults who are actively engaged with self-management, closed-loop insulin delivery might provide additional benefits, justifying its use in this particular population.The overall positive feedback from participants reflected the acceptance of closed-loop technology during daily diabetes management, albeit with some limitations to its use, which might affect user adherence and experience.Larger and longer studies are needed to validate our findings."In this open-label, randomised, crossover study, we recruited adults attending diabetes clinics at Addenbrooke's Hospital and Medical University of Graz.Patients were eligible if they had type 1 diabetes, non-hypoglycaemic C-peptide concentration less than 100 pmol/L, and HbA1c less than 7·5%; had been using insulin pump therapy for at least 6 months; had knowledge of insulin self-adjustment; and had been self-monitoring their blood glucose concentration at least six times per day.Exclusion criteria included established nephropathy, neuropathy, or proliferative retinopathy; total daily insulin dose of 2·0 U/kg or more; hypoglycaemia unawareness; severe visual or hearing impairment; pregnancy; or breastfeeding.The study was approved by the local ethics committees and national competent authorities in the UK and Austria, and the protocol is shown in the appendix.All participants provided written informed consent before the start of study-related procedures.Participants were randomly assigned to receive either day-and-night closed-loop insulin delivery followed by usual pump therapy with blinded CGM, or vice versa.Following the run-in period, the order of the two study periods was randomly determined with an automated web-based programme with locally stratified, randomly permuted blocks of four.Participants and investigators analysing study data were not masked to treatment allocation."After screening, all participants underwent a 2–4 week run-in period, during which they were trained to use the study insulin pump and CGM device, and calibrated the real-time CGM device according to manufacturer's instructions.Compliance assessment after the run-in period was based on at least 10 days of CGM use in the past 2 weeks.Participants then received insulin via the day-and-night closed-loop system for 4 weeks and via usual pump therapy with blinded CGM for 4 weeks, in the order assigned at randomisation, with a 2–4 week washout period in between.During the washout period, participants returned to their usual care and did not use the study CGM device.Identical study insulin pumps and CGM devices were used during the two treatment periods.Participants used rapid-acting insulin analogue normally applied in their usual clinical care.The built-in bolus wizard of the study insulin pump was used by participants during both treatment periods to calculate insulin boluses at mealtimes and when administering correction boluses.The 
study was done under free-living conditions without direct or remote supervision by clinical investigators.Participants were not restricted in their dietary intake or daily activities.Support was available at all times to assist participants in case of clinical or technical issues that arose during the study.Standard local hypoglycaemia and hyperglycaemia treatment guidelines were followed.The FlorenceD2A closed-loop system comprised a model-predictive control algorithm on a smartphone, which communicated wirelessly with a purpose-made translator unit and the study pump through a Bluetooth communication protocol.The CGM receiver was inserted into the translator, which translated a serial USB protocol into a Bluetooth communication protocol.The calculations used a compartment model of glucose kinetics17 describing the effect of rapid-acting insulin and the carbohydrate content of meals on glucose levels.We applied a hybrid closed-loop approach in which participants were required to count carbohydrates and use a standard bolus calculator for pre-meal boluses according to usual practice."The bolus calculations provided by the study pump's built-in bolus calculator took into account carbohydrate content of meals, insulin on board, and entered capillary blood glucose readings.The algorithm was initialised by pre-programmed basal insulin rates downloaded from the study pump."Participants' bodyweight and total daily insulin dose were entered at set-up.During closed-loop operation, the algorithm adapted itself to a particular participant.The treat-to-target control algorithm aimed to achieve glucose concentrations of 5·8–7·3 mmol/L, and adjusted the actual concentration depending on fasting versus postprandial status and the accuracy of model-based glucose predictions.Control Algorithm version 0.3.46 was used which, compared with that used in our previous study,13 included enhanced adaptation of insulin needs based on analysis of past performance and identification of the time of day when insulin needs are consistently lower or higher.Participants were trained to perform a calibration check before breakfast and evening meal.If sensor glucose was above capillary glucose by more than 3 mmol/L, participants were advised to recalibrate the CGM device.These instructions resulted from in-silico assessment of hypoglycaemia and hyperglycaemia risks using the validated Cambridge simulator.18,Safety rules limited maximum insulin infusion and suspended insulin delivery at sensor glucose at or less than 4·3 mmol/L, or when sensor glucose was rapidly decreasing.During the control period, the display of the study CGM device was masked.Participants were allowed to use their own glucose monitoring devices if they were part of their pre-study usual care."The control intervention was chosen according to existing clinical practice and participants' preferences.20",The rationale for the control period was to reflect usual clinical practice and national reimbursement policies, and to
Methods In this open-label, randomised, crossover study, we recruited adults (aged ≥18 years) with type 1 diabetes and HbA1c below 7.5% from Addenbrooke's Hospital (Cambridge, UK) and Medical University of Graz (Graz, Austria).After a 2–4 week run-in period, participants were randomly assigned (1:1), using web-based randomly permuted blocks of four, to receive insulin via the day-and-night hybrid closed-loop system or usual pump therapy for 4 weeks, followed by a 2–4 week washout period and then the other intervention for 4 weeks.Treatment interventions were unsupervised and done under free-living conditions.Funding Swiss National Science Foundation (P1BEP3_165297), JDRF, UK National Institute for Health Research Cambridge Biomedical Research Centre, and Wellcome Strategic Award (100574/Z/12/Z).
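The sensor-calibration and low-glucose safety rules described for the closed-loop system above can be illustrated with a minimal sketch. The 4.3 mmol/L suspend threshold and the 3 mmol/L recalibration criterion are taken from the text; the rate-of-fall cut-off and all names are illustrative assumptions, not the published algorithm.

# Minimal sketch of the safety-rule layer described in the text (illustrative, not the actual controller).

SUSPEND_GLUCOSE = 4.3     # mmol/L: suspend insulin at or below this sensor glucose
FALL_RATE = -0.2          # mmol/L per minute: assumed "rapidly decreasing" cut-off (not specified in the text)
RECALIBRATION_DIFF = 3.0  # mmol/L: sensor above capillary glucose by more than this prompts recalibration

def insulin_suspended(sensor_glucose: float, trend_per_min: float) -> bool:
    """Return True if the safety layer would suspend insulin delivery."""
    return sensor_glucose <= SUSPEND_GLUCOSE or trend_per_min <= FALL_RATE

def needs_recalibration(sensor_glucose: float, capillary_glucose: float) -> bool:
    """Return True if the pre-meal calibration check should trigger a sensor recalibration."""
    return sensor_glucose - capillary_glucose > RECALIBRATION_DIFF

print(insulin_suspended(4.1, -0.1))    # True: sensor glucose at or below 4.3 mmol/L
print(needs_recalibration(9.5, 6.0))   # True: sensor reads 3.5 mmol/L above capillary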
the control period, day-and-night closed-loop insulin delivery reduced the proportion of time with glucose concentration below 3·9 mmol/L by 50%, below 3·5 mmol/L by 65%, below 3·3 mmol/L by 70%, and below 2·8 mmol/L by 76%, as well as the burden of hypoglycaemia by 73%.Closed-loop insulin delivery also reduced the number of nights when glucose concentration was below 3·5 mmol/L for at least 20 min as well as the mean duration of such periods.Compared with usual pump therapy, closed-loop insulin delivery reduced the proportion of time with glucose concentration above the target range by 6·9 percentage points, above 13·9 mmol/L by 3·0 percentage points and above 16·7 mmol/L by 1·2 percentage points.Moreover, all measures of glycaemic variability were significantly lower in the closed-loop period than in the control period: SD of sensor glucose was 0·5 mmol/L lower, coefficient of variation of sensor glucose within days was 5·0% lower, and coefficient of variation of sensor glucose between days was 7·5% lower.Total daily insulin was similar between study periods.Weekly trends in glucose control and insulin delivery are shown in the appendix.Outcomes at night time and daytime were in concordance with outcomes from the combined day-and-night period.Night-time use of closed-loop insulin delivery significantly increased the proportion of time with glucose concentration in target range by 17·2 percentage points, reduced mean glucose concentration by 0·4 mmol/L, and decreased the burden of hypoglycaemia by 89% compared with the control period.SD and between-night coefficient of variation of sensor glucose were significantly reduced by closed-loop insulin delivery.Daytime use of closed-loop insulin delivery increased time spent with glucose concentration in the target range by 8·1 percentage points and reduced the burden of hypoglycaemia by 61%.Closed-loop insulin delivery significantly reduced the SD and between-day coefficient of variation of sensor glucose, in line with measured outcomes of night-time glycaemic variability.Overall mean absolute relative deviation of sensor glucose, using capillary glucose as the reference, was 15·3% and median absolute relative deviation was 10·1%, on the basis of 8447 paired capillary-CGM values.Sensor alarm settings were not altered by participants during closed-loop intervention.No significant difference was seen in sensor glucose availability between study periods.Day-and-night closed-loop delivery was used for 90% of the closed-loop period.The user feedback questionnaire was fully completed by 26 participants, and four of the six questions were answered by all participants.27 of 29 participants were happy to have their glucose levels automatically controlled by the closed-loop system.20 participants stated that they spent less time managing their diabetes while using the closed-loop system, but seven disagreed with this statement.18 expressed fewer concerns about their glycaemic control while using the closed-loop system.14 participants reported improved sleep during the closed-loop period.23 of 26 participants reported feeling safe while using the closed-loop system, and 26 of 27 would recommend it to others.No serious adverse events, episodes of severe hypoglycaemia, or episodes of hyperglycaemia with ketosis were reported.Skin irritations related to sensor use occurred in four participants.Two participants had mild respiratory tract infections.One participant had cystitis during the closed-loop period and one reported allergic rhinoconjunctivitis during 
the control period.All reported adverse events were resolved without sequelae.
Background Tight control of blood glucose concentration in people with type 1 diabetes predisposes to hypoglycaemia.During the closed-loop period, a model-predictive control algorithm directed insulin delivery, and prandial insulin delivery was calculated with a standard bolus wizard.The primary outcome was the proportion of time when sensor glucose concentration was in target range (3.9–10.0 mmol/L) over the 4 week study period.The proportion of time when sensor glucose concentration was in target range was 10.5 percentage points higher (95% CI 7.6–13.4; p<0.0001) during closed-loop delivery compared with usual pump therapy (65.6% [SD 8.1] when participants used usual pump therapy vs 76.2% [6.4] when they used closed-loop).Compared with usual pump therapy, closed-loop delivery also reduced the proportion of time spent in hypoglycaemia: the proportion of time with glucose concentration below 3.5 mmol/L was reduced by 65% (53–74, p<0.0001) and below 2.8 mmol/L by 76% (59–86, p<0.0001).No episodes of serious hypoglycaemia or other serious adverse events occurred.Larger and longer studies are warranted.
We propose a new form of autoencoding model that incorporates the best properties of variational autoencoders and generative adversarial networks. It is known that GANs can produce very realistic samples, while VAEs do not suffer from the mode collapse problem. Our model optimizes the λ-Jeffreys divergence between the model distribution and the true data distribution. We show that it combines the best properties of the VAE and GAN objectives. The objective consists of two parts: one can be optimized using standard adversarial training, and the other is precisely the VAE objective. However, straightforwardly substituting the VAE loss does not work well if we use an explicit likelihood such as a Gaussian or Laplace, which has limited flexibility in high dimensions and is unnatural for modelling images in pixel space. To tackle this problem, we propose a novel approach to training the VAE model with an implicit likelihood defined by an adversarially trained discriminator. In an extensive set of experiments on the CIFAR-10 and TinyImageNet datasets, we show that our model achieves state-of-the-art generation and reconstruction quality, and we demonstrate how we can balance the mode-seeking and mode-covering behaviour of our model by adjusting the weight λ in our objective.
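The abstract does not write the objective out; on the usual reading of a λ-weighted Jeffreys divergence (an assumption here), it is a convex combination of the two Kullback–Leibler directions between the true data distribution p* and the model distribution p_θ:

\mathcal{J}_{\lambda}\left(p^{*}, p_{\theta}\right) \;=\; \lambda\,\mathrm{KL}\!\left(p^{*}\,\|\,p_{\theta}\right) \;+\; (1-\lambda)\,\mathrm{KL}\!\left(p_{\theta}\,\|\,p^{*}\right)

Under this reading, the forward term is the mode-covering, maximum-likelihood-like direction associated with the VAE part of the objective, while the reverse term is mode-seeking and amenable to adversarial estimation, which matches the λ trade-off between mode-seeking and mode-covering behaviour described in the abstract.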
We propose a new form of an autoencoding model which incorporates the best properties of variational autoencoders (VAE) and generative adversarial networks (GAN)
therefore likely due to marsh development confining the channel, then periods of more active currents, storm induced waves, boat wakes and relative sea-level rise disturb this equilibrium, resulting in the outer margin becoming undercut and retreating through beam failure.Of the overall Spartina cover found by GIS analysis to be 374 ha, Type-1 marshes covered 240 ha while Type-2 marshes covered 134 ha.The total of 374 ha is a decrease of 46 ha since 1997, likely resulting from the Type-1 marsh retreat shown in this study and ongoing control efforts.Since 2002 DPIW, the community and NRM North have been successful in maintaining areas north of Middle Point as a ‘Rice Grass Free Zone’, with the aim to eradicate Spartina and control any further spread north.The Spartina management plan for the Tamar Estuary qualitatively estimated that the 2006 Spartina extent was >450 ha.Fig. 9 shows changes to Spartina extent since 1945, showing that the Spartina extent has been relatively stable since 1990, although spread north may be balanced by retreat of Type-1 marshes.Since the introduction of S. anglica in the Tamar Estuary, this study shows that a total of 1,193,441 m3 of material has been trapped, which is comprised of approximately 17% Spartina-derived organic matter and 83% inorganic matter that is predominantly silts and clays.Based on historical profiles, sedimentation rates since the introduction of S. anglica are between 8.7 and 52.4 mm a−1, which is mid-range relative to Spartina-induced net sedimentation rates found in Europe, China and New Zealand.The accuracy of the volume estimate is based on the depth found on profiles of Spartina-trapped sediment, GIS mapping, grouping of like-sized swards and the assumption that the observed sediments depths along transects are representative of the adjacent intertidal zones.It is also dependent on the accuracy of the available imagery."The exclusion of Nelson's Shoal from volume estimates was necessary due to the logistical difficulties with site access and surveying.It is however the largest Spartina sward in the Tamar Estuary and could increase the volume estimations for the entire estuary by 5–10%.Organic content results show that Spartina-derived organic matter typically accounts for between 15 and 28% of material within the marshes throughout the estuary.The remaining 72–85% is therefore lithic material trapped by the Spartina sward since establishment.The total lithic component of approximately 981,397 m3 is likely derived from the North and South Esk River catchment which have a high sediment discharge with landuse change over the last century, and historical dredging of the upper Tamar that released dredge material adjacent to growing Spartina banks.This study demonstrates the extensive morphological change that has occurred in the intertidal zone of the Tamar Estuary since the introduction and establishment of Spartina.Previously unvegetated intertidal areas have been transformed into marsh terraces of sediment buildup that today dominate the estuary.Two Spartina marsh morphologies have established within the Tamar Estuary over the last 50 years.Type-1 marshes are typically characterised by having accreted between 0.5 m and 2.0 m of sediment above the pre-Spartina surface.Surface topography of Type-1 marshes is independent of the pre-Spartina surface morphology, exhibiting a flat to slightly concave upper marsh, a convex ridge in the outer mid marsh, and a relatively steeply graded convex lower marsh.Type-2 marshes are found in the lower 
estuary and are considerably thinner than Type-1 marshes. Surface topography is generally dictated by the underlying pre-Spartina surface, often with the basement material outcropping within Spartina swards, but accretion towards Type-1 marsh morphology is limited by the fine-grained sediment supply. Assessment of temporal change in the marshes, comparing recent profiles with historical baseline studies, indicates that while marshes throughout the estuary continue to increase in vertical elevation, rates of vertical accretion have slowed, and Spartina marsh retreat of between 10 and 15 m has occurred since 1989. Furthermore, the seaward-edge micro-cliff morphology of upper-estuary Type-1 marshes indicates that erosion of the seaward edge throughout the upper estuary has been significant. This study demonstrates the potential for survey, coring and analysis of organic and macrofossil content in stratigraphy to allow volumetric determination of accumulated sediment. The Spartina extent within the Tamar Estuary of approximately 374 ha has in the last 50 years trapped sediment comprising approximately 1,193,441 m3 of material, 14–28% of which is Spartina-derived organic matter. Spartina infestation has therefore led to an increase in organic content within sediment deposits of the Tamar Estuary, combined with a fining of coastal sediment textures and a reduction of accommodation space within the estuary.
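The volume bookkeeping quoted above can be checked with a few lines: the lithic figure of 981,397 m3 implies an organic share of roughly 18%, within the 14–28% range measured. The mean equivalent thickness is only an illustrative derived quantity, since the study obtains volumes from surveyed profiles rather than a uniform thickness.

# Back-of-the-envelope check of the sediment volumes quoted in the text.
total_volume_m3 = 1_193_441   # total material trapped beneath Spartina since 1947
lithic_volume_m3 = 981_397    # inorganic (lithic) component quoted in the text

organic_volume_m3 = total_volume_m3 - lithic_volume_m3
organic_fraction = organic_volume_m3 / total_volume_m3
print(f"Organic volume: {organic_volume_m3:,} m3")    # 212,044 m3
print(f"Organic fraction: {organic_fraction:.1%}")    # ~17.8%

# Illustrative mean equivalent thickness over the mapped 374 ha of Spartina marsh.
area_m2 = 374 * 10_000
print(f"Mean equivalent thickness: {total_volume_m3 / area_m2:.2f} m")  # ~0.32 m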
A difference was found between marshes in the upper and lower estuary. Surface topography of Type-1 marshes of the upper estuary was found to be independent of the pre-Spartina surface morphology, with deeper vertical development, exhibiting a flat to slightly concave upper marsh, a convex ridge in the outer mid marsh, and a relatively steeply graded convex lower marsh. Type-2 marshes of the lower estuary were thinner in vertical development, with surface topography dictated by the underlying pre-Spartina surface. While Spartina marshes are accretionary, surveys demonstrated retreat of the seaward margins throughout the estuary over the past 17 years, and the development of erosional scarps in Type-1 marshes. Spatial mapping identified 374 ha of S. anglica infestation within the Tamar Estuary, with Type-1 marshes occupying 240 ha and Type-2 marshes occupying 134 ha. Topographic profiles and stratigraphic data were used to estimate total sediment volumes trapped by Spartina in the Tamar Estuary, finding approximately 1,193,441 m3 of material to have been trapped beneath Spartina since its introduction in 1947, of which between 14 and 28% is Spartina-derived organic matter.
type, agro-ecological zones and population density. Long-term fallowing was found to be associated with recovery of soil fertility, with improvements in Phosphorus, Total Nitrogen, Sodium, Magnesium, Organic Carbon and Reactive Carbon. A novel finding that can be relevant for densely populated areas, which constantly face pressure from intensive cultivation, is that better management practices can safeguard crop land from imminent deterioration regardless of the period under cultivation. The current study did not find any evidence that farm lands cultivated for longer periods were worse off, so long as farmers engaged in practices that restored organic matter and prevented soil erosion. Innovations in soil and water conservation can delay the threshold after which crop land would enter a phase of indefinite deterioration. The study concludes that although some soil nutrients are deficient in the land in pristine conditions, owing to the attributes of parent materials and environmental factors, cultivation in densely populated areas has resulted in deterioration in soil quality. This deterioration can be linked to endogenous factors, such as agricultural practices and soil physical, chemical and biological attributes, and to exogenous drivers, such as the environmental factors that define agro-ecological zones and institutional factors. Long-term fallowing was found to be a strategy that can effectively restore some of the lost soil nutrients. Nevertheless, there is a need for incentives for organic fertilizer application and for delimited and targeted inorganic fertilizer application. Inorganic fertilizers should be applied with caution, avoiding the application of acidic fertilizers in soils with low pH. Blending of commercial fertilizer brands already on the market with critical micro-nutrients that are naturally deficient in soils is encouraged. Practices that facilitate soil organic matter build-up, such as organic fertilizer application and incorporation of crop residues, should be encouraged to boost organic carbon, which is a key determinant of soil fertility and in turn determines the availability of all major soil nutrients.
The current study seeks to assess sustainability of agricultural land use by identifying the effect of land use change on soil quality using cross-sectional data collected through a household survey among 525 farm households in densely populated areas of Kenya.Soil samples were collected, analyzed and compared across three land use types: undisturbed, semi-disturbed and cultivated.Results indicate that within a period of five decades, agricultural land use has led to a decline in Total Organic Carbon (−72%), Magnesium (−65%) and Boron (−61%), Iron (−22%) and Total Nitrogen (−15%).The drivers of deterioration identified were cutting across inherent properties such as soil chemical (pH), physical (soil mapping unit) and biological (organic carbon) attributes, farmer practices (agricultural commercialization) and exogenous factors (population density and Agro-ecological zones).The study concludes that indeed conversion of land from natural vegetation is associated with deterioration in soil quality and therefore policy needs to create incentives for the build-up of soil organic matter, replenishment of soil macro and micro nutrients.Blending of commercial fertilizers with targeted micro-nutrients, maintenance of soil conservation techniques and long term fallowing are encouraged.
using predictive computer models. All members of the mixture contain a glucopyranoside ring, with four polyethoxylate chains of differing lengths branching from the ring structure. Low molecular weight components of the mixture are expected to be readily biodegradable. However, when all four branches include two or more OCCO subunits, the molecule is not expected to be readily biodegradable, and the predicted half-life values in water and sediment are 180 and 1621 days, respectively, yielding a persistence score of zero. In contrast, glycerin is readily biodegradable and aquatic toxicity benchmarks are >100 mg/L. Post hoc evaluation of the potential environmental risk averted due to this single change in the cream cleanser was completed to track the impact of GAIA-based decision-making in product formulation. The SimpleTreat model was used to estimate aquatic exposure to glycerin and to methyl gluceth-20, had it been retained in this formula. The exposure calculation was based on 2015 sales of the cleanser in the US, Canada, and Caribbean islands, yielding a risk ratio on the order of 10−5 for methyl gluceth-20, representing the averted potential risk, and 10−7 for glycerin, representing an estimate of the actual, GAIA-informed risk incurred. Both risk ratios are exceedingly small, but the substitution resulted in a 207-fold risk reduction. While the absolute risk is de minimis in both cases, incremental risk reduction in wastewater can help reduce the potential for stress to aquatic organisms in exposures to chemical mixtures in STP receiving waters. In order to aggregate environmental data into a simple form readily usable for decision making by non-experts in the field, we used a consistent algorithm and effects endpoints, and incorporated end-user judgment regarding the relative value of different hazard categories. A consistent database and algorithm were developed for a large portfolio of chemically diverse substances by calculating a Base Score using widely available PBT hazard data for all substances, and modifying it using penalties to account for uncommon situations or emerging hazard data. The algorithm we describe is specific to PCP end-use exposures and integrates user preferences for weighting disparate hazards and assigning numeric values to scoring parameters and penalties. We demonstrate how the algorithm's use for PCP products can lead to environmental risk reduction and provide sufficient information so that the approach can be tailored to other end users, by applying different numeric weights, penalties, and exception rules. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
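The averted-risk comparison described above can be sketched as follows; the PEC and PNEC inputs are hypothetical placeholders chosen only to reproduce the reported orders of magnitude (about 10−5 for methyl gluceth-20 and 10−7 for glycerin). With these rounded values the reduction factor comes out at 100-fold rather than the 207-fold obtained from the paper's unrounded exposure estimates.

# Illustrative sketch of the averted-risk calculation; inputs are hypothetical placeholders.

def risk_ratio(pec_mg_per_l: float, pnec_mg_per_l: float) -> float:
    """Risk ratio as predicted environmental concentration over the no-effect benchmark."""
    return pec_mg_per_l / pnec_mg_per_l

rr_methyl_gluceth_20 = risk_ratio(pec_mg_per_l=1e-3, pnec_mg_per_l=1e2)  # ~1e-5, averted risk
rr_glycerin = risk_ratio(pec_mg_per_l=1e-5, pnec_mg_per_l=1e2)           # ~1e-7, incurred risk

print(f"Risk reduction factor: {rr_methyl_gluceth_20 / rr_glycerin:.0f}x")  # 100x with these placeholders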
Personal care products (PCPs) are used globally wherever there is human activity, and are typically emitted to the environment in wastewater under normal use.Regulators and scientists have thus begun focusing on PCPs as a potential water quality concern.PCP manufacturers are increasingly motivated to leverage available data on potential exposure and hazards to the natural environment to guide sustainable decision making and minimize the potential for products to cause adverse environmental effects.We describe a novel algorithm to: select data on environmental exposure and hazard potential relevant to PCP ingredients after use, evaluate and interpret those data, and translate the information to a single numeric score usable by non-specialists to incorporate environmental protection goals into PCP sustainability decision making.The algorithm has been implemented as the Global Aquatic Ingredient Assessment (GAIA) database tool, incorporating information on environmental persistence, bioaccumulation potential, aquatic toxicity of the parent compound and degradants, excess toxicity from ecological endocrine disruption effects, and the potential for producing photochemical smog.GAIA quantifies environmental hazard potential using an algorithm allowing it to be used as a risk surrogate for PCP product use.GAIA data are also used in environmental risk assessments with product-specific exposure data as a final check during product reformulation or as a post hoc measure of progress toward corporate sustainability goals.Scoring results are demonstrated for eight representative substances: benzophenone-4, ethylene diamine tetraacetate salts, ethylhexylglycerin, menthol, methyl salicylate, musk xylene, phenoxyethanol, and zinc oxide.Case studies show how GAIA scores, used as a front-line decision tool, led to environmental risk reductions in two cases: a newly developed surfactant and a reformulated cleansing product.
the geometries seen in the Achnashellach Culmination if only a single fault was active at any given time.Alternative valid models might be identified that result from even more complex thrusting sequences.As mentioned previously, simultaneous movement on a pair of ramp-flat faults could result in complex geometries seen throughout the Moine Thrust Belt, including isolated horses.This simultaneous movement therefore could be an alternative model to out of sequence thrusting such as that proposed by Holdsworth et al. for complex regions of the Moine Thrust Belt, and the sequential detachment activation model we propose here.Indeed the geometry of the isolated horse of Basal Quartzite seen on Sgorr Ruadh is very similar to the model results described by Pavlis.This, however, cannot prove simultaneous fault activity, only that some form of break-back thrusting has occurred.Cross-section restoration is a powerful tool in validating and, in this case, deriving structural models, however it can only distinguish between valid and invalid interpretations, it cannot resolve unique solutions.In order to determine a valid structural evolution of a region it is necessary to use all data available.Creation of a single cross-section may provide a solution that appears to balance in two dimensions but this needs to be tested in the rest of the study area to check it is valid along strike.Length imbalances determined from detailed cross-section restoration in our study area indicate the location of multiple detachment horizons.This suggests a more complex structural evolution model than originally derived in our initial cross-section.Other observations such as map cut-off patterns can help to reinforce the new structural model.The original cross-section must be adjusted to comply with the new model, and field observations.Through an iterative modelling process, combining cross-section construction, with section balancing and detailed observations of field geometries we propose a new structural model for the Achnashellach Culmination.The structural model we propose involves the sequential activation of two foreland propagating thrust systems, each splaying off detachments at different stratigraphic levels.The workflow we outline should be applicable to other situations.We suggest that even small mismatches in bed-lengths should be confronted when restoring cross-sections, rather than accepting these as unavoidable errors.In examples where pre-existing stratigraphic variations and distributed strains are more strongly developed than in our case, complex thrust sequences might have gone unrecognised.
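The bed-length test that drives the iterative workflow described above reduces to a simple comparison between the restored lengths of two horizons: if one unit restores significantly shorter than the other, the shortfall points to displacement carried on an additional detachment. The unit names, lengths and tolerance below are hypothetical.

# Minimal sketch of the line-length balancing check; values are hypothetical.

def length_mismatch(restored_len_km: float, reference_len_km: float) -> float:
    """Fractional bed-length difference relative to a reference horizon."""
    return (reference_len_km - restored_len_km) / reference_len_km

restored_lengths_km = {"upper unit": 18.4, "lower unit": 21.7}  # hypothetical restored lengths
reference = restored_lengths_km["lower unit"]

for unit, length in restored_lengths_km.items():
    mismatch = length_mismatch(length, reference)
    if abs(mismatch) > 0.05:  # assumed 5% tolerance on section balance
        print(f"{unit}: {mismatch:+.1%} imbalance -> consider a second detachment horizon")
    else:
        print(f"{unit}: balances within tolerance")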
Many thrust systems, including parts of the Moine Thrust Belt, are commonly interpreted as rather simple imbricate fans, splaying from a master detachment (floor thrust) at depth.We use field observations and geological map data to construct cross-sections through the Achnashellach Culmination, southern Moine Thrust Belt, Northwest Scotland, to test this interpretation.Initially cross-sections are constructed by assuming a single lower detachment; line length imbalances and thrust trajectory mismatches between deformed and restored-state sections indicate an invalid model.Significant differences in horizon lengths between two rock units are seen, indicating the position of a second detachment which, when incorporated into the deformed-state cross-section creates a valid structural model.The presence of this second detachment accounts for complex geometries seen at outcrop, and indicates that the Achnashellach Culmination is likely to have formed by the sequential activation of two detachment horizons.This new structural model has been derived using an iterative workflow involving cross-section construction, section balancing and integration of field observations from across the study area, ensuring model validity in three dimensions.This workflow is applicable to other systems in general.© 2014 The Authors.
Cleaning is one of the methods at the disposal of domestic cooks for curtailing the dispersal of micro-organisms in varying processes of managing and preparing food at home.Domestic cleaning must be understood as a multi-facetted practice that is stimulated by a range of priorities and concerns, involving a complexity of materials and cultural understandings.At least three cleaning concerns may be identified that relate to food preparation.Cleaning for aesthetic considerations that return the domestic environment to a desired state of visual appearance has been, and is connected with, a broad range of social and cultural concerns.When preparing food, a second rationale for cleaning is dealing with the visible dirt that is brought into the kitchen with food items, or that enters the kitchen in other ways.The removal of grit and other ‘dirty’ materials from foods before cooking and consumption, and the prevention of foods gathering visible dirt once in the kitchen, is closely guided by experiences of disgust that are associated with understandings of what materials can and should not be incorporated into the human body.Cleaning is also done for hygiene considerations, when action is undertaken to prevent the spreading of harmful micro-organisms in ways that may jeopardise the health of family members.Whilst micro-organisms have a very real material presence, one of the challenges for dealing with these entities in the domestic environment is that they are invisible to the human eye.A range of technologies are used by professionals to overcome this challenge, such as bacterial enumeration after culturing, ATP tests, visual detection under UV light and protein tests.However, consumers making food in their kitchens do not have access to these monitoring devices and procedures.As microbial pathogens are invisible to the human eye, yet pose a health hazard, of substantial interest is how consumers understand these entities as they engage with food.Is understanding of pathogens displayed in practices that curtail their spreading?,Is awareness of the potential presence of pathogens extrapolated onto specific materials that are perceptible, whether through vision or touch?,And does the presence of such materials become inducement to take care and to clean up?,From a food safety perspective, kitchen hygiene is about keeping the numbers of pathogens at safe levels.The kitchen environment potentially contains large numbers of bacteria.In most cases these bacteria are harmless, and thus unproblematic from a food safety point of view.However, the pathogens Listeria monocytogenes, Salmonella and Campylobacter have been found in the kitchen environment.Among these, Salmonella and Campylobacter, which may be present on various raw foods, can cause illness even at low numbers.When transferred from contaminated equipment and surfaces to food, they represent a risk, and several studies have shown that consumers may use contaminated surfaces/equipment when preparing food.As discussed above, consumers have limited possibilities for knowing when their kitchen surfaces are contaminated with pathogens, and the question is therefore if the priorities that initiate cleaning are sufficient to reduce risk.The kitchen environment may also contain a range of different types of soil.Some types of soil and dirt may be harmful as they may contain pathogens, but not all.How consumers perceive different types of soils and how it affects their motivation to clean is not known and to our knowledge there have been no studies 
comparing visual evaluation of the cleanliness of kitchen surfaces with bacterial numbers. Elsewhere in the food sector, studies comparing visual inspection of food production, retail and restaurant premises with the respective microbial counts have been conducted, and the majority report poor or no correlation. However, one study from the UK reports a correlation between hygiene ratings and microbial contamination. There are reports showing that visual evaluation of the hygienic level is less sensitive than ATP measurement and bacterial enumeration in food industry and hospital settings. However, bacterial levels are often several orders of magnitude higher in domestic kitchens than in the food industry and hospitals. Also, the type of soiling and the surface materials may differ between domestic kitchens, the food industry and hospitals. Thus the conclusions from commercial food and hospital settings may not be valid for domestic kitchens. In the present transdisciplinary work, motivations for kitchen surface cleaning were examined in a survey among consumers in 10 European countries. In addition, consumers' ability to see dried food soils, as well as the correlation between visible dirt/soil and bacterial numbers on kitchen surfaces, was studied in practical experiments. Finally, survival of the pathogenic bacteria Salmonella and Campylobacter was tested in drying and dried food soils. A household online survey on food safety was conducted between December 2018 and April 2019 with a total of 9966 participants from 10 European countries: Denmark, France, Germany, Greece, Hungary, Norway, Portugal, Romania, Spain and the UK. The data collection was subcontracted to a professional survey provider. The survey contained a question on household behaviours related to kitchen hygiene: “In general: when would you normally clean your kitchen countertop or other surfaces where you do your food preparation?”. The following answers were available: Before preparing food; After preparing food; When they are sticky; When they are dirty; When in contact with something dirty; Before receiving visitors; Other, specify; and None of the above. The levels of food soils, dried on cutting boards and countertop surfaces, that could be visually detected by consumers in their own kitchens were tested. Poultry soil was made by adding 4.8 l of dH2O to 1200 g of raw minced poultry meat, followed by 1 min of stomacher treatment and collection of the suspension passing through the stomacher filter. Lettuce soil was made by cutting two iceberg lettuce heads with a food processor for 1 min, and
Cleaning is a method at the disposal of domestic cooks for curtailing the dispersal of foodborne pathogens in the process of preparing food. The social science research included analysis of a consumer survey in 10 European countries in which 9966 respondents were asked about their motivations for cleaning in the kitchen. This paper also draws on three microbiological tests.
of scores for the three soil samples. The number of positive differences and differences of zero or less was counted for groups using the pivot table function. Fisher's exact test was used to calculate the statistical significance of differences between groups of consumers and material types. For the consumer tests with swabbing of kitchen surfaces, the bacterial numbers were log transformed and the scores for visual cleanliness reported as categorical data from 1 to 4. The statistical significance of differences between groups was calculated using the general linear model in Minitab. For the inactivation tests, bacterial numbers were log transformed and mean values and the standard error of the mean were calculated in Minitab. One-way ANOVA was used to calculate statistical differences between means (an illustrative sketch of these kinds of analyses is given below). Forty-three percent of the 9966 participating consumers in the 10 different countries reported that they clean food preparation surfaces when they are dirty. This was the third most commonly reported behaviour, following cleaning after food preparation and before preparation of food. Further analyses showed differences between age groups, with the age group 16–25 years more often reporting that they clean when the kitchen surface is dirty compared to the consumer group of >65 years. On the other hand, the >65 years consumer group reported that they more often clean before food preparation, compared to the 16–25 years group. In the survey, multiple answers were allowed, and a small proportion of consumers across the countries reported cleaning kitchen surfaces only when they are dirty. This suggests that motivations to clean surfaces are diverse. We continued by dividing the options for cleaning motivation into three categories, which we call ‘routine cleaning’; ‘stimulated cleaning’, based on perception during food handling; and cleaning for social reasons. Routine cleaning was very common. Almost half cleaned both before and after preparing food, and more consumers cleaned only after than only before preparing food. Overall, the responses from the consumers indicate that most consumers have several motivations for cleaning. There may be a change towards cleaning being more motivated by sensory perception during food handling, as suggested by Martens, as younger persons' cleaning was motivated more in this way compared with older consumers. Stimulated cleaning was common, and there are also other studies indicating that consumers may base their cleaning frequency on an evaluation of the cleanliness of surfaces. In an observational study among 10 households in England, consumers reported that most home cleaning was motivated by the sight of dirt. In a large survey of 12239 consumers from 12 countries world-wide, it was found that surface cleaning was linked to having a cleaning routine and to the perception that one is living in a dirty environment, and that women and children clean more frequently than men. As some consumers clean their food preparation surfaces based on visual cleanliness, it can be asked whether it is safe or sufficient to clean only when a surface is visually dirty. To answer this question, knowledge about the sensitivity of visual detection of food soils, as well as whether the visual perception of cleanliness is linked to bacterial numbers, is necessary. We wanted to test to what degree the visibility of dirt depends on the type of soil, considered in conjunction with the surface material. As expected, the higher the concentration of food in the soil, the more likely consumers were able to detect it
visually. Similar scores were obtained for the lowest two dilutions as for water. For the two highest concentrations, about 40% and >60%, respectively, reported that they could see the soil applied on the surface. Differences in total scores given for 1% soil were not significant. However, soil was more often detected when applied on smooth surfaces compared with rougher surfaces. Soils were more easily detected on countertops than on cutting boards, but this was likely because the majority of countertops were made of laminate or stone and the majority of cutting boards were of wood or plastic. All consumers detected 10% egg-based soil, which appeared opaque/white after drying on all surfaces. The colour of the soil and of the surface may influence detection; for example, green salad was difficult to spot on a green cutting board but easier to spot on a white cutting board. In addition, particle size may play a role: soils with particles may be more easily spotted visually than fully dissolved soils. As the density of the different food soils varied, it is not possible to compare the detection limit for the different soils directly. For consumers who report visible dirtiness as a motivation for cleaning, the cleaning frequency will increase with a surface/material on which dirt is easily detectable. To our knowledge this is not a parameter included in standards for hygienic design for the food industry, and we are not aware of standards for hygienic design for products intended for home kitchens. In a popularised article on hygienic design, Moerman states that walls and ceilings should be light-coloured because that permits fast detection of dirt and soil on their surfaces. In the literature on the history of kitchen and domestic design, for example in Forty, there is much discussion of the introduction of artificial interior design surfaces with the potential to demonstrate clean lines and cleanliness, e.g. formica and linoleum. The results from the present study indicate that it seems to be difficult to detect dirt/soil on wood. Wood has a long history of use as a material for food contact surfaces, but it is disputed whether its porosity is positive or negative for food safety, and the use of wood as a food contact material is consequently limited in the food industry. The bacteria on the surfaces died over time, but
Secondly, 15 Norwegian consumers tested whether they could visually detect different types of food soils as these dried on kitchen surfaces. Cleaning food preparation surfaces “after food preparation” (73%), “before preparing food” (53%) and “when they are dirty” (43%) were the three most common self-reported behaviours. Routine was the most common motivation to clean, but this was age dependent. Visual detection of soils was dependent on the type and concentration of the food soils and the material of the surface; the soils were more easily detected on laminate surfaces than on plastic and wood.
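The statistical comparisons described above can be illustrated with a hedged sketch (not the authors' Minitab analysis; the counts and CFU values below are hypothetical placeholders): a Fisher's exact test on detection counts for two surface types, and a one-way ANOVA of log10-transformed bacterial counts grouped by visual-cleanliness score.

```python
# Illustrative sketch only: Fisher's exact test on visual-detection counts and a
# one-way ANOVA of log10 bacterial counts grouped by visual-cleanliness score.
# All numbers below are hypothetical placeholders, not data from the study.
import numpy as np
from scipy import stats

# 2x2 table: rows = surface type (smooth, rough); columns = soil detected / not detected
detection_table = np.array([[18, 4],
                            [9, 13]])
odds_ratio, p_fisher = stats.fisher_exact(detection_table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")

# Bacterial counts (CFU per swab) grouped by visual-cleanliness score (1 = clean ... 4 = dirty)
cfu_by_score = {
    1: [2.0e3, 5.5e2, 1.2e4],
    2: [8.0e2, 3.1e4, 6.0e3],
    3: [1.5e3, 2.2e3, 9.0e4],
    4: [4.0e2, 7.5e3, 2.0e3],
}
log_groups = [np.log10(v) for v in cfu_by_score.values()]
f_stat, p_anova = stats.f_oneway(*log_groups)
print(f"One-way ANOVA on log10 CFU vs. visual score: F = {f_stat:.2f}, p = {p_anova:.3f}")
```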
levels to cause a food safety threat from dried poultry and lettuce soils. Therefore, cleaning motivated by stickiness or visible dirt may reduce the risk of foodborne infection, but should be combined with a habit of cleaning before and after preparing food. Although the presence of bacteria is not in itself necessarily a health threat, one could argue that bacterial pathogens are seldom introduced into the kitchen as pure cultures, but together with soil and other bacteria, and that high bacterial numbers indicate a niche where pathogens may have been introduced and could survive. As our results suggest, visible soil is not a universal indicator of a niche with high bacterial numbers or pathogens, and, conversely, lack of visible soil is not an indicator of low bacterial numbers or absence of pathogens. Therefore, a narrow approach aiming to keep the kitchen aesthetically clean is not an effective strategy to reduce the risk of foodborne infection. On the other hand, choosing surface materials that are light, smooth and signal cleanliness will aid the domestic cook in spotting spills. From a food safety point of view, and in an effort to reduce the time and chemicals used for cleaning, motivation for cleaning linked to avoiding the spread of invisible numbers of pathogens can be seen as ideal. However, such a targeted hygiene approach demands high expertise on the occurrence of pathogens in different foods, how they spread, how and when to break the chain of infection, and the vulnerability of family members and guests. Changing people's habitual practices is a big challenge, and information and education should start before habits are established. A highly educated population would also be more robust towards new threats. However, one should not underestimate the role of other priorities in life that could come into conflict with a targeted hygiene approach, such as beliefs connected to the hygiene hypothesis and the harmfulness of cleaning chemicals, and practices such as selecting surface materials that are not easy to keep clean. Cleaning in the kitchen, and of countertops in particular, usually takes place in routine ways before and after food preparation, whilst also being stimulated by sensory perception through vision and touch. Cleaning that is stimulated when cooking suggests that domestic cooks pay attention and make decisions during food preparation about when the need to clean arises. Whereas the survey results suggest that ‘stimulated cleaning’ is not as common as ‘routine cleaning’, it is possible that the differences are not so distinct, as research has shown that domestic practitioners bracket the practices they engage in sequentially, such that cooking and cleaning follow one another rather than co-occur. Domestic cooks are therefore less likely to think about cleaning whilst preparing food as ‘cleaning’. From a food safety perspective, stimulated cleaning that relies on sensory input while food is being prepared is especially interesting, as it suggests cooks do work to prevent cross-contamination. To develop understanding of how effective reliance on visual input is for hygienic cleaning in the domestic kitchen, during and after food preparation, and how Campylobacter and Salmonella survive in food soils, three experiments were conducted. There was little correlation between visual cleanliness and the level of bacteria in the kitchen environment in general. For food soils specifically, consumers could detect relatively low concentrations on smooth surfaces, but not on rough
surfaces. While Campylobacter was inactivated quite rapidly in food spills during drying, Salmonella could survive for at least a week in dried food spills. In rare events where highly contaminated foods are introduced to the kitchen, the consumer would not necessarily be able to spot spills with high numbers of pathogens. In conclusion, all of the motivations and habits for cleaning reported by European consumers will contribute to reducing the risk of cross-contamination from food in the kitchen. A combination of established cleaning habits before and after making food, together with cleaning of surfaces that have been in contact with raw foods, should be promoted. The latter strategy requires a good understanding of how microbes spread, or the use of smooth kitchen materials on which low levels of soil can be detected by eye. Lydia Martens: Conceptualization, Writing - original draft, Writing - review & editing, Funding acquisition. Paula Teixeira: Conceptualization, Methodology, Investigation, Writing - original draft, Writing - review & editing, Project administration, Funding acquisition. Vânia B. Ferreira: Conceptualization, Methodology, Investigation, Writing - original draft, Funding acquisition. Rui Maia: Formal analysis, Visualization, Writing - original draft. Tove Maugesten: Conceptualization, Methodology, Visualization, Writing - original draft, Writing - review & editing, Project administration, Funding acquisition, Investigation. Solveig Langsrud: Conceptualization, Methodology, Formal analysis, Writing - original draft, Writing - review & editing, Project administration, Funding acquisition.
The observation of visible dirt/soil ‘in the wrong place’ operates as one of the stimuli for action. This paper makes a transdisciplinary contribution to understandings of cleaning as a practice for ensuring safety in the kitchen, and it is mainly focused on the (in)visibility of soil or dirt. Finally, the survival of Campylobacter and Salmonella in the same soil types was tested under lab conditions as the soil dried out. There was low correlation between visual detection of dirt/soil and bacterial enumeration. Campylobacter died rapidly, while Salmonella survived for at least one week in food soils drying on a countertop laminate surface. The presence of food soils at concentrations that can be detected visually protected Salmonella against drying. In conclusion, selecting kitchen surface materials on which soil/dirt can easily be detected visually may motivate consumers to clean and will reduce risk, but establishing a habit of cleaning surfaces soon after food preparation is also important from a food safety perspective.
Infections caused by multidrug-resistant bacteria are currently an important problem worldwide. According to data recently published by the WHO, lower respiratory infections are the third cause of death in the world with around 3.2 million deaths per year, a number higher than that attributed to AIDS or diabetes mellitus. It is therefore important to solve this issue, although the perspectives for the future are not very optimistic. During the last 30 years an enormous increase has been observed in superbugs isolated in the clinical setting, especially from the group called ESKAPE, which show high resistance to all the antibacterial agents available. We will focus on Acinetobacter baumannii, the pathogen colloquially called “iraquibacter” for its emergence in the Iraq war. It is a Gram-negative coccobacillus and normally affects people with a compromised immune system, such as patients in the intensive care unit. Together with Escherichia coli and P. aeruginosa, A. baumannii is among the most common causes of nosocomial infections by Gram-negative bacilli. The options to treat infections caused by this pathogen are diminishing since pan-drug-resistant strains have been isolated in several hospitals. The last option to treat these infections is colistin, which has been used in spite of its nephrotoxic effects. The evolution of the resistance of A. baumannii clinical isolates has been established by comparing studies performed over different years, with the percentage of resistance to imipenem being 3% in 1993 and increasing up to 70% in 2007. The same effect was observed with quinolones, with an increase from 30 to 97% over the same period of time. In Spain the same evolution has been observed with carbapenems; in 2001 the percentage of resistance was around 45%, rising to more than 80% 10 years later. Taking this scenario into account, there is an urgent need for new options to fight against this pathogen. One possible option is the use of antimicrobial peptides, and especially peptides isolated from a natural source. One of the main drawbacks of using peptides as antimicrobial agents is their low stability or half-life in human serum due to the action of peptidases and proteases present in the human body; however, there are several ways to increase their stability, such as using fluorinated peptides. One way to circumvent this effect is to study the susceptible points of the peptide and try to enhance the stability by protecting the most protease-labile amide bonds, while at the same time maintaining the activity of the original compound. Another point regarding the use of antimicrobial peptides is the mechanism of action. There are several mechanisms of action for antimicrobial peptides, although the global positive charge of most of the peptides leads to a mechanism of action involving the membrane of the bacteria. AMPs have the ability to kill bacteria by creating pores in the membrane, by acting as detergents, or by the carpet mechanism. We have previously reported the activity of different peptides against colistin-susceptible and colistin-resistant A. baumannii clinical isolates, showing that mastoparan, a wasp-derived peptide, has good in vitro activity against both colistin-susceptible and colistin-resistant A.
baumannii. Therefore, the aim of this manuscript was to study the stability of mastoparan and some of its analogues, as well as to elucidate the mechanism of action of these peptides. Given that our objective was to improve the stability of mastoparan and to design analogues that enhance this stability, a human and mouse serum stability assay with the original mastoparan was performed. The results with the two serums were essentially identical. Surprisingly, at 6 h the intact mass of mastoparan was still observed by MALDI-TOF. Linear peptides are normally very attractive targets for peptidases and proteases. The most abundant peak observed by MALDI-TOF, apart from the original mass of our peptide, corresponded to the deletion of the isoleucine present at the N-terminus, which generated a peptide with a mass of 1366 Da. It was not possible to calculate the half-life of the peptide by HPLC due to the near coelution of the original peptide, mastoparan, and one of the resulting products after the incubation with serum. Neither could the half-life be calculated using an isocratic gradient, especially in the presence of a low quantity of mastoparan. The peptide resulting from the incubation with human serum was synthesized in order to test its activity against several colistin-resistant A. baumannii strains. However, the MIC values for this peptide increased to very high levels. Taking into account the information obtained in the stability assay and the fact that the resulting peptide is not active, ten peptides, in addition to mastoparan and the peptide resulting from the action of proteases and peptidases, were synthesized using solid-phase peptide synthesis. They were obtained with a purity higher than 95%. Characterization of the peptides by HPLC is provided in the supporting information. In order to enhance the stability of the peptide in human serum, several options were adopted. The first option was to introduce d-amino acids, resistant to proteases and peptidases, at the susceptible positions. Therefore the peptide with a d-isoleucine, another with a d-asparagine, and the peptide with both d-isoleucine and d-asparagine were synthesized. Another common strategy followed when designing new peptidic drugs is to synthesize the retro, enantio and retroenantio versions; these three versions of mastoparan have been found to be less cytotoxic than mastoparan itself. The last strategy followed was to modify both the C- and the N-terminus of the original mastoparan, without perturbing the original sequence of the peptide. Modifications at the C-terminus were performed using a special resin, thereby
Abstract The treatment of some infectious diseases can currently be very challenging since the spread of multi-, extended- or pan-resistant bacteria has increased considerably over time. On the other hand, the number of new antibiotics approved by the FDA has decreased drastically over the last 30 years. The main objective of this study was to investigate the activity of wasp peptides, specifically mastoparan and some of its derivatives, against extended-resistant Acinetobacter baumannii. We optimized the stability of mastoparan in human serum since the species obtained after the action of the enzymes present in human serum is not active.
obtaining an extra positive charge by the addition of an ethylamine moiety to the amide at the C-terminus of the mastoparan and retro-mastoparan sequences. The modifications at the N-terminus were performed by acetylating the free amine group, whereby the action of the enzymes present in the serum is diminished by the steric hindrance generated by the acetyl group; however, the peptide loses the positive charge present at the N-terminus. Another modification of the N-terminus was made by adding a guanidinium group, which also generates steric hindrance while maintaining the positive charge at the N-terminus. These peptides were tested against four extended-resistant A. baumannii strains. The resistance profiles of these strains show that all of them were highly resistant to colistin. To test the increase in stability, we also performed stability assays. In terms of activity, the only peptides that maintained activity were peptides 8, 9 and 12, with the same MIC values against A. baumannii as mastoparan. Comparing peptides 11 and 12 provided valuable information regarding the importance of conserving the positive charge at the N-terminus of these two peptides, with the MIC of the latter peptide increasing 8- and 16-fold upon removal of the positive charge alone. It is also important to highlight that the change of only a single amino acid in the original sequence of mastoparan can produce a marked decrease in the antimicrobial activity of the peptide. It was also of note that most of the peptides synthesized showed high stability in human serum compared to mastoparan. The peptides built with all d-amino acids, and peptide 2, showed very high stability, with values of more than 90% after 24 h of incubation with human serum. Other peptides, such as peptides 3, 4, 5, 11 and 12, also showed an increase in stability, reaching values of between 70 and 80% after 24 h. These values were calculated by integrating the peaks obtained in the HPLC spectra at 0 and 24 h (a minimal sketch of this calculation is given below). Other peptides synthesized using l-amino acids at the most susceptible position remained unprotected. Values similar to those of mastoparan were observed for peptide 9, with even lower values found for peptides 6 and 10, at 12 and 8.5% at 2 h, respectively. With the synthesis of all these peptides we found that most had high stability after 24 h in the presence of human serum, and some had the same activity as mastoparan, thereby suggesting that they may be useful in the treatment of infections caused by extended-drug-resistant A.
baumannii strains. We have also found that the presence of both an isoleucine and a positive charge at the N-terminus is very important for the activity of the compound, according to the results obtained with the synthesized analogues. It is also possible to observe that introducing just one d-amino acid into the sequence produces a significant decrease in the activity of the peptides, whereas when all the residues are in the same l- or d-form the activity is unchanged, as seen for mastoparan and the retro version compared with their enantiomers. This effect could be related to the loss of helicity when a residue of the opposite configuration is introduced, and therefore to a decrease in activity. Another important feature to take into account before starting in vivo assays is cytotoxicity; therefore MTT assays of the most active compounds were performed using HeLa cells. Most of the active peptides showed similar cytotoxicity values; furthermore, hemolysis experiments were performed using mastoparan and the enantiomer version, in which low hemolysis was observed at MIC concentrations. On review of the scientific literature, few effective peptide compounds have been described against colistin-resistant bacteria. Rodríguez-Hernández and colleagues reported an in vitro activity of cecropinA-melittin against A. baumannii similar to that of the best peptides described in our study. However, optimization of cecropinA-melittin was not performed. They also tested the in vivo activity of this peptide and only observed a local effect due to low in vivo stability. Another peptide that has been described is api88, which was found to be active against the most common Gram-negative pathogens such as E. coli, P. aeruginosa, K. pneumoniae or A. baumannii, with MIC values below 1.8 μM. It was optimized in terms of activity and afterwards tested in vivo, showing a good in vivo response against E. coli. Two peptides isolated from frog-skin secretions have been tested against both colistin-susceptible and -resistant Acinetobacter species and showed similar values for all the strains tested. Nonetheless, the cytotoxicity and stability of these peptides were not optimized, and therefore their activity in vivo may actually be very low. Leakage assays were performed using two different membrane mimetics: a negatively charged membrane mimicking bacterial membranes, and a neutral membrane. The negatively charged membranes are composed of phosphatidylethanolamine, cardiolipin and phosphatidylglycerol (63:14:23). Although this is the composition of the P. aeruginosa membrane, its high similarity to that of A. baumannii allows it to also be used. The concentrations of peptides used to test their ability to release carboxyfluorescein from the liposomes were 0.1, 0.25, 0.5, 1, 10 and 50 μM. Fig. 2 and Table S4-A show the percentage of release for each peptide at the concentrations mentioned above. No release was observed at concentrations below 1 μM. At 10 μM some differences were observed between the peptides, and most of the peptides active against highly resistant strains of A. baumannii showed a higher ability to release the fluorophore from the negative liposomes. Peptide 8 reached 87% of release followed by mastoparan and peptide 12, with 72
Mastoparan analogues (guanidinylated at the N-terminus, the enantiomeric version, and mastoparan with an extra positive charge at the C-terminus) showed the same activity against Acinetobacter baumannii as the original peptide (2.7 mM) and maintained their stability for more than 24 h in the presence of human serum, in contrast to the original compound. The mechanism of action of all the peptides was investigated using a leakage assay. It was shown that mastoparan and the above-mentioned analogues were those that released the most carboxyfluorescein. These results suggest that several analogues of mastoparan could be good candidates in the battle against highly resistant A. baumannii infections since they showed good activity and high stability.
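As a minimal illustration of how the serum-stability percentages mentioned above can be obtained from the integrated HPLC peak areas of the parent peptide at 0 and 24 h (the peak-area values below are hypothetical placeholders, not data from the study):

```python
# Minimal sketch: percent intact peptide after serum incubation, derived from the
# integrated HPLC peak area of the parent peptide at t = 0 h and t = 24 h.
# Peak-area values are hypothetical placeholders.
def percent_intact(area_t: float, area_0: float) -> float:
    """Remaining intact peptide (%) relative to the t = 0 h peak area."""
    return 100.0 * area_t / area_0

area_0h, area_24h = 1.45e6, 1.32e6   # arbitrary integration units
print(f"Stability after 24 h: {percent_intact(area_24h, area_0h):.1f} %")
```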
other two peptides were synthesized using 1,2-diaminoethane trityl resin from Novabiochem. The coupling reagents used were: 2-(1H-benzotriazol-1-yl)-1,1,3,3-tetramethyluronium tetrafluoroborate (TBTU) from Albatros Chem, Inc.; trifluoroacetic acid (TFA) was from Scharlab S.L. Piperidine, dimethylformamide (DMF), dichloromethane (DCM) and acetonitrile were from SDS; N,N-diisopropylethylamine (DIEA) was obtained from Merck and triisopropylsilane (TIS) was from Fluka. Peptides were synthesized by solid-phase peptide synthesis using the 9-fluorenylmethoxycarbonyl/tert-butyl strategy. Nα-Fmoc-protected amino acids, TBTU and DIEA were used. The Fmoc protecting group was cleaved by treatment with a solution of 20% piperidine in DMF. For the acetylated peptides, 50 eq of Ac2O and 50 eq of DIEA in DCM were used. For guanidination, 5 eq of 1,3-di-Boc-guanidine and 5 eq of triethylamine in DCM were used. Peptides were cleaved from both resins using 95% TFA, 2.5% TIS and 2.5% water for 3 h. As the peptides were cleaved using TFA and the HPLC solvents also contain TFA, this was removed by lyophilization; the peptides are all in their trifluoroacetate form as counter-ion. The peptides were analysed at λ = 220 nm by analytical HPLC. Flow rate = 15 mL/min; solvents: A = 0.1% trifluoroacetic acid in water, and B = 0.05% trifluoroacetic acid in acetonitrile. Peptides were characterized by MALDI-TOF mass spectrometry and by high-resolution ESI-MS. The peptides were incubated at 37 °C in the presence of 100% human serum. At different times, 200 μL aliquots were extracted and serum proteins were precipitated by the addition of 400 μL of acetonitrile at 4 °C to stop degradation. After 30 min at 4 °C, the samples were centrifuged at 10,000 rpm for 10 min at 4 °C. The supernatant was analysed by HPLC. The fractions were also analysed by MALDI-TOF mass spectrometry. Aliquots containing the appropriate amount of lipid in chloroform/methanol were placed in a test tube, the solvents were removed by evaporation under a stream of O2-free nitrogen, and finally traces of solvents were eliminated under vacuum in the dark for more than 3 h.
Afterwards, 1 mL of buffer containing 10 mM HEPES, 100 mM NaCl and 0.1 mM EDTA, pH 7.4, together with carboxyfluorescein at a concentration of 40 mM, was added, and multilamellar vesicles were obtained. Large unilamellar vesicles with a mean diameter of 200 nm were prepared from the multilamellar vesicles with the LiposoFast device from Avestin, Inc., using polycarbonate filters with a pore size of 0.2 μm. Breakdown of the vesicle membrane led to content leakage, i.e., carboxyfluorescein fluorescence. Non-encapsulated carboxyfluorescein was separated from the vesicle suspension on a Sephadex G-25 filtration column eluted with buffer containing 10 mM HEPES, 150 mM NaCl and 0.1 mM EDTA, pH 7.4. Leakage of intraliposomal carboxyfluorescein was assayed by treating the probe-loaded liposomes with the corresponding amount of peptide in Costar 3797 round-bottom 96-well plates, with each well containing a final volume of 100 μl. The microtitre plate was incubated at RT for 1 h to induce dye leakage. Leakage was measured at various peptide concentrations. Changes in fluorescence intensity were recorded using the FL600 fluorescence microplate reader with excitation and emission wavelengths set at 492 and 517 nm, respectively. Total release was achieved by adding Triton X-100 to a final concentration of 1% v/v to the microtitre plates. Fluorescence measurements were initially made with the probe-loaded liposomes, followed by the addition of the peptide and, finally, the addition of Triton X-100 to obtain 100% leakage. The results were expressed as the percentage of carboxyfluorescein released relative to the positive control. HeLa cells were used for these experiments. Their doubling time and the linear absorbance at 570 nm were taken into account for seeding purposes. Cell viability in the presence of peptides was tested using a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. For each assay, 5 × 10³ HeLa cells were seeded on a 96-well plate and cultured for 24 h. Samples were added at concentrations ranging from 0.1 μM to 500 μM depending on the peptide. Cells were incubated for 24 h at 37 °C under a 5% CO2 atmosphere. After 20 h, the medium with compounds was removed, and MTT was added to a final concentration of 0.5 mg/mL. MTT was incubated for a further 4 h, and the medium was then discarded. DMSO was added to dissolve the formazan product, and absorbance was measured at 570 nm after 15 min. Cell viability percentages were calculated by dividing the absorbance value of cells treated with a given compound by the absorbance of untreated cells. Bacteria were grown in LB medium and, in mid-log exponential phase, were incubated at MIC concentrations with mastoparan and the mastoparan enantiomer for 1 h at 37 °C. A control without peptide was also included. After the incubation, centrifugation at 3500 rpm and 4 °C was performed. The pellets were then fixed for 1 h with 2% glutaraldehyde, washed three times with water and then post-fixed with 1% OsO4. The post-fixation positive stain was carried out with a 3% uranyl acetate aqueous solution for 1.5 h, after which a graded ethanol series was carried out in 15 min steps for dehydration purposes. The samples were embedded in an epoxy resin. A Tecnai Spirit microscope equipped with a LaB6 cathode was used. Images were acquired at 120 kV and room temperature with a 1376 × 1024 pixel CCD camera.
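A hedged sketch of the two normalisations described above, namely carboxyfluorescein release relative to the Triton X-100 (total lysis) control and MTT viability relative to untreated cells; all fluorescence and absorbance readings below are hypothetical:

```python
# Sketch of the two normalisations described above; all readings are hypothetical.

def percent_leakage(f_peptide: float, f_baseline: float, f_triton: float) -> float:
    """Carboxyfluorescein release (%) relative to total lysis with 1% Triton X-100."""
    return 100.0 * (f_peptide - f_baseline) / (f_triton - f_baseline)

def percent_viability(a570_treated: float, a570_untreated: float) -> float:
    """MTT viability (%): absorbance at 570 nm of treated vs. untreated cells."""
    return 100.0 * a570_treated / a570_untreated

# Example readings (arbitrary units)
print(f"Leakage:   {percent_leakage(f_peptide=820, f_baseline=110, f_triton=930):.0f} %")
print(f"Viability: {percent_viability(a570_treated=0.62, a570_untreated=0.78):.0f} %")
```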
Thus, 10 derivatives of mastoparan were synthesized.
FIB damage would be existent, a change in slope of the J-Δa curve at around that crack extension would be evident. Furthermore, Wurster et al. compared naturally cracked specimens with FIB-notched specimens and found no distinguishable difference in fracture toughness. Thus, no or only minor influence of FIB damage on the evaluated fracture parameters can be assumed. Fig. 11 shows the comparison of all tested specimens, where Ki is the fracture initiation toughness evaluated using the crack length over displacement data as proposed herein, KQ is the fracture toughness using the blunting line method and Kfracture is the stress intensity at the final unstable crack extension, which was only seen for samples 1 and 3 in this work. The indices “J” and “LEFM” describe whether the K value was calculated from a J-integral value or purely from a linear elastic fracture mechanics approach. When comparing the LEFM and EPFM K-values at initiation, it is obvious that the linear elastic approach underestimates the apparent fracture toughness, giving a rather conservative parameter for further engineering purposes. One major criterion for LEFM to hold is that the plastic zone size, rpl = (1/(6π))·(K/σy)², has to be considerably smaller than both the crack length a and the ligament size b0. Even considering the lowest measured KQ,LEFM = 3.4 MPa m1/2 and the highest σy = 1750 MPa, as well as plane strain conditions, the plastic zone results in rpl ≈ 0.25 μm, which is still a considerable part of the starting ligament size b0 ≈ 2 μm, rendering LEFM inapplicable for the given experiments. On the other hand, the validity criterion for JQ to qualify as JIC is that b0, B > 10 JQ/σy. Again, considering the extreme case of a maximal JQ = 119.3 J/m2 and a minimal σy = 650 MPa, the required value results in ~1.8 μm (a short numerical check of both criteria is sketched below). Not all three specimens fulfil that criterion, but the largest shortfall is ~100 nm. Thus, measuring macroscopic fracture parameters on the microscale with the present technique seems possible for the given material. For comparison, the results on 〈001〉-oriented single crystalline W from macroscopic fracture mechanics investigations by Riedle et al. and from microscopic experiments on similarly shaped cantilevers by Ast et al. are shown. Notably, the data of other works are not evaluated in exactly the same way as proposed herein, which should be taken into consideration. However, the literature data in Fig. 11 are depicted with the closest conformity to the presented data, showing an overall correspondence with KQ,J. However, as stated previously, given the fact that there is no consensus evaluation criterion for fracture mechanical testing on the micro-scale, it is preferable to evaluate the onset of fracture by physical measurements, e.g.
dynamic compliance measurement, as proposed herein. For the first time, a dynamic measurement of compliance was used to directly determine the crack extension of cantilever-shaped specimens inside an SEM. A detailed assessment of the setup reveals that, for a reliable compliance measurement, it is important to understand the frequency response of the equipment out of contact as well as in contact, even more so for systems that do not rely on ambient damping mechanisms through atmospheres, as in the present case. Furthermore, a general relationship between crack length and stiffness is proposed, based on numerical as well as experimental data. However, further detailed studies are necessary to find the limits for the applicability of this relationship. It has been demonstrated that in-situ microscale measurements of J-Δa curves by means of dynamic compliance methods are feasible and provide quantitative insight into the processes occurring locally around a crack tip during elastic-plastic fracture.
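The two size criteria quoted in the preceding paragraph can be checked numerically with a short sketch (input values are taken from the text; the small difference from the quoted ≈0.25 μm reflects rounding of K and σy):

```python
# Numerical check of the two size criteria discussed in the text. Input values are
# taken from the text; the output differs slightly from the quoted ~0.25 um
# because of rounding in K and sigma_y.
import math

# LEFM plastic-zone size (plane strain): r_pl = (1/(6*pi)) * (K / sigma_y)^2
K_Q = 3.4            # MPa*m^0.5, lowest measured K_Q(LEFM)
sigma_y = 1750.0     # MPa, highest yield strength
r_pl_m = (1.0 / (6.0 * math.pi)) * (K_Q / sigma_y) ** 2
print(f"r_pl ~ {r_pl_m * 1e6:.2f} um  (comparable to the ~2 um ligament -> LEFM invalid)")

# Size requirement for J_Q to qualify as J_IC: b0, B > 10 * J_Q / sigma_y
J_Q = 119.3          # J/m^2, maximal J_Q
sigma_y_pa = 650e6   # Pa, minimal yield strength
req_m = 10.0 * J_Q / sigma_y_pa
print(f"b0, B must exceed ~{req_m * 1e6:.1f} um")
```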
Measuring the local behaviour of a propagating crack in a quantitative manner has always been a challenge in the field of fracture mechanics.In-situ microcantilever testing inside a scanning electron microscope (SEM) is one of the most promising techniques for the investigation thereof.Here, for the first time we utilize a continuous measurement of the dynamic compliance in-situ to permit evaluation of the crack length.Microcantilever experiments have been performed on brittle single crystalline Si and nanocrystalline Fe to assess the stability of the setup, the applicability of the technique inside an SEM and to establish a correlation between stiffness and crack length.Subsequently, micromechanical fracture tests were performed on single crystalline, 〈001〉{001} oriented tungsten as a model material and continuous J-Δa curves were measured.The gathered data was evaluated with close relation to standardized fracture mechanics testing and showed an overall agreement with literature data.This novel possibility to measure J-Δa curve behaviour continuously and locally while also following the crack extension through in-situ imaging inside an SEM is generally applicable and will allow new insights in the crack propagation of modern materials.
and now.20 It also further reveals the cultural mechanisms behind processes of medicalization that promote ecologies of invisibility and silence in medical practice.21 In this paper we draw on historical and sociological understandings of gender as a category of analysis to situate these two populations in context and explain the temporal and ontological discrepancies between them as potential brain injury patients.22 The injuries sustained by boxers are delivered as a public spectacle, in a ring surrounded by eager fans. Pay-per-view matches, or those streamed via the Internet, broadcast these blows to potentially millions of viewers. These moments of contact are meant to be seen and celebrated—a performance of toughness and masculinity for the entire world to see. On the other hand, the physical violence sustained by women, delivered by a loved one, is more intimate than the modern term “intimate partner violence” might suggest. The classic stereotype of women covering their bruises with makeup hints at this containment of violence within the privacy of their homes, to which women have been historically associated and confined. It should be no coincidence, then, that it was during the social upheavals of the 1970s, with the massive influx of women into the public world of the work force, that domestic violence should become a topic of widespread interest. But the gendering of traumatic brain injury in these case studies is subtler than a public/private analysis alone can account for, as we elaborate below.23 The disjuncture of these diagnoses is in fact a microcosm of the social and cultural circumstances of their respective patient populations. The individualizing, somaticizing impulse of neurological research fit the masculinist, sometimes tragically heroic, spectacle of the punch-drunk boxer. The battered woman, in contrast, was ironically shunted back into a diagnostic domestic sphere, with social scientists reifying her location in the marital household, her auxiliary status within the family, and the primacy of her emotional state above all else. Finding historical sources that bring together reports of violence and brain injury is a simple enough historical exercise. In the much larger study that underpins this specific essay, we focused on research and activism primarily in the English-speaking world from 1850 to 2012. Our sources indicated that there was a transnational exchange of ideas across English-speaking countries. We do not aim in this essay to make universalist claims, but rather to demonstrate historical trends in the sociology of medicine. We identified historically appropriate keywords using the Index Catalogue of the Library of the Surgeon-General's Office. Examples of subject headings include “head injury” and “brain, concussion.” Those sources provide a composite picture of changing medical nomenclature from the period preceding this study through to the present. The medical literature cited therein was acquired and read comprehensively. Sources cited within that medical and scientific literature were also acquired and read comprehensively. We also used those medical terms in an applied cross-reference with newspapers and periodical databases, including ProQuest Historical Newspapers, Chronicling America and the archives of such leading newspapers as The Times, the New York Times, and the Washington Post. Such a careful search turned up thousands of media sources on brain injury and violence, most of which were read in their entirety. Among those were many sources that described
violence in sports, violence among individuals, and violence against women, and these form the basis of this essay's analysis. Sources on brain injury and violence against women were then organized chronologically and interpreted using strict historical methods. This study begins with an overview of studies of brain injury, before turning to examine the specific case of boxing by reviewing journalistic and medical literature from across the twentieth century. Boxers became in this rendering a controversial population with potential vulnerabilities. As will become obvious, much of that literature showcased the vulnerability of battered women in intimate relationships too. As the penultimate section describes, that shared vulnerability was passed over in silence, while emerging psychosocial paradigms instead sought to understand this violence in frameworks that largely elided the long-term effects of acute episodes of brain injury. Concussions and contusions were long understood as brain injuries that could occur in any setting. They were also understood to have varying levels of severity and to be dangerous, especially if repeated. By the mid-nineteenth century, observers noted the possibility of long-term consequences, including degenerative neurological disease.24 Authorities considered it likely that they had structural and functional features. Where the definition of brain injuries today differs most clearly from those used in the past is in the elaboration of the biomechanical and biochemical mechanisms. These changes in definition developed out of physics and engineering research that commenced in the 1940s and 1950s.25 Yet a clear point to emphasize about brain injuries in which the skull was not fractured is that the severity and extent of injury to the brain's tissues was invisible. Injury became a matter of art, deliberation, and inference. As research on closed head injuries continued, a number of diagnostic categories emerged that spoke to the subjective neurological and psychiatric complaints that sometimes accompanied them for short periods, for longer ones, or, in rare instances, permanently. These changes in nomenclature for the mental and neurological sequelae following head injury reflected at once the emerging language of chronic disease and the trends in advancing psychiatric nomenclatures.26 The injuries, too, were distinctly masculine and youthful, with epidemiologists calling attention by 1980 to the fact that men and boys aged 15–24 were more than twice as likely to suffer them as women and girls of the same age, a fact that historian Kathleen Bachynski has written about extensively.27 The medical literature makes clear that a number of important contexts shaped head injury
This essay uses gender as a category of historical and sociological analysis to situate two populations—boxers and victims of domestic violence—in context and explain the temporal and ontological discrepancies between them as potential brain injury patients.Symptoms prior to that period were often cast as functional in specific psychiatric and psychological nomenclatures.
works on intimate partner violence appeared in the late 1990s as unpublished dissertations.83 Published studies with pilot data began appearing in the early 2000s, with reviews from 2011 onward beginning to call attention to the absence of data on brain injury in populations with histories of intimate partner violence.84 Katherine Price Snedaker, Director of PINKconcussions.com, was among the first researchers to start pushing publicly for a stronger research focus on concussions, domestic violence, and brain disease, and in 2013 featured domestic violence as a central focus of her first PINK website.85 Yet very little governmental funding appears forthcoming globally and, in fact, we know of only one major research grant in the United States, which was awarded in 2019.86 Contrast this observation with the resources that have been allocated for the study of traumatic brain injury and chronic brain disease in male-dominated collision sports. We are not arguing here that male privilege allowed one patient population to receive the scientific and medical scrutiny it deserved, while women were simply ignored. Rather, this study in historical sociology has used this comparative case study to untangle some of the gendered assumptions at work behind clinical medicine. In the matter of traumatic head injury, as this study suggests, women were expected to experience assault as an acute episode followed by chronic emotional disturbance. Boxers, usually men, were in contrast denied chronic emotional disturbance and permitted brain injury as a consequence. But we would go further and argue that, with the focus on their family relationships and other symptoms, women were viewed in holistic terms in a way that boxers were not. The diagnostic and research frameworks resulting in these categorizations were limiting, in turn and in different ways, to sufferers of traumatic injuries in both populations. Matters in 2019 were beginning to change in these differentiated conceptualizations of brain injury, gender and exposure, but as the Pink Concussions Partner-Inflicted Brain Injury Task Force, which took shape in early 2019, seeks to highlight, there remains much distance to go in convincing the public, policymakers, and clinicians that women and men require parity in the brain injury landscape, a field that remains dominated by sports concussion research on male athletes and traditionally male sports, even as there is growing recognition that women serve in combat roles, participate in collision sports, and are the predominant population that suffers from intimate partner assault.87 We have sought to show in this essay the way that gender shapes social problems and either permits or stands against the medicalization of those problems. The gender politics of biomedicine, which underpins our analysis of different populations with similar exposures, has, we have argued here, far-reaching consequences for how the problem of brain injury is conceptualized in clinical practice, policy, law, and criminal justice. We would also note that there are other populations in whom an emphasis on the psychosocial context may be realized in practical terms as a form of stigma, not least wounded veterans, prisoners, and individuals injured in the workplace. It should be obvious that this analysis possesses equally profound intersections with classed and racialized concerns in the social sciences as well, and we encourage future scholarship in social medicine to focus on this area. The case of intimate partner violence possesses a
rather special urgency, however, for it has long been considered an enigmatic feature of dementia that women have heightened rates.While the putative mechanism of this observation has long been assumed a consequence of differentiated mortality rates, it may also be the case that another answer has been hiding in plain sight.
In boxing, the question of brain injury and its sequelae was analyzed from 1928 on, often on profoundly somatic grounds. With domestic violence, in contrast, the question of brain injury and its sequelae appears to have been first examined only after 1990. We examine this chronological and epistemological disconnection between forms of violence that appear otherwise highly similar, even if existing in profoundly different spaces.
disassembly/pruning. Under normal physiological conditions, both pathways likely act in a highly regulated and coordinated manner to achieve appropriate levels of synaptic plasticity and normal cognitive functioning. Abnormal levels of Aβ result in cognitive impairment and memory deficits by disrupting these processes. Our data demonstrate that Aβ-driven synapse withdrawal involves the Dkk1-dependent activation of the Wnt-PCP-RhoA/ROCK pathway. We show that, at nanomolar levels, oligomeric forms of Aβ1–42, regarded as the most synaptotoxic form of Aβ, rapidly upregulate neuronal Dkk1 expression, leading to dendritic spine retraction and altered localization of the postsynaptic proteins PSD-95 and GluA1, and that these effects are dependent on Daam1 and ROCK. It has been postulated that Dkk1 alters synapse stability predominantly through antagonism of the canonical Wnt-β-catenin pathway, which doubtlessly contributes to the process given the recognized role of canonical Wnt in synapse formation and stability. Our data significantly advance this idea, demonstrating that Dkk1-mediated synapse loss involves the simultaneous and necessary activation of the Wnt-PCP-RhoA/ROCK pathway. This is in line with previous reports specifically pointing to a role of Wnt-PCP in synapse disassembly through the core PCP component Vangl2. Taking this further, Aβ induction of Dkk1 likely exerts two simultaneous effects, both detrimental to synaptic connectivity: a reduction in synaptic adherens junction stability, due to a reduction in β-catenin levels caused by antagonism of the canonical Wnt pathway, and, concomitantly, activation of Wnt-PCP, which acts on cytoskeletal dynamics to directly drive synapse withdrawal. We previously reported that Aβ, through Dkk1, aberrantly activates the JNK/c-Jun arm of Wnt-PCP, which then drives the expression of genes required for Aβ-induced neuronal death and increases in tau phosphorylation in vitro and in vivo. Furthermore, we also presented evidence that the signaling pathways most associated with disease in the AD brain are shaped, if not driven, by Dkk1-Wnt-PCP activity. We therefore argue that the Aβ-driven, Dkk1-dependent activation of Wnt-PCP underpins many of the key neuropathological characteristics of AD including, possibly the most fundamental of all, the loss of synaptic connectivity. This concept is depicted schematically in Fig.
5, in which we also indicate the dual effects of Aβ induction of Dkk1: it not only inhibits the canonical Wnt pathway but concomitantly permits activation of the Wnt-PCP pathway. Although we hold that a dysregulation of Wnt signaling is likely to be central to AD pathology, work from other groups, strongly supported by genetic evidence, also clearly indicates an involvement of other systems and pathways. The main players that have emerged are immunity/inflammation and endocytosis/autophagy. In addition, the major tau kinase GSK3 also remains a key player in the disease, and although it occupies a central position in Wnt signaling, it is significant in many alternative pathways, particularly in insulin and p53 signaling, both of which have been strongly implicated in AD. Given the above, the fact that the familial AD gene APP has itself recently been shown to be a component of the Wnt-PCP co-receptor complex surely underpins the importance of this pathway in the disease process. It also indicates that a better understanding of both the physiological and the pathological roles of Aβ and APP in this pathway will shed further light on the mechanism and improve our ability to intervene therapeutically in an effective manner to slow it down or even prevent it. Here, not only do we shed new light on these mechanisms, but we also identify fasudil, a drug that has been approved for clinical use in Japan and China since 1994 for cerebrovascular vasospasm, as a strong candidate for repositioning/repurposing for AD. We assessed the pharmacodynamics of fasudil and its active metabolite hydroxyfasudil, showing that both have good brain penetrance. Owing to legal infringements within the pharmaceutical industry, fasudil has not received Food and Drug Administration or European approval. However, in China, it has been used in a small clinical trial in AD patients in combination with a second vasodilator, nimodipine. In this study, fasudil was found to improve cognitive function compared with nimodipine alone. A recent report has shown that the ROCK inhibitor Y-27632 can reverse Dkk1-induced synapse loss in vivo. Thus, the fact that fasudil is well tolerated in humans, together with the data we present here concerning its ability to protect against Aβ synaptotoxicity, its good brain availability, and its protection against Aβ-induced cognitive impairment, warrants serious assessment of its utility as a much-needed treatment for AD. Systematic review: Several decades of medical research strongly indicate that synapse loss is an early and key event in Alzheimer's disease and that this is driven by soluble oligomeric forms of the amyloid β peptide. However, the molecular mechanisms underlying Aβ synaptotoxicity are not clear, nor has any medication been identified that can halt this. Interpretation: We present strong evidence that Aβ-driven synapse loss is dependent on a branch of Wnt signaling known as the planar cell polarity pathway. In elucidating this mechanism, we found that synapses, and cognition in rats, are protected from the effects of Aβ by a drug in clinical use, fasudil. Future directions: These findings will allow a yet more detailed understanding of the mechanisms controlling the synaptic effects of Aβ to be determined. Importantly, they indicate that fasudil, which is safe in humans and readily enters the brain, is a very promising candidate treatment for Alzheimer's disease.
Introduction: Synapse loss is the structural correlate of the cognitive decline indicative of dementia. In the brains of Alzheimer's disease sufferers, amyloid β (Aβ) peptides aggregate to form senile plaques but, as soluble peptides, are toxic to synapses. Methods: We compared the effects of Aβ and of Dkk1 on synapse morphology and memory impairment while inhibiting or silencing key elements of the Wnt-PCP pathway. Results: We demonstrate that Aβ synaptotoxicity is also Dkk1 and Wnt-PCP dependent, mediated by the arm of Wnt-PCP that regulates actin cytoskeletal dynamics via Daam1, RhoA and ROCK, and that it can be blocked by the drug fasudil. Discussion: Our data add to the importance of aberrant Wnt signaling in Alzheimer's disease neuropathology and indicate that fasudil could be repurposed as a treatment for the disease.
study—using sugar, polymers, and surfactants as excipients—were based on PATH's prior experience in developing FDTs for vaccines.14,15 Because the targeted levels of the combined drugs per tablet dose were high, the 2 major criteria for down-selection were drug-loading levels, based on analytical recovery, and the physical properties of the tablet. Characteristics to be avoided in the physical properties of tablets were surface cracking or chipping, nonuniform appearance, an extremely porous structure subject to static forces, residue left in blister cavities, and long dispersion times in fluid. The dosage form was being developed for pediatric use, so efforts were made to minimize the number of excipients and their levels in the final product, and those used are of “generally recognized as safe” quality and have a demonstrated acceptable safety profile for pediatric use.19 No preservatives were used in the formulations. The purpose of the nonionic emulsifier Tween 20 was to minimize residue and assist in solubilizing the drug components. Mannitol was used not only as a crystalline bulking agent for imparting good handling properties, but also to add sweetness. CMC was a filler agent to add bulk and improve the overall handling properties of the FDT. The composition of the formulation played a key role in solubilizing the hydrophobic drugs at high loading levels while maintaining rapid disintegration and good handling characteristics. Lopinavir and ritonavir are highly hydrophobic, with aqueous solubility of about 1 μg/mL and an octanol-water partition coefficient >4.20,21 Nearly 50% of the weight of our tablet comes from the 2 drug components, which were solubilized at 80 and 20 mg/mL for lopinavir and ritonavir, respectively, in the presence of milk and polymer stabilizers. One explanation for the increased solubility in the aqueous formulation could be the presence of surfactant combined with the casein phosphoproteins in milk. Casein, an amphiphilic protein, may increase drug solubility through micellization, assisted by the Tween 20.22 Although we did not investigate the mechanism in our work, similar results for casein have been reported previously.23-25 Powdered milk has been used in approved veterinary vaccines,26 as part of a formulation for pediatric tablets produced by direct compression,27 and in the production of freeze-dried solid dosage forms of viable Lactobacillus spp.28 After 3 months of storage at 40°C and 75% RH, tablet physical properties remained nearly the same as prior to storage, with friability and disintegration time unchanged. The decrease in moisture content from 1.6 wt% to 0.8 wt% may be a result of secondary drying at the elevated temperature, although this did not impact the handling properties of the FDT. Each tablet disintegrated in just 0.5 mL of water, maximizing the likelihood of uptake of the full dose by small children, and there were no solid pieces that an infant might detect and expel. As many as 4 tablets could be dispersed in volumes as low as 2 mL, allowing a wide dosing range by simply increasing or decreasing the number of tablets in a fixed low volume. Furthermore, the low volume allows delivering the medication with a spoon, a commonly available implement in most households. Although we tested commercially produced foods, the tablets can be dispersed in milk or any available pureed food. No undesirable physical or chemical interactions between the food and the medicine were detected, based on complete drug recovery using HPLC analysis. A question remaining is the taste of the
product when dispersed in fluid and presented to children, as it is unclear whether the low volumes of fluid or food are sufficient to mask the bitter taste of the drugs.Palatability studies of the LPV/r FDTs in different fluids and infant foods in rodents have been completed and will be published separately.Rodents are useful models for taste studies29 based on analysis of lick patterns, and this study will determine whether the taste of the 2 drugs in the FDT dispersed in various foods is sufficiently aversive to alter the normal licking pattern.These results indicate that the FDT offers flexibility for administering LPV/r across a broad range of age groups, and they suggest potential applicability for other combination ARV medications and for drugs for other disease indications in the pediatric population, especially in low-resource countries.
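The drug-loading figures quoted above imply a simple per-tablet arithmetic, sketched below on the assumption of a 0.5 mL fill per blister cavity; that fill volume is an illustrative assumption (the passage reports only the solubilized concentrations and the ~50% weight fraction), so the numbers are indicative rather than reported values.

```latex
% Illustrative arithmetic only; the 0.5 mL fill volume per cavity is an assumed value.
\begin{align*}
m_{\mathrm{LPV}} &= 80~\mathrm{mg/mL} \times 0.5~\mathrm{mL} = 40~\mathrm{mg}\\
m_{\mathrm{RTV}} &= 20~\mathrm{mg/mL} \times 0.5~\mathrm{mL} = 10~\mathrm{mg}\\
m_{\mathrm{drug}} &= 50~\mathrm{mg} \approx 0.5\, m_{\mathrm{tablet}} \;\Rightarrow\; m_{\mathrm{tablet}} \approx 100~\mathrm{mg}
\end{align*}
```

Whatever the true fill volume, the 4:1 lopinavir-to-ritonavir ratio of the solubilized concentrations carries through unchanged to the finished tablet.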
Current presentations of the anti-HIV drugs lopinavir and ritonavir make appropriate dosing for children difficult.We conducted a feasibility study to develop a formulation for these drugs with child-safe excipients in a flexible dosage form for children across the pediatric age spectrum.The freeze-drying in blister approach was used to produce fast-dissolving tablets (FDTs), as these can be dispersed in fluids for easy administration, even to infants, and appropriate portions of the dispersion can be given for different ages/weights.We combined various ratios of polymers, surfactants, and bulking agents to incorporate the 2 highly hydrophobic drugs while maintaining drug stability, rapid disintegration, and good handling properties.The final FDT was robust and disintegrated in 0.5 mL of fluid in 10 s with up to 4 tablets dissolving in 2 mL to achieve varying doses accommodated in a common teaspoon.Drug recovery after dissolution in small volumes of liquid or fluid foods was 90%-105%.FDTs are a promising flexible dosage form for antiretroviral treatment for pediatric patients, especially in low-resource settings.
the logic of influence on the result is less straightforward than in committees and the outcomes are more uncertain. Government- and especially market-based standardisation are therefore likely to become relatively more important in a multi-mode standardisation process if actors are willing and able to spend the necessary resources and use them effectively. Neither of these two factors is likely to be static. In the medium to long term, the standardisation ‘culture’ in a field can change if it is challenged by sufficiently strong actors, or if it needs to adapt to outside shocks. In the short term, available resources and knowledge can also fluctuate, e.g. because actors acquire them or because actors join or leave the standardisation process. This suggests that the relative weights of the modes can change throughout the process, as was observed in the development of international accounting standards. Such changes, and the options to challenge coordination outcomes identified above, also imply that multi-mode standardisation is potentially indefinitely ongoing, rather than a process with a definite end point as assumed in the ideal-typical views. Where extant literature already considers standardisation as an ongoing process, it mainly focuses on efforts in committees to extend or maintain standards or on work in committees to replace existing standards when technological change makes them obsolete. Instead of being the end point of a process, an established standard is a situation with a short-term equilibrium between the involved actors, i.e. where, for the time being, no actor attempts to challenge the status quo. An established standard therefore resembles a settled strategic action field. Such a settlement can be challenged at any time. The interactions between standardisation modes discussed above and the potentially shifting weights of the modes in a standardisation process mean that actors can launch new activities in one or multiple modes that may then affect other modes and the overall standardisation process. To sum up, standardisation is therefore not only an ongoing process because standards need to be updated regularly, as already acknowledged in extant literature, but also because actors may disagree with an established standard and challenge it. The objective of coordination is thus only reached if no actor challenges the standard successfully. The success of such activities is likely to depend on a range of factors, such as the standardisation ‘culture’ in the field, the environment in which the standardisation process takes place, the challenging actor’s resources and knowledge, and other actors’ willingness to defend the standard. Standardisation is vital for driving forward the major current trends related to smart systems and platforms. Due to these systems’ complexity and the variety of involved stakeholders, we expect multi-mode standardisation to become increasingly prevalent. This means that a better understanding of the phenomenon is needed. Although such multi-mode standardisation processes can be expected to have case-specific dynamics, these dynamics are likely to result from combinations of certain underlying features related to the ideal-typical modes of standardisation. Our work provides the basis for further research into these features by adding three major contributions to the literature. We crystallise the three modes underlying standardisation processes and their defining characteristics.
We provide an overview of the available literature on the interactions between these modes and identify its gaps. We recombine the evidence from this literature to generate tentative insights, beyond what has been documented in literature so far, into the interactions and dynamics that are likely to occur in multi-mode standardisation.These interactions and dynamics are summarised in Fig. 2.9,In addition to the direct interactions between modes that are already evident from existing literature, we also expect developments in each mode to have a reciprocal impact on the dynamics between the other two modes.Because each mode of standardisation offers an ‘avenue’ for actors to contribute to a standardisation process, these actors’ actions drive the dynamics in multi-mode standardisation.Actors can activate new modes at various points in the process.Once a mode has been activated, every actor can decide whether to engage in this mode and how to use the opportunities for manoeuvring that it offers.These complex dynamics occur against the backdrop of the field’s standardisation ‘culture’ and institutional context.This backdrop has an important impact on the degree to which actors can rely on certain modes of standardisation, whether their activities within the modes are perceived as legitimate and how the developments within the modes affect each other.A further element of this backdrop is the technological context in which the standard is developed.Compared to the standardisation ‘culture’ and the institutional context, extant literature offers a weaker base for theorising about the technological context’s impact on multi-mode standardisation processes, making this a first area for further research.These findings establish the elements that are likely to be key for multi-mode standardisation processes, and provide a good basis for further research into this important phenomenon.All the elements included in Fig. 2 require further enquiry, as outlined in the agenda for research in Section 5.1.Furthermore, our findings already lead to some recommendations for practitioners, see Section 5.2.Multi-mode standardisation is likely to shape the dynamics of standardisation and major technological and social developments in the future.Theory about standardisation needs to reflect this better.We propose that additional research should approach multi-mode standardisation from three perspectives: dynamics of multi-mode standardisation processes and how they contribute to coordination; strategies for individual actors; and the role of governments and other facilitating actors like SDOs.Generating an understanding based on these three perspectives will also provide a foundation for evaluating the impact of multi-mode standardisation on business and society, which represents a fourth area of research.The first suggested area for research could
Standardisation is key to shaping new technologies and supporting major ongoing trends, such as the increased importance of platforms, the development of ‘smart’ technologies and the innovation of large-scale complex systems. Standardisation plays a key role in shaping the rules that govern these developments and their effects on society. Due to the large variety of actors involved in these trends, the associated standardisation processes are likely to involve all three modes of standardisation identified in the literature: committee-based, market-based and government-based. This multi-mode standardisation challenges the theoretical views on standardisation, which predominantly focus on one of the modes. In this paper, we review the existing literatures on individual modes and on multi-mode standardisation. By recombining existing evidence, we generate new insights into multi-mode standardisation processes. These first insights relate to the contributions that each mode can make to such processes’ outcomes and suggest that their impact depends on factors such as the timing of their initiation and the institutional context in which the standardisation process occurs. Moreover, we consider the conditions under which actors can launch each mode. Based on our observations, we formulate an agenda for future research to obtain a better understanding of multi-mode standardisation. We offer recommendations for industry actors, NGOs, researchers and policy makers involved in shaping technological and societal change.
Inhibition of automatic motor actions in favor of more complex voluntary behavior in a demanding situation is a hallmark of executive functions in humans. This ability can be impaired in different neurological conditions such as Huntington's disease (HD), which is characterized by purposeless, involuntary and choreic movements. HD executive deficits could be the result of impairments in planning, flexible behavior, behavioral inhibition, and oculomotor control, the latter probably due to disruptions in the interaction between the basal ganglia and the saccade control system. These functional disabilities highlight the importance of characterizing inhibitory deficits for early diagnosis and tracking of disease progression in HD. Executive deficits can be studied with the interleaved pro- and anti-saccade task, which requires flexible behavioral control to generate either automatic or voluntary movements according to specific task instructions. The pro-saccade task requires an automated saccade to a peripheral visual stimulus, while the anti-saccade task requires the suppression of this automated response; instead, the participant must look in the opposite direction. HD patients display inhibitory control deficits in this task, having increased and more variable saccade reaction times and an increased proportion of direction errors in the anti-saccade task, indicating a voluntary saccade inhibition deficit. According to an influential basal ganglia model, the inhibitory anti-saccade deficits in HD patients could be attributed to the degeneration of the striatal medium spiny neurons in the indirect-pathway through the basal ganglia arising from the putamen and caudate nuclei. This degeneration leads to reduced inhibitory outflow from the internal globus pallidus/substantia nigra pars reticulata and excessive disinhibition of the thalamus, resulting in excessive positive feedback to the motor areas. Therefore, over-activation of the indirect-pathway disrupts the cortico-basal ganglia circuitry, decreasing the inhibition from the substantia nigra pars reticulata to the superior colliculus, which leads to saccade suppression deficits. However, the current basal ganglia model questions the indirect-pathway hypothesis, suggesting instead an interaction between direct, indirect and hyperdirect-pathways in executive-motor control. In this new model, signals from the supplementary motor area, premotor, prefrontal, parietal cortices and cingulate gyrus are first transmitted through the hyperdirect-pathway, induce early excitation in the substantia nigra reticulata, and inhibit inappropriate movements; then signals through the direct-pathway inhibit the substantia nigra reticulata and release appropriate movements; and finally, signals through the indirect-pathway induce late excitation in the substantia nigra reticulata and stop movements. Thus, it is possible that degeneration in brain areas beyond the indirect-pathway, and in their interconnectivity, could contribute to the executive and saccade deficits in HD. One way to delve into this debate is by analyzing anti-saccade and anticipatory behaviors in oculomotor tasks that probe inhibitory control, as well as structural magnetic resonance imaging to map cortico-basal atrophy in HD patients. We test the hypothesis that atrophy in hyperdirect-pathway-related areas, including frontal, parietal and cingulate cortices and the white-matter connecting them, is related to voluntary oculomotor control deficits in early HD patients. To this end we evaluated the brain atrophy in HD using Voxel-based
morphometry and Tract-based spatial statistics, and its effect on voluntary oculomotor control in the interleaved pro- and anti-saccade task. We were especially interested in anti-saccade errors and anticipatory saccades as measures of voluntary oculomotor control and their relationship with neural atrophy. Our results suggest that an extended neural network beyond the indirect-pathway mediates voluntary saccade inhibition control in HD. The Queen's University and Universidad Nacional Autónoma de México Health Science and Ethics Committees in Human Research approved all experimental procedures in accordance with the Declaration of Helsinki. All HD patients and control participants provided informed consent, and were recruited and evaluated at the Instituto Nacional de Neurología y Neurocirugía (INNN), México. Only HD patients with a molecular diagnosis of the CAG trinucleotide repeat expansion and whose neurological/motor impairment did not prevent performing the test were included. The HD group consisted of 23 right-handed patients aged 29–68 years, 13 females; mean ± SD age 49.6 ± 11.4; age at onset 25–62 years, mean 44.6 ± 10.1; early disease duration from 1 to 10 years, mean 4.5 ± 2.9; CAG repeat size 40–52, mean 44.2 ± 3.0; and years of education 9–18 years, mean 13.5 ± 3.0 (see Table 1 for details). HD patients did not take any specific medication for the disease; they were only given supplements such as Coenzyme Q-10, and were not asked to interrupt these during the recording session. We excluded one patient due to extensive eye-tracking data loss. The control group consisted of 23 healthy right-handed volunteers who were age- and sex-matched to the HD patients. Control participants did not report any visual, neurological or psychiatric disorder and had Montreal Cognitive Assessment (MOCA) scores ≥ 24, mean 27.2 ± 1.6, as assessed by the experimenter (IVP). An expert neurologist from the INNN performed the clinical evaluation of the HD patients. The evaluation included the MOCA to assess general cognitive functioning and the Unified Huntington's Disease Rating Scale (UHDRS) to assess disease progression. HD patients scored a mean of 24.1 ± 3.3 on the MOCA, and a mean of 17.5 ± 12.5 on the motor and 11.3 ± 2.2 on the functional components of the UHDRS. It should be noted that the motor component of the UHDRS has standardized ratings of oculomotor function, dysarthria, chorea, dystonia, gait and postural stability. Right-eye position was recorded in all participants with a video-based eye tracker at a rate of 500 Hz (monocular recording). Stimulus presentation and data acquisition were controlled by Eyelink Experiment Builder and EYELINK software. Participants were seated in a dark room and comfortably rested their heads on a sturdy chin and forehead support to avoid head movements as much as possible. The stimuli were presented on a 17-inch LCD monitor
The ability to inhibit automatic saccade commands in favor of voluntary ones in demanding situations can be impaired in neurodegenerative diseases such as Huntington's disease (HD). These deficits could result from disruptions in the interaction between the basal ganglia and the saccade control system. To investigate voluntary oculomotor control deficits related to the cortico-basal circuitry, we evaluated early HD patients using an interleaved pro- and anti-saccade task that requires flexible executive control to generate either an automatic response (look at a peripheral visual stimulus) or a voluntary response (look away from the stimulus in the opposite direction). The impairments of HD patients in this task are mainly attributed to degeneration in the striatal medium spiny neurons leading to an over-activation of the indirect-pathway through the basal ganglia.
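The behavioural measures discussed below (regular-latency anti-saccade errors and anticipatory saccades) are typically derived from each trial's first saccade. The sketch below illustrates one common way to do this; it is not the authors' analysis pipeline, and the 90 ms anticipatory cut-off, the field names and the example trials are illustrative assumptions only.

```python
# Sketch only: classifying pro-/anti-saccade trials from first-saccade latency
# and direction.  The cut-off and field names are assumed, not taken from the paper.
from dataclasses import dataclass

ANTICIPATORY_CUTOFF_MS = 90   # assumed threshold: saccades launched before visual
                              # information could plausibly drive the response

@dataclass
class Trial:
    task: str          # "pro" or "anti"
    target_side: int   # -1 = left, +1 = right
    latency_ms: float  # first-saccade latency relative to target onset
    saccade_dir: int   # -1 = left, +1 = right

def classify(trial: Trial) -> str:
    # On anti trials the correct response is away from the target.
    correct_dir = trial.target_side if trial.task == "pro" else -trial.target_side
    anticipatory = trial.latency_ms < ANTICIPATORY_CUTOFF_MS
    error = trial.saccade_dir != correct_dir
    if anticipatory:
        return "anticipatory_error" if error else "anticipatory_correct"
    return "regular_latency_error" if error else "regular_latency_correct"

trials = [
    Trial("anti", +1, 250, +1),   # looked at the stimulus: regular-latency direction error
    Trial("anti", -1, 60, -1),    # launched before the cut-off: anticipatory saccade
    Trial("pro", +1, 180, +1),    # automatic saccade to the target: correct
]
print([classify(t) for t in trials])
```

Percentages of anticipatory saccades and of regular-latency direction errors, as analysed in the following passages, would then simply be per-participant proportions of these categories.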
the therapeutic effect of pallidotomy in hyperkinetic disorders and from imaging studies showing correlations between executive-motor deficits and premotor, sensorimotor, thalamic, substantia nigra and pallidal atrophy in HD. Our imaging results showed that anti-saccade errors were related to cortico-basal ganglia circuit atrophy, including grey-matter degeneration in the caudate and putamen, thalamus, cingulate gyrus, parietal cortex and middle frontal gyrus. Anti-saccade errors were also correlated with white-matter atrophy in the superior longitudinal fasciculus, anterior thalamic radiation and anterior corona radiata. These results are supported by previous work showing correlations of anti-saccade errors with bilateral striatum and external/internal capsule atrophy. Notably, it has been suggested that these fasciculi contribute to the hyperdirect-pathway that conveys projections from motor, supplementary motor, premotor and cingulate cortices to the subthalamic nucleus to exert a synergistic effect along with the indirect-pathway to suppress movements. The observed degeneration pattern is congruent with the behavioral deficits we observed, because anti-saccade errors are related to frontal eye field atrophy. Taken together, these findings suggest that the voluntary saccade inhibition deficits seen in HD are related to the disruption of an extended network beyond the indirect basal ganglia pathway, including the putamen, thalamus, cingulate gyrus, and parietal and frontal cortices, as well as the longitudinal fasciculus, the thalamic radiation and the corona radiata. HD patients made more anticipatory saccades than controls. The occurrence of these anticipatory saccades had a positive correlation with anti-saccade errors, together suggesting reduced inhibitory control. Our imaging analysis showed critical positive relationships between white-matter atrophy and the percentage of anticipatory anti-saccades in the inferior fronto-occipital fasciculus, anterior thalamic radiation, anterior corona radiata, forceps major and superior longitudinal fasciculus. The significant increase of anticipatory saccades in HD patients performing the pro- and anti-saccade task had not been reported previously. Anticipatory saccade behavior can be triggered prior to the arrival of the visual stimulus information to the oculomotor system when saccade preparation signals are high. In the context of the pro- and anti-saccade task, these saccades have been linked to caudate nucleus activity prior to the appearance of the target stimulus; this activity in turn releases superior colliculus and frontal eye field outputs, facilitating anticipatory saccade behavior. Thus, it has been suggested that controlling anticipatory saccades could reflect general mechanisms involved in voluntary control. Performing correct anti-saccade trials requires the ability to execute an incongruent stimulus-response mapping in which an automatic response is inhibited and the correct response is prepared in the opposite hemisphere and then executed. In order to properly execute this incongruent mapping, and before the appearance of the peripheral target, the brain must establish a preparatory set to execute the appropriate action. For example, patients with response inhibition problems involving prefrontal cortices and basal ganglia, such as those with obsessive-compulsive disorder, show a high proportion of anticipatory saccades, manifested as saccade intrusions or direction errors in oculomotor tasks. Thus, the high anticipatory saccade behavior in HD patients could reflect
impaired preparatory set activity hindering the vector inversion that transforms the initial location of the target into the appropriate motor command for saccade execution, resulting in an increase of longer-latency direction errors. The control group behavior supports this idea: they had a low proportion of anticipatory anti-saccades, likely because they could inhibit anticipatory saccades, resulting in a lack of correlation between anticipatory anti-saccades and regular-latency anti-saccade errors. The imaging analysis showed relationships between the percentage of anticipatory anti-saccades and white-matter atrophy in the inferior fronto-occipital fasciculus, anterior thalamic radiation, anterior corona radiata, forceps major, and superior longitudinal fasciculus. It should be noted that regular-latency anti-saccade errors showed significant correlations with the same fasciculi which, as discussed above, are part of the neural network for inhibition in the cortico-basal ganglia circuitry. These findings are supported by the proposal that the pathogenesis in HD begins in the myelin even before the appearance of motor and cognitive impairments. Myelin abnormalities can slow fast axon transport, resulting in synaptic loss and eventually axonal degeneration. Therefore, early white-matter atrophy could disrupt the cortico-basal ganglia circuitry, which affects the preparatory neural activity required to establish the preparatory set and therefore impacts the ability to execute appropriate saccades in early HD patients, resulting in both types of behavioral deficits. Other studies support the idea of an effect of early white-matter damage on executive deficits in HD, suggesting that in vivo measures of HD white-matter can be a reliable marker of the progression of cognitive deficits. Our findings showed a significant increase in anti-saccade direction errors and anticipatory anti-saccades in early HD patients. Furthermore, the regular-latency anti-saccade errors and anticipatory anti-saccades, which correlated with each other, showed a specific relationship with regional white-matter atrophy in the inferior fronto-occipital fasciculus, anterior thalamic radiation, anterior corona radiata, forceps major and superior longitudinal fasciculus. These fasciculi contribute to the hyperdirect-pathway conveying projections from motor, supplementary motor, premotor and cingulate cortices to the subthalamic nucleus to exert a synergistic effect along with the indirect-pathway to suppress movements. These results suggest that impairments in the implementation of voluntary inhibitory behaviors could be explained by early myelin atrophy in the cortico-basal ganglia circuitry, which in turn impairs the establishment of the preparatory set activity in the neural network controlling voluntary eye movements for the execution of the appropriate saccade. Further research combining behavioral measures and MRI techniques should explore the neural basis of voluntary saccade inhibitory deficits as a possible marker of the progression of cognitive deficits in HD.
However, some studies have proposed that damage outside the indirect-pathway also contributes to executive and saccade deficits. HD patients had voluntary saccade inhibition control deficits, including increased regular-latency anti-saccade errors and increased anticipatory saccades. These deficits correlated with white-matter atrophy in the inferior fronto-occipital fasciculus, anterior thalamic radiation, anterior corona radiata and superior longitudinal fasciculus. These findings suggest that cortico-basal ganglia white-matter atrophy in HD disrupts the normal connectivity in a network controlling voluntary saccade inhibitory behavior beyond the indirect-pathway. This suggests that in vivo measures of white-matter atrophy can be a reliable marker of the progression of cognitive deficits in HD.
Educational qualifications and trajectories of employment, income and health across the life course are all importantly influenced by academic achievement in childhood. There are large socio-economic inequalities in academic achievement throughout childhood, and these help drive the emergence of health inequalities. In acknowledgement of the benefits of giving every child a strong start in life and the subsequent contributions to the economic productivity of society, the focus of government and non-government organizations in many countries has turned to improving overall levels of, and reducing socio-economic gaps in, academic achievement in early childhood. While cognitive ability is a widely recognised determinant of academic achievement, there is increasing interest in the role of “non-cognitive” characteristics. Though the term “non-cognitive” has not been consistently defined or measured, the idea of non-cognitive skills encapsulates personality characteristics and social behaviours that can maximise life opportunities. In young children an important component of non-cognitive abilities is self-regulation, which refers to the control of attention, emotion and behaviour. Some research has suggested that early “non-cognitive” skills like self-regulation may be as important as cognitive ability for future outcomes like labour market success, both directly and by supporting later cognitive ability. Self-regulation is integral to cognitive ability in childhood, through supporting engagement in and persistence with learning tasks. Cognitive ability and self-regulation have both been linked to better academic achievement and are generally lower among socially disadvantaged children. Observational studies indicate that self-regulation and cognitive ability may mediate the association between socio-economic disadvantage and academic achievement. It is therefore plausible that intervening on these components of child development may reduce socio-economic inequality in academic achievement. Interventions targeting cognitive ability and/or self-regulation in the United States have been shown to improve school readiness and early academic achievement, including in disadvantaged families, although effects may fade with time. A comparison of cognitive and self-regulation skills, as two related mechanisms that can be targeted by interventions, would inform the design of early childhood programs to reduce socio-economic gaps in academic achievement. Our goal was to decompose the pathways from socio-economic disadvantage (SED) at birth to children's academic achievement in mid-childhood that act via early-life self-regulation and cognitive ability. Fig.
1 shows the direct pathway from SED to child academic achievement, the indirect pathway via cognitive ability, and the indirect pathway via self-regulation. We conducted comparative analyses throughout early- to mid-childhood using data from contemporary, nationally representative cohorts from Australia (the Longitudinal Study of Australian Children, LSAC) and the United Kingdom (the Millennium Cohort Study, MCS). As a sensitivity analysis for measurement error in the self-regulation measures, which were based on maternal report in the MCS and LSAC, we examined these associations in a third cohort, the Avon Longitudinal Study of Parents and Children (ALSPAC), which collected an objective measure of executive function, a measure of self-regulation in young people. The LSAC is a nationally representative prospective study of two cohorts of children, recruited 2003–2004. The methodology has been previously described. We used data on 5107 infants from the ‘b-cohort’, who were first contacted at 0–1 year. The MCS is a longitudinal study of children born in the UK, 2000–2002. Information on the survey design has been described elsewhere. The first contact with the cohort child was carried out at around age 9 months for 18,818 infants. Data were downloaded from the UK Data Service, University of Essex and University of Manchester, in April 2014. In both cohorts, interviews were carried out by trained interviewers in the home with the primary caregiver and her partner; postal questionnaires were also sent to the children's teachers once they reached school age. The counterfactual analytical method used to decompose the mediating pathways of interest favours the use of binary exposure, mediator and intermediate confounding variables, because the availability of just one counterfactual state aids the interpretability of results. All measures are described in detail in Table 1 and summarised below, including cut-offs for dichotomisation. Mothers' highest educational qualifications were used as indicators of SED. Low education was defined relative to the educational targets set by the Australian and UK governments (for the UK, GCSE grades A*-C). We analysed two separate measures of academic achievement: maths and literacy scores derived from teacher assessment in the LSAC and from tests completed by the MCS children during the interview.
‘Low’ academic achievement was defined as being in the lowest quintile of scores. We used a number of items representing a component of self-regulation known to influence academic achievement - task attentiveness and persistence. Responses to the items were summed to create self-regulation scores; children in the lowest quintile were defined as having ‘low’ self-regulation. Cognitive ability was defined as the non-verbal and verbal abilities of the child. Non-verbal abilities were assessed with the Matrix Reasoning subtest in LSAC and pattern construction in the MCS. Verbal abilities were assessed using a test of receptive vocabulary. Verbal and non-verbal scores were standardised using T-scores and then combined. The lowest quintile was used to represent ‘low’ cognitive ability. Baseline confounders were young maternal age at first live birth and language spoken in the home. MCS analyses were repeated adjusting for ethnicity in place of language and the results were unchanged. The following were considered to confound the mediator/outcome association and were also associated with the exposure: alcohol consumption and smoking in pregnancy, and, at ages 3–5, lone parenthood status, housing tenure, household income, household unemployment, maternal psychological distress, parenting style and formal childcare use. Latent class analysis was used to create a summary measure of confounding characteristics. A two-class model offered a good fit in both cohorts, with good separation for all items except alcohol in pregnancy, maternal psychological distress, parenting style and formal childcare use. The resulting binary variable distinguished between less and more supportive environments. The LCA was carried out in Stata 13.0 using a
Socio-economic inequalities in academic achievement emerge early in life and are observed across the globe.We examined this in two nationally representative cohorts in the UK (Millennium Cohort Study, n = 11,168; 61% original cohort) and Australia (LSAC, n = 3028; 59% original cohort).An effect decomposition method was used to examine the pathways from socio-economic disadvantage (in infancy) to two academic outcomes: ‘low’ maths and literacy scores (based on bottom quintile) at age 7–9 years.
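The bottom-quintile dichotomisation used for the outcome and mediator variables can be illustrated with a small sketch; the column name and the simulated scores below are assumptions for illustration, not the cohorts' data or the authors' code.

```python
# Illustrative sketch of the bottom-quintile ("low") dichotomisation described above;
# column names and simulated scores are assumed, not taken from LSAC or the MCS.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({"maths_score": rng.normal(100, 15, size=1000)})

cutoff = df["maths_score"].quantile(0.20)          # lowest-quintile boundary
df["low_maths"] = (df["maths_score"] <= cutoff).astype(int)

print(df["low_maths"].mean())                      # ~0.20 by construction
```

The same rule applied to the summed self-regulation items and the combined cognitive T-scores yields the binary mediators used in the decomposition that follows.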
Stata plug-in for the SAS procedure PROC LCA. We used a counterfactual method for decomposing two related mediating pathways. In counterfactual methods, the observed data are used to estimate the potential outcome that would have been observed had exposed individuals been unexposed, and unexposed individuals been exposed. Therefore estimates refer to the average change in outcomes when individuals' observed exposure status is manipulated to the counterfactual. Some counterfactual methods allow the value of the mediator to react to the change in the exposure from its observed to its counterfactual state, enabling estimation of natural indirect and direct pathways. Estimating natural direct and indirect pathways can be problematic when the mediator is subject to intermediate confounding or when there are multiple, related mediating pathways. VanderWeele, Vansteelandt and Robins demonstrate a series of analytical approaches that enable the estimation of direct and indirect pathways in the presence of intermediate confounding, or of two related mediators. The first of VanderWeele, Vansteelandt and Robins' analytical approaches, referred to as ‘Joint mediators’, provides an effect estimate of the ‘direct’ pathway from exposure to outcome that is not acting via the two mediators, and another for the joint indirect pathway through the two related mediators. This approach might therefore be used to examine the potential for a single intervention, which improves both self-regulation and cognitive ability, to reduce inequality in academic achievement. The direct pathway is given by the change in risk of the outcome when the value of the exposure is altered from its observed to its counterfactual value. The joint indirect pathway is the difference in the risk of the outcome when both mediators are changed from their observed to their counterfactual values, while the exposure is held at its observed value. A more detailed explanation and statistical notation are provided in Appendix B. The second approach, ‘Path specific effects’, estimates the direct pathway in the same way, but in addition decomposes the joint indirect pathway into that through each mediator separately. This approach is therefore appropriate for comparing an intervention designed to improve cognitive ability with an intervention to improve self-regulation. The direct pathway is estimated as in approach 1. The indirect pathway through the main mediator of interest (M2) is given by the difference in risk of the outcome when M2 is changed from its observed to its counterfactual value; the exposure is held at its observed value, while the second related mediator (M1) is held at its counterfactual value. The indirect pathway through M1 is given by the difference in the risk of the outcome when M1 is changed from its observed to its counterfactual value, while the exposure is held at its observed value and M2 is held at a new counterfactual value. See Appendix B for further detail. The third approach, referred to as ‘Intervention effects’, aims to emulate a randomized intervention. It provides an effect estimate for just one mediating pathway, while adjusting for the second related mediator, within levels of the exposure, using inverse probability weights. The effect estimate of the direct pathway refers to the pathway from SED to academic ability that is not acting through the single mediator of interest. This approach is therefore suited to situations where there is just one mediating pathway of interest, which is likely to be biased by intermediate confounding. The
indirect effect is given by the change in the risk of the outcome when the value of M2 is estimated within levels of the observed exposure and within levels of the counterfactual exposure. The direct effect is estimated by changing the exposure from its observed to its counterfactual value, while the value of M2 is held at the value it would have taken if assigned within levels of the counterfactual exposure. See Appendix B. The directed acyclic graph demonstrates the main pathways of interest: the direct pathway from SED to academic achievement, and indirect pathways via the two related mediators, self-regulation and cognitive ability. The DAG also includes intermediate confounding. Because none of the analytic approaches allow examination of two mediators and adjustment for an intermediate confounder in a single model, we carried out a series of analyses in the following steps, each focussing on a different ‘subset’ of the DAG. ‘Step A: Effect decomposition via Self-regulation & Cognitive ability’: in this step we focused on the two mediators of interest and disregarded intermediate confounding by L. Firstly, using the ‘Joint indirect effects’ approach, effect estimates for the direct pathway from SED to academic achievement and a joint indirect pathway via self-regulation and cognitive ability were estimated. This indirect pathway was then decomposed, using ‘Path specific effects’, to provide two separate effect estimates: the indirect pathway via cognitive ability, and the indirect pathway via self-regulation. ‘Step B: Self-regulation & intermediate confounding’: in Step B we estimated the indirect pathway from SED to academic achievement via self-regulation after adjusting for confounding by L, using the ‘Intervention analogue’ approach. Cognitive ability was not included in this model. ‘Step C: Cognitive ability & intermediate confounding’: here the ‘Intervention analogue’ approach was used to examine the degree to which the indirect pathway through cognitive ability was confounded by L. Self-regulation was not included in this model. Findings from Steps A-C were then subjectively triangulated, in order to compare the mediating roles of self-regulation and cognitive ability and the extent to which each of the indirect pathways might have been confounded. Baseline confounders were adjusted for in all analyses. Effect estimates for direct and indirect pathways from SED to maths and literacy scores were estimated using binary regression, in the form of risk ratios and risk differences. 95% confidence intervals were estimated using 5000 non-parametric
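For readers who prefer symbols to prose, the contrasts described above can be written compactly in standard counterfactual notation. The display below is a sketch of the usual two-mediator decomposition on the risk-difference scale, with A the exposure (SED), M1 self-regulation, M2 cognitive ability and Y the outcome; the notation is introduced here for illustration and is not reproduced from the paper's Appendix B.

```latex
% Sketch of the standard two-mediator decomposition (notation assumed for illustration).
% Y(a, m_1, m_2) is the potential outcome under exposure a and mediator values (m_1, m_2);
% M_k(a) is the value mediator k would take under exposure a.
\begin{align*}
\text{Total effect:}\quad
  & E\!\left[Y\!\big(1, M_1(1), M_2(1)\big)\right] - E\!\left[Y\!\big(0, M_1(0), M_2(0)\big)\right] \\
\text{Joint indirect (via } M_1, M_2\text{):}\quad
  & E\!\left[Y\!\big(1, M_1(1), M_2(1)\big)\right] - E\!\left[Y\!\big(1, M_1(0), M_2(0)\big)\right] \\
\text{Direct (not via } M_1, M_2\text{):}\quad
  & E\!\left[Y\!\big(1, M_1(0), M_2(0)\big)\right] - E\!\left[Y\!\big(0, M_1(0), M_2(0)\big)\right]
\end{align*}
```

The two right-hand rows sum to the total effect; on the ratio scale the same contrasts are expressed as ratios of risks, which is how the results are reported below. The path-specific variant further splits the joint indirect row into a term in which only M2 is switched and a term in which only M1 is switched, mirroring the verbal definitions above.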
Risk ratios (RRs, with bootstrap 95% confidence intervals) were estimated with binary regression for each pathway of interest: the ‘direct effect’ of socio-economic disadvantage on academic achievement (not acting through self-regulation and cognitive ability in early childhood), and the ‘indirect effects’ of socio-economic disadvantage acting via self-regulation and cognitive ability (separately).
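As a concrete illustration of how such counterfactual risks can be estimated, the sketch below implements the ‘joint mediators’ contrasts by Monte Carlo standardisation from fitted working models. It is not the authors' code: the paper used binary regression in Stata with bootstrap confidence intervals, whereas this sketch uses logistic working models in Python, assumes a data frame with binary columns A (disadvantage), M1 (low self-regulation), M2 (low cognitive ability), Y (low score) and a single baseline confounder C, and, for simplicity, draws the two mediators independently given (A, C).

```python
# Sketch only: Monte Carlo (g-formula) estimation of the 'joint mediators' contrasts.
# Column names, model forms and the single confounder C are illustrative assumptions.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def counterfactual_risk(df, y_fit, m1_fit, m2_fit, a_exposure, a_mediator,
                        n_draws=200, seed=0):
    """Estimate E[Y(a_exposure, M1(a_mediator), M2(a_mediator))]."""
    rng = np.random.default_rng(seed)
    risks = []
    for _ in range(n_draws):
        d = df.copy()
        d["A"] = a_mediator                      # exposure level that generates the mediators
        d["M1"] = rng.binomial(1, m1_fit.predict(d))
        d["M2"] = rng.binomial(1, m2_fit.predict(d))
        d["A"] = a_exposure                      # exposure level acting directly on Y
        risks.append(y_fit.predict(d).mean())
    return float(np.mean(risks))

def joint_decomposition(df):
    m1_fit = smf.glm("M1 ~ A + C", df, family=sm.families.Binomial()).fit()
    m2_fit = smf.glm("M2 ~ A + C", df, family=sm.families.Binomial()).fit()
    y_fit = smf.glm("Y ~ A + M1 + M2 + C", df, family=sm.families.Binomial()).fit()
    r = lambda ae, am: counterfactual_risk(df, y_fit, m1_fit, m2_fit, ae, am)
    rr_direct = r(1, 0) / r(0, 0)          # exposure switched, mediators held at their A=0 values
    rr_joint_indirect = r(1, 1) / r(1, 0)  # mediators switched, exposure held at A=1
    return rr_direct, rr_joint_indirect
```

Bootstrap confidence intervals would be obtained by resampling rows of the data frame and repeating the whole procedure; modelling the two mediators jointly given (A, C), rather than independently, would be a straightforward refinement.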
unmeasured confounding by school characteristics indicated that the association between self-regulation and cognitive ability would have had to have been overestimated by 30% in order for the indirect pathway to have been completely removed. A more likely bias of 5% reduced the joint indirect pathway by a minimal amount. For maths scores the RR for the indirect pathway fell from 1.19 to 1.15. For literacy scores the RR for the indirect pathway fell from 1.16 to 1.12. Similarly, conclusions were unchanged when analyses were repeated with an alternative measure of SED, alternative cut-offs for the self-regulation and cognitive ability measures, and continuous maths and literacy scores. We examined the potential for cognitive ability and self-regulation at the start of school to reduce inequalities in academic achievement at ages 7–9 in the UK and Australia. Children from less advantaged backgrounds (defined by mothers' educational qualifications, e.g. GCSEs grades A*-C in the UK) were around 1.6–1.9 times more likely to be in the lowest quintile of maths and literacy scores than those from more advantaged backgrounds. In terms of absolute inequalities, the prevalence of poor academic achievement in children from less advantaged backgrounds was 12%–15% higher than in those who were living in more advantaged families. About two-thirds of the association between SED and children's academic abilities was direct. Decomposition of the indirect pathway showed that around 80–90% was through cognitive ability rather than self-regulation, in part reflecting the weaker association between self-regulation and both the exposure and the outcome. These findings were consistent when repeated with an alternative measure of SED. It was not possible to separately decompose two mediating pathways while also adjusting for intermediate confounding. However, we were able to account for intermediate confounding for one mediating pathway at a time. Intermediate confounding was captured using a binary latent variable representing a number of characteristics. A two-class measure provided a parsimonious representation of the data, but it remains likely that the degree of confounding has been underestimated. However, sensitivity analyses adjusting for the characteristics which were least well differentiated in the latent measure indicated a similar level of confounding as seen in the main models. Additional sensitivity analyses also implied that the conclusions are unlikely to be an artefact of unmeasured intermediate confounding. In addition to the above limitations, which are specific to the analysis used, our findings are subject to the standard assumptions of sample representativeness, generalisability and measurement error. Around 70% of children who took part in the initial sweeps of the LSAC and MCS had information on the exposure and outcome, and of these around 10% were missing baseline confounders or mediators. However, findings were consistent for both outcomes and between cohorts. Additionally, conclusions were unchanged when analyses were repeated with an alternative measure of SED, when using continuous maths and literacy scores in place of the binary outcomes, and when using an alternative cut-off in the mediating variables. There were differences in the measurement tools used in the MCS and LSAC, which meant that results are not directly comparable. However, we believe the consistency of findings between two different countries, and in early and mid-late childhood, indicates that these findings are generalisable to other high-income settings. Finally, a sensitivity analysis in ALSPAC, which
has objective measures of self-regulation, indicated that the smaller mediating pathway via self-regulation was unlikely to be due to measurement error. Our findings are in agreement with the research of Cunha, Heckman and colleagues, which found that cognitive ability was more important than “non-cognitive” skills for academic attainment upon leaving school. A number of studies examining self-regulation or aspects of cognitive ability as mediators between SED and academic achievement in childhood indicate that both play a part. However, to our knowledge, ours is the first study to decompose and compare their contributions to socio-economic inequalities in childhood academic achievement. Our results suggest that reducing social inequality remains an important strategy for narrowing inequalities in academic achievement and preventing the inter-generational transfer of social disadvantage. In the medium and shorter term, interventions to support cognitive ability hold potential for reducing the socio-economic gap in academic achievement. Health, early care and education systems already reach almost the entire population and have a duty and a commitment to act now. Early cognitive ability is routinely monitored in Australia and the UK and is an integral focus of the national early years learning frameworks. The impact of these universal services on school readiness and academic achievement should be monitored into the future. Pro-equity progressive universal approaches are likely to be most successful for the improvement of academic achievement and inequality reduction, because some families will require more support than others. However, identifying those who may benefit most from additional support remains a challenge.
Cognitive ability and “non-cognitive” attributes (such as self-regulation) are the focus of many early years’ interventions.Despite this, little research has compared the contributions of early cognitive and self-regulation abilities as separate pathways to inequalities in academic achievement.Analyses were adjusted for baseline and intermediate confounding.Children from less advantaged families were up to twice as likely to be in the lowest quintile of maths and literacy scores.Around two-thirds of this elevated risk was ‘direct’ and the majority of the remainder was mediated by early cognitive ability and not self-regulation.Similar patterns were observed for both outcomes and in both cohorts.Policies to alleviate social inequality (e.g.child poverty reduction) remain important for closing the academic achievement gap.Early interventions to improve cognitive ability (rather than self-regulation) also hold potential for reducing inequalities in children's academic outcomes.
We propose a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables. Inspired by dropout, a popular tool for regularization and model ensembling, we assign sparse priors to the weights in deep neural networks in order to achieve automatic “dropout” and avoid over-fitting. By alternately sampling from the posterior distribution through stochastic gradient Markov chain Monte Carlo and optimizing the latent variables via stochastic approximation, the trajectory of the target weights is proved to converge to the true posterior distribution conditioned on the optimal latent variables. This ensures a stronger regularization of the over-fitted parameter space and more accurate uncertainty quantification of the decisive variables. Simulations from large-p-small-n regressions showcase the robustness of this method when applied to models with latent variables. Additionally, its application to convolutional neural networks leads to state-of-the-art performance on the MNIST and Fashion MNIST datasets and improved resistance to adversarial attacks.
a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables
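The alternating scheme sketched in this abstract — stochastic gradient MCMC steps on the weights interleaved with stochastic-approximation updates of latent prior variables — can be illustrated on a toy problem. The sketch below is not the authors' implementation: it replaces the deep network with Bayesian logistic regression, uses a spike-and-slab Gaussian prior as one concrete choice of sparse prior, and all hyper-parameter values are assumptions chosen for illustration.

```python
# Toy sketch of SGLD (stochastic gradient Langevin dynamics) alternating with a
# stochastic-approximation update of a latent prior variable.  Not the authors'
# code: the model, prior and hyper-parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, d = 1000, 50
X = rng.normal(size=(N, d))
true_w = np.zeros(d); true_w[:5] = 2.0                 # sparse ground truth
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_w)))

sigma0, sigma1 = 0.05, 1.0                             # spike / slab standard deviations
pi = 0.5                                               # latent inclusion probability (updated by SA)
w = np.zeros(d)
step, batch = 1e-3, 64

def grad_loglik(w, Xb, yb):
    p = 1 / (1 + np.exp(-Xb @ w))
    return Xb.T @ (yb - p)                             # gradient of the Bernoulli log-likelihood

for t in range(1, 5001):
    idx = rng.choice(N, batch, replace=False)
    # Responsibilities of the slab component given the current weights and pi.
    dens1 = pi * np.exp(-0.5 * (w / sigma1) ** 2) / sigma1
    dens0 = (1 - pi) * np.exp(-0.5 * (w / sigma0) ** 2) / sigma0
    r = dens1 / (dens1 + dens0)
    # Exact gradient of the log mixture prior, expressed via the responsibilities.
    grad_prior = -w * (r / sigma1 ** 2 + (1 - r) / sigma0 ** 2)
    # SGLD step: rescaled mini-batch likelihood gradient + prior gradient + Gaussian noise.
    grad = (N / batch) * grad_loglik(w, X[idx], y[idx]) + grad_prior
    w += 0.5 * step * grad + np.sqrt(step) * rng.normal(size=d)
    # Stochastic-approximation update of the latent mixing weight.
    rho = 1.0 / t
    pi = (1 - rho) * pi + rho * r.mean()

print("estimated inclusion probability:", round(pi, 3))
print("largest |weights|:", np.round(np.sort(np.abs(w))[-5:], 2))
```

In this toy run the stochastic-approximation update drives the inclusion probability towards the true sparsity level, while the Langevin noise keeps the weight trajectory exploring the posterior rather than collapsing to a point estimate — the same division of labour between sampling and latent-variable optimization that the abstract describes.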
The CdS layer in a CdS/CdTe solar cell absorbs a portion of the solar spectrum from 500 nm to lower wavelengths due to its relatively low bandgap. It is necessary to use a thin CdS layer to increase the number of photons reaching the CdTe absorber. However, this has the effect of degrading the open circuit voltage (VOC) and fill factor (FF) of the solar cell. VOC and FF degradation is believed to be caused by the creation of weak diodes associated with regions in which the CdS is too thin or with pinholes in the CdS layer. Highly resistive transparent (HRT) layers, also referred to as buffer layers, are deposited between the transparent conducting oxide (TCO) and the CdS layer to limit the effect of these non-uniformities and maintain the diode quality, thereby increasing the efficiency of the solar cell. The precise mechanisms by which the HRT layers function are not fully understood and further investigation is required. In this study we explore the effect of temperature on the growth of the ZnO HRT layer and its effect on device performance. ZnO has been shown to be an effective buffer layer in other chalcogenide solar cells and is widely used in copper indium selenide and copper indium gallium selenide devices. Thin ZnO films were deposited by radio-frequency (RF) magnetron sputtering. Soda lime glass (SLG) and NSG TEC™ C10 glass from Pilkington were used as superstrates. The glass superstrates were cleaned using a 10% isopropanol solution in deionized water in an ultrasonic bath at 60 °C for 60 min. Thin films were deposited using an Orion 8 HV magnetron sputtering system equipped with an AJA 600 series RF power supply. The target diameter was 3″, and the ZnO target purity was 99.99%. The glass superstrates were rotated at 10 rpm during deposition to enhance the uniformity of the films. The sputtering process was carried out at a constant power density of 3.5 W·cm⁻² and at a pressure of 133.3 Pa using pure Ar as the working gas. The temperature of the substrate was varied from 20 °C to 400 °C. The film thickness was fixed at 200 nm on SLG and 150 nm on TCO-coated superstrates. The electrical properties of the ZnO films were investigated using Hall effect measurements by the Van der Pauw method with an Ecopia HMS 3000. The optical properties were investigated by UV-VIS-NIR spectrophotometry with a Cary Varian 5000. The structural properties of the films were analysed by X-ray diffraction (XRD) with a Bruker D2 Phaser desktop X-ray diffractometer using a Cu-Kα X-ray gun. The XRD measurements were obtained using 15 rpm rotation, a 1 mm beam slit and a 3 mm anti-scatter plate height. Devices were subsequently fabricated on ZnO-coated superstrates by the PV group of Colorado State University using the Advanced Research Deposition System, an in-line system which has been described previously. The process included the deposition of CdS and CdTe, a CdCl2 activation treatment and a Cu/Ni-based back contact. CdS was sublimated at a substrate temperature of 420 °C whilst CdTe was sublimated at a substrate temperature of 360 °C. The CdCl2 treatment was carried out for 3 min at a substrate temperature of 388 °C. The thicknesses of the CdS and CdTe films were maintained at ~120 nm and ~2.3–2.5 μm, respectively. Devices were characterized using current density-voltage (J-V) characteristics, and cross-section images were obtained using transmission electron microscopy (TEM). Samples for TEM were prepared by focused ion beam milling using a dual-beam FEI Nova 600 Nanolab. A standard in situ lift-out method was used to prepare cross-sectional samples. An electron beam assisted
platinum over-layer was deposited onto the sample surface above the area to be analysed, followed by an ion-assisted layer to define the surface and homogenize the final thinning of the samples down to 100 nm. TEM analysis was carried out using a Tecnai F20 operating at 200 kV to investigate the detailed microstructure of the cell cross sections. Normal images were taken using the bright field detector and, for elemental contrast images, the high angle annular dark field detector was used. Fig. 1 shows the cross-section images of devices with a focus on the TCO/ZnO interface. The grain size of the ZnO layer increases at higher temperatures, and at 200 °C and 300 °C the grains expand to the full height of the layer, with an average width of between 50 nm and 100 nm. The films deposited at room temperature contain smaller grains. The elemental contrast image of the cross section reveals the creation of small voids in the ZnO film deposited at 20 °C. These small voids appear as black spots and are concentrated at the interface with the TCO. They are possibly caused by stress build-up in the ZnO near-interface region due to the large lattice mismatch with the fluorine-doped tin oxide. This phenomenon was not observed at higher ZnO deposition temperatures, indicating that partial relaxation of the stress occurs. XRD analysis of ZnO layers grown on the TCO-coated superstrates was performed to evaluate the crystallographic growth at the different deposition temperatures. Fig. 3 shows XRD patterns over the 2Θ range between 30° and 70°. An XRD profile of a bare substrate was added for comparison to identify the peaks associated with ZnO. Four main XRD peaks associated with ZnO were identified, two of which were the most pronounced. The intensity of all the peaks increases when the deposition temperature is raised, suggesting that higher temperatures assist the crystallographic phases to form. The peak position (2Θ) and full width at half maximum (FWHM) of each peak were extracted by Gaussian fitting. The position of all peaks of the various phases is shifted to
In this study ZnO HRT layers were deposited at different substrate temperatures on soda lime glass and on fluorine-doped tin oxide-coated glass to enable structural, optical and electrical characterization.The ZnO thickness was limited to 150 nm, whilst the substrate temperature was varied from 20 °C to 400 °C during deposition.
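Since the resistivity values quoted below come from Van der Pauw measurements, it may help to recall the standard relations involved. This is a general textbook sketch, not a formula reproduced from the paper, with R_A and R_B the two orthogonal four-terminal resistances, R_s the sheet resistance and t the film thickness.

```latex
% Standard Van der Pauw relations (textbook sketch, not taken from the paper).
\exp\!\left(-\pi \frac{R_A}{R_s}\right) + \exp\!\left(-\pi \frac{R_B}{R_s}\right) = 1,
\qquad \rho = R_s\, t .
```

As a rough order-of-magnitude check, a 150 nm film with the reported resistivity of 1.87 × 10⁻² Ω·cm would correspond to a sheet resistance of roughly ρ/t ≈ 1.2 × 10³ Ω per square.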
slightly lower 2Θ angles in comparison with the reference peaks. This can be attributed to the influence of the fluorine-doped tin oxide (FTO)-coated superstrate on the ZnO growth. The FTO crystal structure mismatch with ZnO may force the film to grow in a different way than on bare glass. The ZnO films deposited at room temperature have 2Θ peak positions close to the ICDD reference values, whilst their positions move further from the reference peaks at higher deposition temperatures. Fig. 2 shows the stress build-up at the TCO/ZnO interface for a ZnO film deposited at 20 °C. Use of a higher temperature assists the ZnO structure to adjust to the underlying semiconductor, shifting the XRD peaks to lower 2Θ angles. The FWHM of the peaks is reduced for films deposited at higher temperature, which is associated with improved crystal growth. The HRT layer is believed to act as a resistive barrier to shunts through the device. As a result, the resistivity is expected to be a key parameter for ZnO used as an HRT layer. Fig. 4 shows that a deposition temperature of 100 °C produced the lowest resistivity of 1.87 × 10⁻² Ω·cm, whilst a temperature of 300 °C yielded ZnO films with the highest resistivity of 2 × 10⁻¹ Ω·cm, an order of magnitude difference. The resistivity reported here is relatively low compared to those in other studies, where the optimal resistivity was found to be from 10³ Ω·cm and higher. Oxygen is generally used to increase the resistivity of ZnO films; however, in our study oxygen was not added to the sputtering working gas. The transparency of the ZnO films is very important since it will affect the current output of the device. Fig. 5 shows the transmission curves of the superstrate glass coated with ZnO deposited at the different temperatures. The mean transmittance calculated over the wavelength range from 400 nm to 950 nm was ~80% for all samples, with the exception of the room-temperature sample, which has marginally lower transmission. A total of 9 devices were fabricated on each superstrate. Current density-voltage characteristics of each device were obtained and the mean J-V parameters are summarized in Fig.
6.The mean short circuit current ranges from 21.2 mA/cm2 to 21.8 mA/cm2.The small difference in current density is expected given the similar transmittance of the films.The mean FF increases constantly with increased ZnO deposition temperature from 65% to 69%, with the exception of the 200 °C sample.This increase in FF follows the improvement in crystal structure and the increase in the resistivity.The FF may improve due to a combination of an improved TCO/ZnO interface and improved growth of the ZnO crystal structure and an increasing resistivity of the ZnO buffer layer.However, there is no clear trend between the film resistivity and any other device parameter.The VOC steadily degrades as the ZnO deposition temperature increases, reducing by 23 mV from 798 mV to 775 mV.Overall, the mean device efficiency improves marginally from 11.2% to 11.5%.It is likely that a more pronounced trend on the effects of a ZnO HRT layer on a CdTe device will be observed when the CdS layer thickness is thinned below 100 nm.The relatively thick CdS layer used in this study partially screens the ZnO effect .The impact of the deposition temperature on the growth of ZnO HRT layers and the effect on the performance of CdTe thin film solar cells has been investigated.TEM cross-section images and XRD analysis show enhanced grain growth of the ZnO films at deposition temperatures of 100 °C and above.TEM images reveal small voids in the ZnO layer located at the ZnO/FTO interface when the ZnO layer is deposited at temperatures below 200 °C.The XRD peak position shifts towards smaller 2Θ angles when films are deposited at higher temperatures, however there is no correlation between deposition temperature and the degree of peak shift.The occurrence of voids and the shifts in the XRD peaks are linked to the relaxation of the TCO/ZnO interface stress and the stress in the ZnO films.Devices were fabricated with ZnO HRT layers deposited at different deposition temperatures and their performance characterized using J-V measurements.The FF of the devices increases with increased ZnO deposition temperature.This is associated with improvements in the structural quality of the ZnO, the interface quality at the FTO/ZnO junction and the increased resistivity of the ZnO films.The VOC was found to reduce with increasing ZnO deposition temperature.Further work is required to understand this effect.The device efficiency is higher when the ZnO HRT layer is deposited at higher temperatures.A further improvement should occur if the CdS layer thickness is reduced.
The use of a highly resistive transparent (HRT) layer has been shown to increase the efficiency of thin film CdTe heterostructure solar cells incorporating a thin CdS layer. The performance of equivalent films was tested within CdS/CdTe solar cells. X-ray diffraction patterns and transmission electron microscopy of the cross-sectional microstructure of completed devices showed that the growth of the ZnO is improved when the films are deposited at higher temperatures. Film resistivity was lowest at 100 °C and highest at 400 °C, ranging from 10⁻² Ω·cm to 0.33 Ω·cm. The ZnO deposited at high temperature exhibits improved micro-structural growth and yields an improvement in device efficiency.
together were better than BA alone for callus regenerability when the medium B1I1C1 was compared with B2C1. Additional ingredients in the regeneration medium have a significant impact on regenerability. l-Asparagine and l-proline fostered a significant improvement in the number of shoots on B2C1AP as compared to that on B2C1. PVP at 1 g/L, when supplemented in B2C1P, markedly increased shoot production as compared to B2C1 without PVP. These results are in agreement with earlier reports. A higher concentration of CuSO4·5H2O at 1 μM drastically enhanced shoot production on B2C1 as compared to that on B2, confirming the beneficial effect of a high Cu level on sorghum tissue culture. Overall, B1I1C1 was the best regeneration medium for SA281. In the present system, regeneration rates up to 100% have been routinely obtained from the three tested lines, much higher than the 20–50% recorded earlier for cv. X4004. Recently, callus regenerability was improved to 22.8 shoots per IE from sorghum line IS 3566. By comparison, the regenerability was improved markedly in our experiments; over 60 regenerants were produced from a single IE of SA281 within eight weeks' incubation. Callus grew rapidly across the three tested lines within two weeks of incubation, with a significant increase in callus fresh weight. However, variation in callus growth was observed among the tested lines; SA281 callus grew faster than the other two. The most rapid callus growth occurred between one and two weeks, as demonstrated by the weekly callus growth ratio. Both regeneration rate and regenerability peaked at two weeks across these three lines. Unlike rice embryogenic callus, which can maintain regenerability over 40 weeks, sorghum embryogenic callus does not maintain a useful level of regenerability over four weeks. The reason for this phenomenon is, as yet, unknown. Our data from the three tested lines unequivocally demonstrated that the optimal callus age was two weeks. When callus was four weeks old, the regeneration rate and regenerability dropped markedly. For instance, SA281 could produce 59 shoots per IE when callus was two weeks old; however, it only generated 3 shoots per IE when callus was four weeks old. Therefore, callus age plays a critical role in regenerability, which provides important information for establishing a robust tissue culture system for sorghum. To efficiently regenerate sorghum in vitro cultures, it is imperative to introduce a management strategy to minimize the effect of the short duration of callus regenerability. It is necessary to continuously initiate fresh IEs in order to maintain a highly efficient tissue culture system. Sorghum lines such as Tx430 and SA281 can grow in a temperature-controlled glasshouse throughout the year, facilitating the harvest of fresh IEs at any time. Therefore, vigorous embryogenic calli can be generated consistently, allowing the optimized tissue culture system to be utilized for transformation irrespective of growing seasons.
Sorghum tissue culture has been challenged by three predominant obstacles for decades, namely toxic pigments (phenolics), low regeneration frequencies and the short duration of callus regenerability. Here, we report a robust tissue culture system for sorghum, which has minimized these major impediments. To optimize media, different concentrations of various plant growth regulators, such as 2,4-dichlorophenoxyacetic acid (2,4-D), N6-benzyladenine (BA), indole-3-acetic acid (IAA), indole-3-butyric acid (IBA) and α-naphthaleneacetic acid (NAA), were evaluated. Additional ingredients, including KH2PO4, CuSO4·5H2O, l-asparagine, l-proline and polyvinylpyrrolidone (PVP), were also assessed. Results showed that callus age had a conspicuous effect on its growth and regenerability, with the weekly callus growth ratio and regenerability peaking at two weeks after induction. A callus induction rate of up to 100% was achieved in inbred line Tx430, whereas regeneration rates up to 100% were obtained from SA281 and 91419R. This highly efficient system has been utilized for sorghum transformation for several years and has proven to be reliable and reproducible.
antitumor protein should be expressed selectively at the tumor site, as demonstrated in this study. Since bacteria administered intravenously initially localize to reticuloendothelial organs,8,9 the protein drugs should be expressed only when the bacteria have cleared reticuloendothelial organs and accumulated exclusively in the targeted tumor tissue; this normally occurs 3 days after administration. We have shown that controlled expression of cytotoxic proteins at 3 dpi does not cause any notable systemic toxicity.9 Second, the antitumor protein must be released. Fortunately, a significant fraction of the L-ASNase of E. coli origin was secreted from Salmonellae. E. coli L-ASNase is a periplasmic enzyme that contains a typical secretory signal peptide of 22 residues at its amino terminus. When the enzyme is secreted to the periplasm, the signal peptide is removed to yield a mature protein with an N-terminal leucine residue.35 Proteins secreted into the periplasm tend to leak out through the bacterial outer membrane. Lastly, but most importantly, antitumor proteins should act on the cancer cell surface or the tumor microenvironment, rather than on intracellular targets. In the latter case, the barely permeable cell membrane remains a formidable barrier to efficacy. A widely used strategy uses cell-penetrating peptides (CPPs) to improve intracellular uptake. Successful intracellular delivery of antitumor proteins using this strategy requires optimization of the CPP for individual anticancer proteins.12 Given these considerations, L-ASNase is an anticancer cargo protein that is appropriate for delivery to the tumor microenvironment by tumor-targeting bacteria. We demonstrated a linear relationship between in vitro cytotoxicity and in vivo tumor regression using mutant L-ASNases with varying levels of activity. The linear relationship revealed the dose-dependent effect of L-ASNase in vivo and was comparable with that determined using cultured cells in vitro. If bacterial therapy with ASNase-expressing Salmonellae were to be employed after surgical elimination of the tumor, prognosis could be assessed based on the in vitro sensitivity of the cancer tissue to L-ASNase. In practice, bacterial therapy with Salmonella expressing L-ASNase would be useful to treat cancer patients with multiple organ metastases, since bacteria would be capable of targeting tumors of any kind with sizes as small as 100 mm3.34 DNA encoding ASNase II was amplified from the genomic DNA of E. coli B strain40 using the primers L-ASN1 and L-ASN2. The 1,047 bp DNA fragment of asnB was inserted into GlmS+p32 using the EcoRI and XbaI restriction enzyme sites to generate pASN. The mutant L-ASNases were generated by site-directed mutagenesis using pASN and the following primers: P1, 5′-GACTCCGCAACCAAATCTGGCTACACAGTGGGTAAAG-3′ for N24G; and P2, 5′-ACGTCTATGAGCGCAGGCGGTCCATTCAACCTGT-3′ for D124G. All constructs were verified by automated DNA sequencing.
Bacteria can be engineered to deliver anticancer proteins to tumors via a controlled expression system that maximizes the concentration of the therapeutic agent in the tumor.L-asparaginase (L-ASNase), which primarily converts asparagine to aspartate, is an anticancer protein used to treat acute lymphoblastic leukemia.In this study, Salmonellae were engineered to express L-ASNase selectively within tumor tissues using the inducible araBAD promoter system of Escherichia coli.Antitumor efficacy of the engineered bacteria was demonstrated in vivo in solid malignancies.This result demonstrates the merit of bacteria as cancer drug delivery vehicles to administer cancer-starving proteins such as L-ASNase to be effective selectively within the microenvironment of cancer tissue.
Foodstuff, feed and agricultural samples were randomly collected from different sources, including local and imported materials from the Syrian local market, to screen them for the presence of GMOs using PCR-, nested PCR- and multiplex PCR-based techniques with specific primers for the foreign DNA elements most commonly used in genetic transformation procedures, i.e., the 35S promoter, T-nos, epsps, the cryIA gene and the nptII gene. Thirty-seven samples were randomly collected from different sources, including local and imported materials from the Syrian local market, and can be categorized as follows: maize, barley and soybean, which are used mainly as animal feed; fresh food samples (tomato, cucumber) used directly for human consumption; and raw materials (soybean seeds, sunflower seeds, popcorn seeds, rice) to be used for oil processing and other applications. Several plasmids (pBI-121, TOP10 PGII35S CRYA) were used as positive controls. For each sample, 0.5 g of ground leaf tissue from a bulk of 10 plants was suspended in 2 ml of extraction buffer. The suspension was mixed well, incubated at 60 °C for 30 min, followed by chloroform-isoamyl alcohol extraction and precipitation with 0.67 vol. isopropanol at −20 °C. The pellet formed after centrifugation at low speed for 5 min was then washed with 76% ethanol and 10 mM NH4OAc, and the DNA was suspended in TE buffer. Plasmid DNA from positive control samples was extracted by alkaline lysis as described by Sambrook et al. The quality and quantity of the DNA extracted from samples were determined using a spectrophotometer at 260 nm and 280 nm absorbance. The DNA purity was determined based on the A260/A280 ratio. PCR amplification was carried out in a PCR mix of 25 µl. The final concentrations in each PCR were as follows: 1× PCR buffer (from a 10× stock); 100 ng of genomic DNA; 0.4 pm of each primer; 0.32 mM dNTP mix; 2 mM MgCl2; and 0.5 unit/reaction of Taq DNA polymerase. The oligonucleotide primers used in this study are listed in Table 1. All primers were synthesized by Eurofins MWG GmbH and obtained in a lyophilized state. All primers were dissolved in TE buffer before use to obtain a final concentration of 10 pmol/μl. For nested PCR, the first reaction was performed using the primer pair GMO9/GMO7; 1 µl of the PCR product was then used as a template for the second reaction with the primer pair GMO7/GMO8, with the same master mix as described above. The PCR program was as follows: initial denaturation; denaturation; annealing at a temperature adjusted to each primer pair for 1 min; extension at 72 °C for 1 min; 35 cycles; and a final extension at 72 °C. Amplicons were analyzed by 2% agarose gel electrophoresis in 1× TBE and visualized under a UV transilluminator using SYBR® Safe DNA gel stain, which exhibits very low mutagenicity compared to ethidium bromide and is not classified as hazardous waste or as a pollutant under U.S.
federal regulations. The data obtained were analyzed and interpreted. Most of the DNA extracted by the CTAB method in this study showed a high molecular weight and high purity, with A260/A280 ratios ranging from 1.8 to 2.0. The purity of the DNA extracted from samples was confirmed by PCR amplification using soybean-specific and maize-specific primers for samples derived from soybean and maize, respectively. These tests also indicated whether the other tested samples contained either soybean or maize materials, and whether there was any contamination between the DNA samples tested. The primer pair GM03/GM04, which is specific for the single-copy lectin gene Le1, was used. On the other hand, the primer pair zein3/zein4, which is specific for the native maize zein gene and yields a PCR product of 277 bp, was used. Using the GM03/GM04 primers, all tested soybean samples gave positive results, while the result was negative in the other samples tested. These results revealed that the DNA was successfully isolated, that the isolated DNA could be amplified by PCR with these specific primers without inhibition, and that there was no contamination between the DNA samples tested. Using the zein3/zein4 primers, all tested maize samples gave positive results, whereas the result was negative in the other samples tested. These results revealed that the DNA was successfully isolated, that the isolated DNA could be amplified by PCR without inhibition, and that there was no contamination between the DNA samples tested. Screening methods using the 35S promoter and NOS terminator sequences evidently are the most favorable candidates for broad method applicability. Most of the currently available GMOs worldwide contain one of three genetic elements: the cauliflower mosaic virus (CaMV) 35S promoter, the nopaline synthase (NOS) terminator or the kanamycin-resistance marker gene. After PCR amplification of the lectin and zein genes, all the DNA stocks were subjected to PCR amplification with primers specific to the 35S promoter originating from CaMV. The primer pair p35S-cf3 and p35S-cf4 was used to detect one copy of this promoter. Using this primer pair, a visible band at 123 bp was found in the positive control, the soybean samples and the maize samples, which means that these soybean and maize samples are genetically modified and contain this promoter, whereas no band at the expected size was seen in the other samples tested, which means that these samples do not contain this promoter. The plasmid was used as a positive control to ensure that the PCR conditions were suitable for these primers; the plasmid gave a fragment of the expected size. The primer pair HA-nos118-f and HA-nos118-r was used to detect this terminator, where a visible band
Food and feed samples were randomly collected from different sources, including local and imported materials from the Syrian local market. These included maize, barley, soybean, fresh food samples and raw materials. GMO detection was conducted by PCR- and nested PCR-based techniques using specific primers for the foreign DNA elements most commonly used in genetic transformation procedures, i.e., the 35S promoter, T-nos, epsps, the cryIA(b) gene and the nptII gene. The results revealed, for the first time in Syria, the presence of GM foods and feeds carrying the glyphosate-resistance trait, the P35S promoter and the NOS terminator in the imported soybean samples at high frequency (5 out of the 6 imported soybean samples).
at 118 bp was found in the positive control, in the soybean samples tested and in the maize samples tested, which means that these samples are genetically modified, whereas no visible band at the expected size was seen in the other samples, which means that these samples are not genetically modified with this terminator. After the initial screening steps, specific detection was carried out to determine the structural genes of the introduced traits. The two main traits of interest mostly used in the construction of transgenic plants are herbicide tolerance and insect resistance. Herbicide tolerance is the leading trait in commercialized GM plants, with 23 lines having been approved for food and/or food and feed use worldwide. To detect this gene, the primers GMO5 and GMO9 were used; a visible band at 447 bp was detected in the soybean samples tested, which means that these samples are genetically modified with this gene, while no visible band at the expected size was detected in the other samples tested, which means that these samples are not modified with this gene. The most frequently used transgene is nptII, originating from the Escherichia coli transposon Tn5. This gene confers resistance to kanamycin. To detect this gene, primers were used; a visible band at 254 bp was found only in the positive control plasmid, which means that these samples are not genetically modified with this selectable marker gene. There are several strains of Bt, each with differing Cry proteins. Scientists have identified more than 170 Cry proteins. Most of the Bt maize hybrids, targeted against the European corn borer, produce only the Cry1A protein. The cry1Ab delta-endotoxin gene codes for the production of a naturally occurring insecticidal protein. This gene was modified to optimize and maximize the expression of the δ-endotoxin CRYIA protein in plants. The 11bt1/11bt2 primers, which yield a PCR product of 200 bp, were used to detect this gene; a visible band at 200 bp was detected only in the positive control plasmid, which means that these samples are not genetically modified with this structural gene. In addition, Cry1Ac delta-endotoxin, derived from B.
thuringiensis subsp. kurstaki strain HD-73, encodes resistance to the European corn borer, a major insect pest of maize. The primer pair Cry1Ac 699/Cry1Ac 1440, which yields a PCR product of 742 bp, was used to detect this gene, while the plasmid was used as a positive control. The primers gave a positive result only in the positive control plasmid, while negative results were obtained in the other samples tested, which means that there is no modification with this gene. Confirmation/verification of the identity of the amplicon is necessary to ensure that the amplified DNA really corresponds to the chosen target sequence and is not a by-product of unspecific binding of the primers. Several methods are available for this purpose. The first uses gel electrophoresis, but there is a risk that an artifact of the same size as the target sequence may have been amplified. Therefore, the PCR products should additionally be verified by their restriction endonuclease profile. The second method is to subject the PCR product to a second round of PCR in a technique called nested PCR. Here, two different sets of primers – an outer and an inner pair – are used within the target region in two consecutive rounds of PCR amplification. This strategy substantially reduces the problem of unspecific amplification, as the probability of the inner pair of primers finding complementary sequences within the non-specific amplification products of the outer pair is extremely low. The primer pairs GMO9/GMO5 and GMO8/GMO7 were designed for the Roundup Ready soybean transgene for nested PCR. The external primers GMO9/GMO5 are complementary to the cp4 epsps gene and the CaMV 35S promoter; amplification of DNA with these primers resulted in an amplicon of 447 bp. The internal primers GMO8/GMO7 are complementary to the petunia epsps gene and to the CaMV 35S promoter; amplification of DNA with these internal primers resulted in a fragment of 169 bp. This result confirms that the amplified DNA really corresponds to the epsps gene and is not a by-product of unspecific binding of the primers. Fig. 9 summarizes the methodologies and data presented in this data article.
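To make the amplicon-verification logic concrete, the sketch below implements a naive in-silico PCR check in plain Python: it locates a forward primer and the reverse complement of a reverse primer in a template and reports the expected amplicon length, mimicking the outer/inner (nested) primer logic. All sequences are short, made-up placeholders, not the actual GMO9/GMO5 or GMO8/GMO7 primers.

```python
# Naive in-silico PCR: find primer binding sites and report the expected amplicon size.
# All sequences below are made-up placeholders, not the real primers of this study.

def reverse_complement(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def amplicon_length(template: str, fwd_primer: str, rev_primer: str):
    """Return the expected amplicon length, or None if either primer does not bind."""
    start = template.find(fwd_primer)
    end = template.find(reverse_complement(rev_primer))
    if start == -1 or end == -1 or end < start:
        return None
    return end + len(rev_primer) - start

template = "AAATTTGGGCCCATGCGTACGTTAGCCGGATCCGTACGATCGTTTAAACCCGGG"
outer = ("AAATTTGGG", "CCCGGGTTT")   # hypothetical outer (external) pair
inner = ("ATGCGTACG", "CGATCGTAC")   # hypothetical inner (nested) pair

outer_len = amplicon_length(template, *outer)
# Nested PCR: the inner pair is expected to bind only within the outer amplicon,
# so the inner product should be shorter than the outer one.
inner_len = amplicon_length(template, *inner)
print(outer_len, inner_len)
```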
In contrast, tests showed negative results for the local samples. Tests also revealed the existence of GMOs in two imported maize samples, detecting the presence of the 35S promoter and NOS terminator. Nested PCR results using two sets of primers confirmed our data. The methods applied in this data article are based on DNA analysis by the polymerase chain reaction (PCR). This technique is specific, practical, reproducible and sensitive enough to detect as little as 0.1% GMO in food and/or feedstuffs. Furthermore, all of the techniques mentioned are economical and can be applied in Syria and other developing countries. For all these reasons, the DNA-based analysis methods were chosen and preferred over protein-based analysis.
assessed for infeasible ATP production. Using boundary conditions that mimic the in vitro medium, we optimized flux through an ATP demand reaction, and found that all GENREs for all strains generated between 0.5 and 1.9 mmol ATP/(gDW·h). Normalizing this value by the uptake of lactose, which was 0.22 mmol/(gDW·h) for all strains, gives a yield range of 2.27–8.64 units of ATP per unit of lactose, which is within reason for anaerobic organisms. Although erroneous energy-generating cycles may be present in the GENREs presented here, the realistic ATP yield determined for all GENREs suggests they are unlikely to influence simulation results in this media condition. Flux balance analysis was performed using version 0.8.1 of the cobrapy package and the Gurobi solver v7.0. Ensembles of GENREs were analyzed using cobrapy methods through the Medusa package. Media composition was determined by calculating exact concentrations for defined supplements, and a concentration of 1 mM was assumed to allow an uptake rate of 1 mmol/(gDW·h). For media components with approximately known concentrations in LB, the uptake rate was set to 5 mmol/(gDW·h), based on a concentration of around 5 mM for most amino acids in LB. For components detected via metabolomics that were not amino acids or supplemented, and therefore likely originated from the yeast extract in LB, the maximal uptake rate was set to 0.1 mmol/(gDW·h). For in silico media supplements and knockouts, a metabolite was considered essential if removal of the metabolite from the in silico medium caused the flux through biomass to fall below 1E-5/hour (the choice of this threshold does not affect these results). All raw and processed data and all code used in this project, except the software used to process raw NMR spectra, are available at https://github.com/gregmedlock/asf_interactions. Where possible, Jupyter notebooks are used for reproducibility and to display results alongside the corresponding analyses. The raw NMR spectra have been deposited in MetaboLights under MTBLS705.
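A minimal sketch of this ATP-yield sanity check using the cobrapy API is shown below; the reaction identifiers ("ATPM" for the ATP demand reaction, "EX_lcts_e" for lactose exchange) and the model file name are assumptions that would need to match the identifiers actually used in each GENRE.

```python
# Minimal sketch of the ATP-yield sanity check described above, using cobrapy.
# Reaction IDs ("ATPM", "EX_lcts_e") and the model file name are assumptions;
# they must be adapted to the identifiers used in each GENRE.
import cobra

model = cobra.io.read_sbml_model("strain_genre.xml")  # hypothetical file name

# Constrain lactose uptake to the value quoted in the text (0.22 mmol/gDW/h).
model.reactions.get_by_id("EX_lcts_e").lower_bound = -0.22

# Optimize flux through the ATP demand (maintenance) reaction.
model.objective = "ATPM"
solution = model.optimize()

atp_flux = solution.objective_value    # mmol ATP/gDW/h
atp_yield = atp_flux / 0.22            # mol ATP per mol lactose
print(f"ATP flux: {atp_flux:.2f}, yield: {atp_yield:.2f} ATP per lactose")
```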
The diversity and number of species present within microbial communities create the potential for a multitude of interspecies metabolic interactions.Here, we develop, apply, and experimentally test a framework for inferring metabolic mechanisms associated with interspecies interactions.We perform pairwise growth and metabolome profiling of co-cultures of strains from a model mouse microbiota.We then apply our framework to dissect emergent metabolic behaviors that occur in co-culture.Based on one of the inferences from this framework, we identify and interrogate an amino acid cross-feeding interaction and validate that the proposed interaction leads to a growth benefit in vitro.Our results reveal the type and extent of emergent metabolic behavior in microbial communities composed of gut microbes.We focus on growth-modulating interactions, but the framework can be applied to interspecies interactions that modulate any phenotype of interest within microbial communities.Ecological interactions determine the behavior of microbial communities, but pinpointing mechanisms governing these interactions is difficult.Here, we develop a framework for inferring interspecies metabolic interactions in co-culture based on monoculture behavior and co-culture growth outcomes.We profile the growth and metabolism of monocultures and co-cultures of bacteria from a defined mouse microbiota and apply our framework to these data.Based on these inferences, we investigate an amino acid cross-feeding interaction that may contribute to a commensal interaction.
In Germany, Industry 4.0 is the term for the next industrial revolution .In the United States, General Electric is promoting a similar idea under the name of the Industrial Internet.In China, the central government has established the Made in China 2025 initiative for future industry.All these ambitious plans indicate the beginning of the Fourth Industrial Revolution—a revolution that will merge real things with the virtual world for greater efficiency.Three other industrial revolutions have occurred in human history.The First Industrial Revolution employed mechanical production facilities; it started in the second half of the 18th century and lasted throughout the entire 19th century.Mass production using electrification led to the Second Industrial Revolution, which started around the end of the 19th century.The “digital revolution” that occurred in the 1970s can be defined as the Third Industrial Revolution, as information technology began to be used for the automation of production processes.Unlike all previous revolutions, which only released human physical power for linear changes, the Fourth Industrial Revolution will free the human thinking power that is “intelligence” and will create nonlinear changes beyond what we can imagine.The core of Industry 4.0 is intelligent manufacturing, which can be considered as the cyber-physical system within the manufacturing environment, in order to achieve full automation of both materials and information.The CPS is an Internet environment in which all users, hardware, and software are integrated, regardless of time and location, in order to adapt to different working conditions through good coordination and enhanced ability .Examples of CPSs include smart grids, automated vehicle systems, medical monitoring, and intelligent manufacturing .The differences between an embedded system and a CPS are as follows: An embedded system focuses on developing algorithms, while a CPS focuses on the connection and coordination between physical elements and computational software .Over the past decades, consumable products have become increasingly advanced and intelligent, making manufacturing systems increasingly complex.From an academic point of view, the manufacturing industry is a nonlinear multiscale complex system.No single solution exists for such a complex system.Due to the human way of linear thinking, nearly all the theories and methods developed so far are linearly dominated, making it difficult to apply them directly to nonlinear systems.The principle of systems engineering is to decompose a complex system into simpler ones, solve them separately, and then integrate all separate solutions in order to meet a global objective.In a manufacturing plant, each product is produced through a series of complex operations, each of which can be further decomposed into multiple basic actions.Obviously, uncertainties will appear at all of these stages and affect the overall quality.This paper briefly discusses the multiscale complexity of the manufacturing process, presents modeling and intelligence that may be required in a manufacturing environment, and examines a case study for jet dispensing control for the integrated circuit packaging industry.A whole factory or plant usually has more than one production line containing many different types of processes.Each process may integrate multiple machines or pieces of equipment.Manufacturing operations in a factory can be classified into three different levels: the machine level, the production level, and the 
plant-wide level.The whole manufacturing process can be considered as a hierarchical structure: from machine control at the bottom layer, through mid-level supervisory control and production scheduling, and up to business management at the highest level.Different properties are exhibited at different levels, as shown in Table 1.The characteristics and dynamics at different levels differ such that different control actions, from continuous to discrete, are required.Different processes may have different types of dynamics and a different scale of complexity.Some typical processes may include:Multi-time-scale processes.This is the most common scenario in manufacturing, in which a single part is manufactured within a short time, whereas parts in batches are produced over a long time period.The consistency of production is hence a primary concern, and involves the integration of various methods, such as robust system design, feedback control, and statistical process control.Space-time dynamic processes.Temperature fields, pipe fluid, and flexible robotic arms belong to this space-time dynamic system.Here, the performance changes not only in time but also in spatial location, and is therefore extremely difficult to model and control.Multi-level hybrid processes.The integration of systems at different levels results in hybrid systems that may be continuous, discrete, fuzzy, probabilistic, and so forth.The modeling and control of these types of systems are difficult because no mature methods are available.In general, the lower the level, the more dynamic property is required, such that dynamic control is needed.Uncertainty exists everywhere, in all levels of the manufacturing hierarchy.The higher the level, the larger the uncertainty, such that more intelligence is required for the control system.In terms of control engineering, several types of control can be defined, as follows:Logic control.This involves discrete action with two discrete states: on/off.No dynamics are involved.Loop control.This requires dynamic control because it entails the handling of physical dynamics.It involves continuous action at the machine level.Since machine dynamics can be expressed quantitatively, the control action can be optimized.Supervisory control.This involves nest control action, which is of a hybrid discrete/continuous nature.All of the abovementioned low-level types of control are widely used in process control.High-level control involves a more decision-making type of action, which requires more intelligence-based methods, such as the following:Operation scheduling at the production level; and,Business management of the plant-wide operation.Regarding intelligent manufacturing, the five-level pyramid structure shown in Fig. 1 can be useful in effectively processing uncertainties and improving the overall quality .The first step is to place
The Made in China 2025 initiative will require full automation in all sectors, from customers to production.This will result in great challenges to manufacturing systems in all sectors.In the future of manufacturing, all devices and systems should have sensing and basic intelligence capabilities for control and adaptation.Multiscale dynamics include: multi-time scale, space-time scale, and multi-level dynamics.Intelligent manufacturing systems should have the capabilities of flexibility, adaptability, and intelligence.
sufficient sensors appropriately in order to collect data from the physical process.If everything can be measured and connected, physical uncertainty can be minimized.Once data is obtained, it should be converted into useful information for higher level analysis and processing.Many mature modeling and learning methods can be used to help reduce information uncertainty.Since manufacturing involves the integration of many different types of equipment and functional devices, hybrid modeling and learning is required for system-level coordination.Decision-level coordination involves human-machine interaction, which requires processing ability between human linguistic language and machine computational algorithms.In order to achieve full automation, knowledge-level decisions should be able to process unexpected events, which will continue to be a long-term challenge.In summary, different types of processes require different control actions.More design is required at the fast time scale, and more control is needed at the slow time scale.The jet dispensing system for packaging is a good example that will be discussed in detail in Section 4.More quantitative action is required at low-level operations because machine dynamics can be expressed mathematically; in contrast, more qualitative action is needed in high-level supervision because that system cannot be described quantitatively.Systematic work in this area should be built step by step using a bottom-up approach: from dynamic modeling, system design, process control, and intelligent supervision, up to plant-wide management control, and so forth.This is a large-scale challenge.The multiscale complexity of the intelligent manufacturing process makes process modeling difficult.Process modeling is an essential step toward engineering control.System modeling in the field of control and machine learning in computer science actually perform similar work, albeit with different technologies and in different environments:System modeling relies more on the physics of the process because it usually operates in an environment with a low degree of uncertainty that does not affect the dominance of the process dynamics.In this case, a deterministic solution will exist.Under this relatively certain environment, physical dynamics play a major role, while external disturbance and nonlinearity have a smaller influence.Since classical quantitative methods can be used for optimization, the modeling performance is fairly deterministic and can be used for online prediction.Since the process dynamics can be quantitatively modeled, quantitative control or design can be performed.Machine learning mainly works in an environment with a high degree of uncertainty; plant-wide management is an example of such an environment.Multi-level hybrid solutions also accumulate uncertainty.Since the model structure is difficult to obtain, it relies more strongly on process data.Rather than a deterministic solution, a statistical solution will exist in this case.Non-traditional methods such as computational intelligence can be used to explore a better solution; therefore, the performance in this case is usually optimized using statistical or experiential data.Since the process dynamics cannot be quantitatively estimated, a qualitative decision is made instead of performing quantitative control.Many processes in the manufacturing industry, such as thermal processes, fluid/flow processes, and flexible robotic arm processes, belong to space-time dynamic systems, which are also called 
distributed parameter systems.The dynamics of a DPS are described with partial differential equations and exhibit a strong space-time coupled nature.For example, the cure oven, or the reflow oven, that is used in the IC packaging industry requires a uniform temperature field because an equal heating effect is expected at every spatial location of the cured object.System modeling is very important in manufacturing control because it can help to determine the physical process well before any control action or decision is made.Different modeling is required for different functional purposes.System modeling can be classified as follows:Modeling for process simulation.This is physics-based modeling, in which every aspect is considered to reflect the true situation.Since most of such processes are DPSs, if first-principle knowledge of the DPS is accurately known, the model can be precisely derived and then solved using computational methods such as the finite difference method and the finite element method .This type of physics-based modeling requires heavy computation and is suitable for offline process analysis.Modeling for control design.Most control theories are linearly dominated, so a linear model structure is required for control design.Modeling for online prediction.An analytical model is required with parameters calibrated from experimental data.The dominant dynamics are considered in terms of ordinary differential equations.Modeling for process design.Because this is function-based modeling, only important dynamics are considered for optimal design.Modeling for decision-making.Since decisions are based on important features and have a discrete nature, this is feature-based modeling.When modeling, an appropriate model structure must be selected, along with optimal calibration of parameters under appropriate training signals, and so forth.Because extensive computing power is required to solve a PDE, lumping techniques are used to approximately reduce the PDE into a finite-dimensional ODE using the space-time separation method shown in Fig. 2.This ODE-based model is computationally efficient and can be applied for online performance prediction; for example, it would be a challenge to estimate a temperature distribution over space using just a few sensors.Many studies have been performed on this kind of space-time modeling, including studies on the spectral method and the approximate inertial manifold .If the PDEs of a DPS are unknown due to process uncertainties, data-based model identification must be used.When the nominal PDE is unknown, neural networks can be used with the spectral method to model the unknown nonlinearities.Neural networks can also be used with the Karhunen-Loève method to model a completely unknown nonlinear DPS with the help of multiple sensors.Many different variations have been developed, and are systematically discussed in a literature survey paper .An integrated design and
In this study, after discussing the multiscale dynamics of the modern manufacturing system, a five-layer functional structure is proposed for processing uncertainties. Control action will differ at different scales, with more design being required at both fast and slow time scales. More quantitative action is required in low-level operations, while more qualitative action is needed for high-level supervision.
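The space-time separation mentioned above (the Karhunen-Loève decomposition, also known as proper orthogonal decomposition) can be sketched in a few lines: dominant spatial modes are extracted from snapshot data via a singular value decomposition, and the field is approximated by a small set of time coefficients that can then be evolved as an ODE. The snapshot data below are synthetic placeholders, not taken from the paper.

```python
# Sketch of Karhunen-Loeve / POD space-time separation on synthetic snapshot data.
# The "field" here is a made-up example, not data from the paper.
import numpy as np

nx, nt = 200, 500
x = np.linspace(0.0, 1.0, nx)
t = np.linspace(0.0, 10.0, nt)

# Synthetic space-time field: two spatial patterns with time-varying amplitudes.
field = (np.outer(np.sin(np.pi * x), np.cos(0.5 * t))
         + 0.3 * np.outer(np.sin(2 * np.pi * x), np.sin(1.3 * t)))

# KL/POD: SVD of the snapshot matrix gives spatial modes (U) and time coefficients.
U, s, Vt = np.linalg.svd(field, full_matrices=False)
n_modes = 2
phi = U[:, :n_modes]                      # dominant spatial basis functions
a = np.diag(s[:n_modes]) @ Vt[:n_modes]   # time coefficients a_i(t)

# Low-dimensional reconstruction: y(x, t) ~ sum_i phi_i(x) * a_i(t)
reconstruction = phi @ a
print("relative error:", np.linalg.norm(field - reconstruction) / np.linalg.norm(field))
```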
drive the needle, thus pushing the adhesive out of the chamber and onto the substrate.This is a multiscale complex process, as shown in Fig. 7.A single droplet can be jetted out of the chamber in milliseconds, and several thousand droplets can be jetted within a dozen minutes.Highly consistent droplets are required for high-speed dispensing.Long operation deteriorates the jet performance because the viscosity of the adhesive has a nonlinear and time-varying nature.The following difficulties with consistent dispensing are encountered:For fast-scale single-droplet dispensing, a disturbance cannot be captured in such a short instant, so no control can be designed to manipulate the flow rate.For slow-scale long-term operation, it is difficult to perform adjustments to suppress disturbances because there is no online measurement of the internal operation of the device.Design and control should be effectively integrated in order to achieve consistent dispensation.For the fast-scale performance, only design can be applied to optimize the jet dispensing system; in contrast, for the slow-scale performance, consistent control can be applied once online sensing is implemented.There is a strong interaction between the adhesive and the jet dispensing system.The design of the system includes material handling and jet valve design.This involves both real-time experimentation and physical simulation.The adhesive viscosity, η, can be derived experimentally using a rheometer.The simulation model is calibrated with the experimental data.A simulation of the jetting process, as shown in Fig. 8, can provide more information that may be difficult to observe from an experiment in real time:The hidden mechanism for droplet formation and breakup is disclosed, and the coupling relationship between different variables is discovered .Data generated from the simulation can help to develop the analytical relationship between critical parameters and the jetting performance.Next, these design guidelines should be followed for performance improvement:The rules for proper handling of the adhesive materials before dispensation, provided in Ref. , ensure that the adhesive materials are in the best state for dispensing.The optimal design of the critical parts of the jet valve, provided in Ref. , maximizes the dispensing capability for a single droplet.In a slow-scale long operation, the performance drift should be identified in order to enable system control by means of the following actions:Establishing integrated sensing for the real-time estimation of the jetting performance at every cycle; and,Determining the cross-scale multivariable compensation for consistent jetting in batches.A measurement device was added onto the existing valve of the jet dispensing system in order to sense the needle displacement x. 
With this measurement device, the volume V pushed out of the nozzle at every cycle can be estimated by integrating the flow rate over the cycle period.Since the adhesive is a non-Newtonian fluid, the flow rate is unknown.Calibration with the experimental data is needed with the help of a camera and a high-precision balance.Both the experiment and the simulation have shown that the jetting performance is coupled with the pressure P and the adhesive viscosity η.The inverse models T and P should be developed for the purpose of decoupling control.Here, we propose a novel cross-scale multivariable control strategy for the jetting process, in which the pressure P and temperature T are handled separately in two control loops, as illustrated in Fig. 9: Temperature control in the auxiliary loop for viscosity compensation.The steady viscosity η should be maintained in order to minimize the coupling effect between P and T.The viscosity changes slowly during the operation and is compensated for by adjusting the adhesive temperature. Cross-scale compensation control in the dominant loop.The dominant loop has two functional loops: disturbance compensation for coupling suppression and feedback control for set-point tracking.The disturbance compensation has three major components:Fast-scale estimation.The jetted volume Vf is estimated at each cycle through the online sensing of the needle motion.This fast-scale data must be transformed into slow-scale information Vs with all the stochastic variation minimized through the fast-slow conversion.Batch measurement.The actual jetted volume in the batch is periodically weighed and statistically processed.Using the statistical method, the volume distribution information can be obtained and properly processed.Slow-scale compensation.The inverse model P is used to convert the process deviation ΔV into the appropriate adjustment ΔP in order to eliminate any prediction error.A simple feedback controller can be sufficient to maintain good set-point, Vr, tracking if the disturbance can be well compensated for.Calibration is needed to adjust both the fast-scale estimation and the slow-scale compensation if the process is strongly time-varying.Manufacturing processes have many different types of equipment and systems that are integrated to exhibit multiscale dynamic features with a hierarchical structure.Manufacturing control is a multiscale task: from smart sensing of the process at the lowest level, to optimal design of the system offline, to multivariable process control online, and further to intelligent learning for decision-making at the highest level.Multidimensional knowledge from nearly all engineering fields is needed, such as physics and material engineering, control engineering, mechanical and electrical engineering, and computer engineering.Systematic work in this area should be built up step by step using a bottom-up approach, from dynamic modeling, to system design, to process control, to intelligent supervision, and up to plant-wide management control.This development will be a long-term challenge.
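The cross-scale compensation idea described above can be illustrated with a toy loop: per-cycle (fast-scale) volume estimates are averaged into slow-scale information, and an assumed inverse-model gain plus a small feedback term convert the volume deviation into a pressure adjustment. The linear "process", gains and disturbance below are all hypothetical and are not the controller of this study.

```python
# Toy sketch of cross-scale compensation: fast-scale per-cycle volume estimates are
# averaged into slow-scale information, and a hypothetical inverse-model gain plus a
# feedback term adjust the supply pressure. All numbers and models are placeholders.
import random

V_ref = 1.0          # desired droplet volume per cycle (arbitrary units)
pressure = 5.0       # supply pressure P (arbitrary units)
k_inverse = 2.0      # assumed inverse-model gain: pressure change per unit volume error
k_fb = 0.5           # small feedback gain for set-point tracking
drift = 0.0          # slow disturbance, e.g. viscosity change over a long run

for batch in range(20):
    # Fast scale: estimate the jetted volume at every cycle from the needle motion.
    drift += 0.01                         # slowly growing disturbance
    cycle_volumes = [0.2 * pressure - drift + random.gauss(0, 0.02)
                     for _ in range(100)]
    # Fast-to-slow conversion: average out stochastic cycle-to-cycle variation.
    V_slow = sum(cycle_volumes) / len(cycle_volumes)

    # Slow scale: convert the volume deviation into a pressure adjustment.
    error = V_ref - V_slow
    pressure += k_inverse * error + k_fb * error

print(f"final mean volume ~ {V_slow:.3f} (target {V_ref})")
```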
These capabilities will require the control action to be distributed and integrated with different approaches, including smart sensing, optimal design, and intelligent learning.Finally, a typical jet dispensing system is taken as a real-world example for multiscale modeling and control.
the risk mitigation benefits of terracing with the yield reduction in years with adequate rainfall. In drought-prone areas, the practice should be promoted as a way to increase farmers' adaptive capacity. This study is not without limitations. An obvious concern is that propensity matching is able to control for observable characteristics only; one cannot be certain regarding unobserved variables. Since community mobilization and labor investment are important factors for terracing, adoption is largely driven by program implementation and is therefore mostly exogenous. This minimizes the concern that adoption could suffer from endogeneity bias related to unobserved characteristics. This limitation notwithstanding, the study has a number of strengths. First, it relies on a representative dataset of Ethiopia and uses a robust measurement method – crop-cuts – as the outcome metric. This measure is arguably more accurate than farmers' yield estimates – a widely used proxy in the literature. In addition, rainfall and drought occurrences were precisely matched with plot-level crop growing periods. Further, several efforts were made in order to create a credible counterfactual. The number of covariates as well as the matching diagnostics led us to conclude that a proper counterfactual was established. Sensitivity analysis showed that the estimated effect held regardless of matching within or outside the common support region. Overall, this study contributes to the literature on climate-smart practices by providing our best estimate at this point of the contribution of soil water management practices to resilience against extreme events. Given that the adaptation benefits from terracing could be larger than previously thought, these results can help in designing scenarios about climate change impacts. Bearing in mind that most terraces can be found in the Tigray and Amhara regions, the extension of terraces could also be considered as part of the country's strategy to mitigate the effects of extreme events in regions beyond where they are currently found.
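For readers unfamiliar with the matching approach referred to above, the sketch below runs a generic propensity-score-matching workflow on synthetic data (not the Ethiopian survey): a logistic model estimates the probability of treatment from covariates, each treated unit is matched to its nearest untreated neighbour on that score, and the average treated-versus-matched outcome gap gives the ATT. Variable names and effect sizes are illustrative assumptions.

```python
# Illustrative propensity-score-matching sketch on synthetic data (not the survey
# used in this study): estimate propensity scores, match treated units to their
# nearest untreated neighbour, and average the outcome gap.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
covariates = rng.normal(size=(n, 4))                 # e.g. slope, rainfall, inputs
treat_prob = 1 / (1 + np.exp(-covariates[:, 0]))     # selection on observables
treated = rng.binomial(1, treat_prob)
yields = 1.5 + 0.8 * covariates[:, 0] - 0.1 * treated + rng.normal(0, 0.5, n)

# 1) Propensity scores from a logistic regression.
ps = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

# 2) Nearest-neighbour matching of treated to control plots on the score.
ctrl_idx = np.where(treated == 0)[0]
trt_idx = np.where(treated == 1)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[ctrl_idx].reshape(-1, 1))
_, match = nn.kneighbors(ps[trt_idx].reshape(-1, 1))
matched_ctrl = ctrl_idx[match.ravel()]

# 3) Average treatment effect on the treated (ATT).
att = (yields[trt_idx] - yields[matched_ctrl]).mean()
print(f"estimated ATT: {att:.3f} (true effect in this synthetic example is -0.1)")
```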
While the benefits of soil water management practices relative to soil erosion have been extensively documented, evidence regarding their effect on yields is inconclusive. Following a strong El Niño, some regions of Ethiopia experienced major droughts during the 2015/16 agricultural season. Using the propensity score method on a nationally representative survey in Ethiopia, this study investigates the effect of two widely adopted soil water management practices – terraces and contour bunds – on yields and assesses their potential to mitigate the effects of climate change. It is shown that, at the national level, terraced plots have slightly lower yields than non-terraced plots. However, the data support the hypothesis that terraced plots acted as a buffer against the 2015 Ethiopian drought, while contour bunds did not. This study provides evidence that terraces have the potential to help farmers deal with current climate risks. These results can inform the design of climate change adaptation policies and improve the targeting of soil water management practices in Ethiopia.
chronic stress alters the responsiveness of 5-HT1A.Environmental enrichment which reverses programmed affective and cognitive behaviors promotes increased expression of Htr1a.This system which underpins depressive-like phenotypes appears particularly sensitive to glucocorticoid programming directly in the fetal brain.There has been considerable research into the regulation of 11β-HSD2 expression in placenta as this was thought to be the main site of expression relevant in glucocorticoid programming.We have now shown that the brain 11β-HSD2 is important too, pressing the need for research into fetal brain regulation of 11β-HSD2.Some data suggest epigenetic changes occur in the 11β-HSD2 promoter in fetal hypothalamus in response to maternal stress and fetal brain 11β-HSD2 mRNA responds to alterations in maternal nutrition in pigs.However, more studies are needed to determine factors that regulate 11β-HSD2 expression in the brain and hence the susceptibility to psychiatric diseases.In sum, we show that fetal brain programming of specific functions is powerfully influenced by local glucocorticoid levels determined by fetal brain 11β-HSD2, whilst programming of anxiety-like behavior and the HPA axis are not a direct effect of local 11β-HSD2 control of glucocorticoid access to the brain.These data illustrate exquisite and complex control of glucocorticoid action upon the developing CNS and imply multiple targets in the control of the trajectory of development of behavior, cognition and neuroendocrine function during normal and disordered development.Understanding the control of fetal brain 11β-HSD2 is of substantial interest.We acknowledge Wellcome Trust project grant and EU FP7 collaborative grant Developmental origins of healthy and unhealthy ageing to MCH and JRS.The work was supported by a PhD studentship from The University of Edinburgh Centre for Cognitive Ageing and Cognitive Epidemiology, part of the cross council Lifelong Health and Wellbeing Initiative.Funding from the Biotechnology and Biological Sciences Research Council, Engineering and Physical Sciences Research Council, Economic and Social Research Council, and Medical Research Council is gratefully acknowledged.None of the funders were involved in study design; in the collection, analysis and interpretation of data; in the writing of the report; and in the decision to submit the article for publication.
Stress or elevated glucocorticoids during sensitive windows of fetal development increase the risk of neuropsychiatric disorders in adult rodents and humans, a phenomenon known as glucocorticoid programming.11β-Hydroxysteroid dehydrogenase type 2 (11β-HSD2), which catalyses rapid inactivation of glucocorticoids in the placenta, controls access of maternal glucocorticoids to the fetal compartment, placing it in a key position to modulate glucocorticoid programming of behavior.However, the importance of the high expression of 11β-HSD2 within the midgestational fetal brain is unknown.To examine this, a brain-specific knockout of 11β-HSD2 (HSD2BKO) was generated and compared to wild-type littermates.HSD2BKO have markedly diminished fetal brain 11β-HSD2, but intact fetal body and placental 11β-HSD2 and normal fetal and placental growth.Despite normal fetal plasma corticosterone, HSD2BKO exhibit elevated fetal brain corticosterone levels at midgestation.As adults, HSD2BKO show depressive-like behavior and have cognitive impairments.However, unlike complete feto-placental deficiency, HSD2BKO show no anxiety-like behavioral deficits.The clear mechanistic separation of the programmed components of depression and cognition from anxiety implies distinct mechanisms of pathogenesis, affording potential opportunities for stratified interventions.
K-feldspar recovery, the overall K-feldspar losses may be due to the dissolution of K from the microcline surface during the acid conditioning of the mica flotation, and/or due to the desorption of fluorine-amine complexes, dodecylamine molecule precipitation and/or dissolution of K during alkaline conditioning in the K-feldspar flotation. The development of a process for the selective flotation of K-feldspar from Na-feldspar in an alkaline environment has shown that it is possible to produce high-grade K-feldspar concentrates at high recoveries from a 0–100 µm microcline-quartz pegmatite sample. The K-feldspar was selectively floated from Na-feldspar at pH 10.5–11.6 with NaOH as pH adjuster/modifier and Brij58 as frother, but without any additional collector other than the fluorine and amine already adsorbed from the preceding feldspar-quartz separation. The best results were achieved with DI water and conditioning with 2–3 × 10−3 M NaOH. The use of tap water, or replacing NaOH with KOH, has been shown to reduce the grades and recoveries of the K-feldspar concentrate significantly, while the addition of Ca has been shown to be particularly detrimental to the process. Fluoride measurements have indicated that the separation may be due to the preferential desorption of fluorine-amine complexes from the Al and Na sites of Na-feldspar, while only from the Al sites of K-feldspar. Apart from water quality, K-feldspar losses may be due to the dissolution of K from the K-feldspar surfaces during the acid conditioning of the initial mica flotation, and due to the desorption of fluorine-amine complexes from the K sites, dodecylamine molecule precipitation, and/or the dissolution of K from the K-feldspar surfaces during alkaline conditioning.
In this study, a selective flotation process for the separation of K-feldspar from Na-feldspar was developed for the beneficiation of a Norwegian microcline-quartz pegmatite sample.The K-feldspar flotation was performed in alkaline environment (pH 10.5–11.6) with only the addition of NaOH as a modifier and non-ionic Brij58 as a frother, on a fluorine-amine activated bulk feldspar concentrate from the preceding feldspar-quartz separation.The K-feldspar flotation resulted in a K-feldspar concentrate of 14.3% K2O at 77% K-feldspar recovery and K2O/Na2O ratio of 10.4.The NaOH conditioning of the fluorine-amine activated feldspars resulted in rapid desorption of fluorine, and the selectivity of the process was assumed to result from the more rapid desorption of fluorine-amine complexes from the Na-feldspar surface compared to that of the K-feldspar surface.In relation to this, the use of NaOH was found to be superior to that of KOH, the use of DI water was found to be superior to that of tap water, and the introduction of Ca ions was shown to be highly detrimental to the process.In general, the results strongly indicate that the developed process may yield substantially higher K- and Na-feldspar grades and recoveries at fewer flotation operations and lower total chemical consumptions, compared to existing practices on similar mixed feldspar mineral samples.
when it was sailing to Chongqing from Nanjing along the Yangtze River. A total of 442 lives were lost among the 454 passengers and crewmembers. No direct wind observations were available due to the sparsity of surface observation stations. The work reported here attempted to reveal the weather phenomena during the event and to estimate the wind speed at the wreck location, through radar analyses as well as both ground and aerial damage surveys. The results show that the cruise ship capsized when it encountered strong winds of at least 31 m s−1 near the apex of a bow echo embedded in a squall line. The damage surveys demonstrate that such strong winds were likely caused by microburst straight-line wind and/or embedded small vortices. No adequate observational evidence of a tornado or gustnado was found in this event. Determining the occurrence of a tornado can be a difficult question in some situations. When heavy rainfall takes place, it is almost impossible to observe a tornado due to limited visibility, especially at night. According to the definition of a tornado by the Glossary of Meteorology, three necessary conditions need to be satisfied for a vortex to be considered a tornado: (1) rotating winds on the ground; (2) connection with a cloud-base rotation; and (3) location under a cumuliform cloud. From the damage point of view, a tornado usually causes a narrow damage swath with storm-scale curved or convergent debris. Gustnadoes are often mistaken for tornadoes due to difficulties in assessing condition (2) or ignorance of conditions (2) and/or (3). A small vortex that satisfies conditions (1) and (3) but not (2) should be considered a gustnado if it forms right along the gust front. In this event, conditions (1) and (3) were met around the shipwreck location, but whether or not condition (2) was met remains unknown. Considering that the areas with convergent or curved trees were quite small and localized, without showing a narrow swath with swath-scale convergent or curved debris, and that they did not appear along the gust front, they more likely resulted from small vortices forming on the flanks of surging outflow currents or microbursts, rather than from tornadoes or gustnadoes. The authors declare that they have no conflicts of interest. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Based on observational analyses and on-site ground and aerial damage surveys, this work aims to reveal the weather phenomena—especially the wind situation—when Oriental Star capsized in the Yangtze River on June 1, 2015.Results demonstrate that the cruise ship capsized when it encountered strong winds at speeds of at least 31 m s−1 near the apex of a bow echo embedded in a squall line.As suggested by the fallen trees within a 2-km radius around the wreck location, such strong winds were likely caused by microburst straight-line wind and/or embedded small vortices, rather than tornadoes.
the sensitivity of our measurements.Inferior frontal areas have been implicated in semantic processing by some authors.Nonetheless, these areas are more frequently associated with phonological and syntactic processing as well as response selection.Furthermore, our Fig. 3 shows that IFG produced the least amount of overall activation among the perisylvian ROIs in the present experiment.The null effect in this ROI is therefore not surprising.We would still have expected a congruency effect in ATL, as this area has been labeled as a “semantic hub” in the hub-and-spoke model of Patterson and Rogers based on neuropsychological and neuroimaging findings.fMRI studies have reported semantic priming in ATL, and masked semantic priming has been reported in a recent MEG study, although in the N400 latency range."Our null result cannot be taken as evidence against ATL's possible role as a semantic hub, as it may reflect a lack of statistical sensitivity, e.g., due to incomplete sensor coverage.In conclusion, this experiment added new evidence to the growing literature that the motor system contributes to semantic processes in the human brain.We confirmed that action-related stimuli activate motor cortex early on in processing.Importantly, this early effect depended on the congruency between the effector used for pre-activation of motor areas and the meaning of subsequently presented action-words.In contrast to many previous neuroimaging studies, our movement priming paradigm allowed testing for a directional link from motor to language systems.However, we did not find a congruency effect on behavioral responses, raising questions about the behavioral relevance of our and previous neuroimaging results.We hope that our movement priming paradigm, in combination with the high temporal resolution and optimized spatial resolution of combined EEG and MEG measurements, will prove useful to further clarify the link between brain and behavior for semantics as well as in other cognitive domains.
Activation in sensorimotor areas of the brain following perception of linguistic stimuli referring to objects and actions has been interpreted as evidence for strong theories of embodied semantics.Although a large number of studies have demonstrated this "language-to-action" link, important questions about how activation in the sensorimotor system affects language performance ("action-to-language" link) are yet unanswered.As several authors have recently pointed out, the debate should move away from an "embodied or not" focus, and rather aim to characterize the functional contributions of sensorimotor systems to language processing in more detail.For this purpose, we here introduce a novel movement priming paradigm in combination with electro- and magnetoencephalography (EEG/MEG), which allows investigating effects of motor cortex pre-activation on the spatio-temporal dynamics of action-word evoked brain activation.Participants initiated experimental trials by either finger- or foot-movements before executing a two alternative forced choice task employing action-words.We found differential brain activation during the early stages of subsequent hand- and leg-related word processing, respectively, albeit in the absence of behavioral effects.Distributed source estimation based on combined EEG/MEG measurements revealed that congruency effects between effector type used for response initiation (hand or foot) and action-word category (hand- or foot-related) occurred not only in motor cortex, but also in a classical language comprehension area, posterior superior temporal cortex, already 150 msec after the visual presentation of the word stimulus.This suggests that pre-activation of hand- and leg-motor networks may differentially facilitate the ignition of semantic cell assemblies for hand- and leg-related words, respectively.Our results demonstrate the usefulness of movement priming in combination with neuroimaging to functionally characterize the link between language and sensorimotor systems.
table, one can find demos where phantoms are created without the use of *.dat files. In Fig. 1, we demonstrate some classical phantoms that exist in TomoPhantom. In Fig. 3, we show novel 2D models and, in Fig. 4, 3D models. Along with the model number given in the library, we also provide each model's composition, described by the objects constituting it. These models emphasize the capability of the TomoPhantom software to create quite complex scenes made of piecewise-constant and piecewise-smooth objects. Rigorous testing and benchmarking of reconstruction algorithms is frequently performed using over-simplistic low-resolution phantoms. Simulated projection data generated using the same grid and the same ray-tracing model lead to idealistic “inverse crime” reconstructions. In order to ensure better testing practices for numerical algorithms in tomography, one needs flexible and efficient simulation software. This is especially crucial with the growing number of novel iterative reconstruction methods. The TomoPhantom software aims to provide a set of tools for objective testing of reconstruction algorithms, with the capability of referencing the library of novel and classical models. By using combined models with discontinuous and smooth objects, various properties of a numerical method can be investigated and highlighted. The provided analytical projections are not correlated with the ray-tracing algorithms used in tomographic reconstruction software. Therefore, TomoPhantom can deliver more realistic simulated data for testing XCT methods. In addition to the simulation of exact projections, TomoPhantom offers routines for adding noise and various acquisition artifacts. Apart from tomographic reconstruction purposes, the designed phantoms can also be used for a large variety of image processing tasks, e.g., denoising, deblurring, and segmentation. Therefore, the potential user community for TomoPhantom is large. In this paper we present the open-source software TomoPhantom, which can be used for testing and benchmark studies. A library of enumerated 2D–4D phantoms is provided for future referencing in scientific papers. The proposed software enables quick and easy access to analytical tomographic projections, which can be used to rigorously test image reconstruction algorithms. The core is written in the C-OpenMP language, and wrappers for the Python and MATLAB environments are provided.
In the field of computerized tomographic imaging, many novel reconstruction techniques are routinely tested using simplistic numerical phantoms. These phantoms cannot sufficiently cover the broad spectrum of applications in CT imaging where, for instance, smooth or piecewise-smooth 3D objects are common. TomoPhantom provides quick access to an external library of modular analytical 2D/3D phantoms with temporal extensions. In TomoPhantom, quite complex phantoms can be built using additive combinations of geometrical objects, such as Gaussians, parabolas, cones, ellipses, rectangles, and volumetric extensions of them. The newly designed phantoms are better suited for benchmarking and testing of different image processing techniques. Specifically, tomographic reconstruction algorithms which employ 2D and 3D scanning geometries can be rigorously analyzed using the software. TomoPhantom also provides the capability of obtaining analytical tomographic projections, which further extends the applicability of the software towards more realistic testing that is free from the “inverse crime”. All core modules of the package are written in the C-OpenMP language, and wrappers for Python and MATLAB are provided to enable easy access. Due to the C-based multi-threaded implementation, volumetric phantoms of high spatial resolution can be obtained with computational efficiency.
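To make the idea of grid-independent, analytical projections concrete, the short sketch below builds a single-Gaussian phantom and its closed-form sinogram with plain NumPy. It only illustrates the principle exploited by TomoPhantom (exact projections that are never produced by the ray tracer under test) and does not use the TomoPhantom API; all object parameters are arbitrary.

```python
import numpy as np

N = 256                                    # reconstruction grid size (arbitrary)
A, sigma, x0, y0 = 1.0, 0.12, 0.15, -0.10  # single Gaussian object (arbitrary)
angles = np.linspace(0.0, np.pi, 180, endpoint=False)
s = np.linspace(-1.0, 1.0, N)              # detector coordinate

# Closed-form Radon transform of an isotropic 2D Gaussian:
#   p(s, th) = A * sigma * sqrt(2*pi) * exp(-(s - x0*cos(th) - y0*sin(th))^2 / (2*sigma^2))
S, TH = np.meshgrid(s, angles, indexing="ij")
sino_exact = A * sigma * np.sqrt(2.0 * np.pi) * np.exp(
    -(S - x0 * np.cos(TH) - y0 * np.sin(TH)) ** 2 / (2.0 * sigma ** 2))

# Voxelized ground truth on the reconstruction grid (used only to evaluate a
# reconstruction of sino_exact, never to generate the projection data).
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x, indexing="ij")
phantom = A * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * sigma ** 2))
```

A reconstruction algorithm can then be run on sino_exact and compared against the voxelized phantom, so the data-generation step and the reconstruction step never share the same discretization.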
Electron beam selective melting (EBSM) is a promising additive manufacturing technology for metallic components. It is capable of manufacturing components with complex geometry, and it also opens up new avenues for locally manipulating chemical compositions and mechanical properties. For example, Yang et al. manufactured auxetic lattice structures with negative Poisson’s ratios, and Ge et al. manufactured functionally graded Ti-TiAl materials. There are three main fabrication procedures in EBSM, as shown in Fig. 1: ① Spread one layer of powder on a preheated platform or on the previous layer. The layer thickness can vary for different layers. For each layer, the mixture ratio of the several different types of powder can be designed and tailored in order to allow the chemical compositions to be manipulated. ② Preheat the powder bed to make the powder slightly sintered. This helps prevent powder scattering, which may even lead to build failure. ③ Selectively melt the powder bed. The beam power and scan speed are key factors that greatly influence the final part quality. Although the basic principle of EBSM is rather straightforward, the actual processes consist of multiple physical phenomena such as powder particle packing, heat transfer, phase transformation, and fluid flow, and a number of factors influence the process and fabrication quality. A considerable number of fundamental physical mechanisms in each fabrication procedure need to be understood in order for optimal process parameters to be selected to ensure fabrication quality. For example, the questions of how to improve the packing density of the powder bed in the powder-spreading procedure, how to achieve the optimal coalescence state of the powder bed in the preheating procedure, and how to avoid the balling effect and reduce single-track non-uniformity are all meaningful research topics. Most previous studies focused on the melting procedure rather than on the other two procedures. Few studies have been done to comprehensively model all the manufacturing procedures. A few powder-scale models resolving the randomly distributed particles in the powder bed have been developed to investigate the melting process of individual powder particles. Körner et al. employed the rain model to generate the powder layer and the two-dimensional lattice Boltzmann method to model the powder-melting process. They studied the influence of the powder layer thickness and input energy on the successive consolidation process of multiple powder layers. Khairallah et al. built a mesoscopic model for selective laser melting in order to investigate the formation mechanisms of pores, spatter, and denudation in the single-track formation process using the ALE3D multi-physics code. Qiu et al.
used the open-source code OpenFOAM to simulate the powder-scale melt flow in the SLM process in order to study the surface structure and porosity development.These models incorporated most of the driving forces of the molten pool flow, including surface tension, the Marangoni effect, and recoil pressure.In this work, we leverage modeling and experiments to advance the understanding of physical mechanisms in each of the three procedures.The models are introduced in Section 2, and include a powder-spreading model using the discrete element method, a phase field model of powder sintering, and a powder-melting model using the finite volume method.Section 3 presents the experimental methods.In Section 4, experimental and simulation results for each procedure are presented and discussed.Finally, a brief summary is given in Section 5.Note that the notations in the three subsections apply only in the respective subsections.Spherical powder particles with diameters that follow a Gaussian distribution in the range of 30–50 μm firstly fall to the bottom under gravity to form a powder bed, covering the substrate with various thicknesses.The rake then moves from left to right to spread the powder.The movement of powder particles is governed by the contact interaction and body forces.It should be emphasized that the powder-sintering mechanism during the preheating procedure is solid-state sintering driven mainly by grain boundary diffusion, rather than liquid-state sintering driven by melting and solidification.This solid-state sintering during the preheating procedure in EBSM has rarely been studied or modeled.Thus, we propose to use the PF to model the powder sintering.For brevity, we explain the method in the case of two particles, in which case two kinds of fields are used: the volume of solid, c, and the order parameter fields, η1 and η2.The value of all these fields falls between 0 and 1, as illustrated in Fig. 2.Note that on the grain boundary, c = 1.The package FEniCS is adopted to solve this nonlinear problem.The Newton solver and the monolithic method are applied.Since the preheating procedure for each powder layer usually takes about 20 s, a constant time-step dt = 2 × 10−4 s is adopted, which should be sufficient to resolve this solid-state sintering process.Periodic boundary conditions are applied and an implicit solver is employed.The recoil pressure, surface tension, and Marangoni effect are treated as the boundary conditions on the free surface, while the surface tension coefficient is set to be temperature dependent.The EBSM system used in the study is open-architecture, which allows users to customize a wide variety of fabrication parameters.A detailed description can be found in a previous paper .Since there are two powder tanks, the equipment is capable of manufacturing functionally graded materials by tailoring the mixture ratio of the two types of powder in each layer.For the powder-spreading experiments, both the translational speed and the slope angle of the powder rake can be customized.The powder size distribution is measured using the laser particle size analyzer.After the powder bed is applied and before the selective melting procedure, the powder layer is usually preheated using a de-focused electron beam.The
Electron beam selective melting (EBSM) is a promising additive manufacturing (AM) technology.The EBSM process consists of three major procedures: ① spreading a powder layer, ② preheating to slightly sinter the powder, and ③ selectively melting the powder bed.The highly transient multi-physics phenomena involved in these procedures pose a significant challenge for in situ experimental observation and measurement.To advance the understanding of the physical mechanisms in each procedure, we leverage high-fidelity modeling and post-process experiments.The models resemble the actual fabrication procedures, including ① a powder-spreading model using the discrete element method (DEM), ② a phase field (PF) model of powder sintering (solid-state sintering), and ③ a powder-melting (liquid-state sintering) model using the finite volume method (FVM).Comprehensive insights into all the major procedures are provided, which have rarely been reported.
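As a rough illustration of the powder-spreading model described above (spherical particles with Gaussian-distributed diameters settling under gravity and then pushed by a translating rake, with motion governed by contact and body forces), the following 2D spring-dashpot DEM sketch shows the typical ingredients. All numerical values (stiffness, damping, rake speed, time step, density) are assumptions for illustration, not the parameters of the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
radius = rng.normal(20e-6, 3e-6, n).clip(15e-6, 25e-6)        # ~30-50 um diameters
pos = np.column_stack([rng.uniform(0, 2e-3, n), rng.uniform(0, 1e-3, n)])
vel = np.zeros((n, 2))
mass = 4500.0 * (4.0 / 3.0) * np.pi * radius ** 3             # assumed Ti-alloy-like density

k_n, c_n = 50.0, 1e-6                                         # normal stiffness/damping (assumed)
g = np.array([0.0, -9.81])
dt, rake_x, rake_v = 1e-7, 0.0, 0.05                          # rake translates left to right

def step(pos, vel, rake_x):
    force = mass[:, None] * g                                 # gravity (body force)
    for i in range(n):                                        # O(n^2) contact search; real DEM
        d = pos - pos[i]                                      # codes use neighbour lists
        dist = np.linalg.norm(d, axis=1)
        overlap = radius[i] + radius - dist
        for j in np.nonzero((overlap > 0) & (dist > 1e-12))[0]:
            nrm = d[j] / dist[j]                              # unit vector from i to j
            rel_vn = np.dot(vel[j] - vel[i], nrm)             # negative when approaching
            fn = k_n * overlap[j] - c_n * rel_vn              # spring-dashpot normal force
            force[i] -= fn * nrm
    # substrate at y = 0 and the moving rake modelled as a vertical wall at x = rake_x
    pen_floor = radius - pos[:, 1]
    force[:, 1] += np.where(pen_floor > 0, k_n * pen_floor - c_n * vel[:, 1], 0.0)
    pen_rake = radius - (pos[:, 0] - rake_x)
    force[:, 0] += np.where(pen_rake > 0, k_n * pen_rake + c_n * (rake_v - vel[:, 0]), 0.0)
    vel = vel + dt * force / mass[:, None]
    return pos + dt * vel, vel, rake_x + rake_v * dt

for _ in range(2000):
    pos, vel, rake_x = step(pos, vel, rake_x)
```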
purpose is to slightly sinter powder particles together in order to avoid powder “smoke”. In the post-process experimental characterization, in order to investigate the mechanism of sintering, we attached the sintered powder particles to embedding resins at 100 °C and then employed a focused ion beam instrument for polishing. Finally, we used a scanning electron microscope to observe the microstructures at the sintering neck at the contact point of the powder particles. Mechanical polishing is not feasible, since it may break the neck and change the microstructure. In the powder-melting procedure, a focused electron beam is used. The morphology of single tracks is observed under an optical microscope. A good index for evaluating the performance of powder-spreading machinery is the relative packing density of the powder bed. A more compacted powder bed is usually beneficial to the fabrication quality, which can be demonstrated by the powder-melting model introduced in Section 2.3. The simulations can guide the design and optimization of the powder rake, including the rake shape and its translational speed. The relative packing density over either a flat substrate or the fluctuating surface of previous layers can be predicted and then compared with experiments. Some preliminary simulation results reveal the following phenomena: If the translational speed is relatively low, the rake shape does not affect the packing density, and the resultant packing density is high. The packing density decreases with increasing rake speed, as illustrated in Fig. 5. It should be noted that the effect of rake vibration is not incorporated into the current model; however, in experiments, the vibration is influenced by the rake speed, which in turn influences the powder spreading. It should also be mentioned that the first powder layer has a lower packing density than that of the whole powder bed, since the layer thickness is only around two times the mean powder particle diameter. The PF modeling of sintering is primarily in 2D. Two simulation cases are presented: ① two powder particles with different sizes, and ② two powder particles with similar sizes. In Fig. 6, the red portion denotes the domain full of material, the blue portion denotes the domain without material, and transitional colors represent the material interfaces. Table 1 lists some of the applied material parameters. Although there are still quite a few improvements to be made to this model, the experimental observation (Fig. 6) has been qualitatively reproduced, which demonstrates the modeling ability of the proposed method for the sintering process. It should be noted that the scales are different between the modeling and the experiment. To reach the same sintering stage, a larger scale will result in a much longer time, due to the size effect. One important feature of our high-fidelity powder-melting model is the accurate implementation of the heat source model by accurately capturing and reconstructing the material surfaces employing an enhanced VOF method. To illustrate this, we simulated the process of an electron beam heating a spherical powder particle on a substrate. In Fig.
7, it can be seen that the electron beam can penetrate through the edge of the particle into the substrate below, since the maximum penetration depth of the electron beam is about 16 μm.It is also noted that the subsurface regions have a higher temperature than the surface, thus perfectly incorporating the energy distribution along the penetration depth from the microscale simulation of electron-atom interactions .Moreover, in Fig. 7, the red region becomes smaller with the increase of distance from the center, because the energy absorptivity is higher near the center and lower near the edge, due to the influence of the incidence angle .The single track acts as one fundamental building unit and largely influences the final product quality, such as the surface roughness and dimensional accuracy.We employed the FVM-based high-fidelity powder-scale model to predict the detailed formation processes of single-track defects, including the balling effect and single-track non-uniformity.These processes are difficult to observe in experiments; previous studies have proposed different or even conflicting explanations.The model helped clarify the underlying formation mechanisms, reveal the influence of key factors, and guide the improvement of fabrication quality.More detailed discussion and description are in our previous papers .The major conclusions are:The balling effect is caused by a lack of melting of the substrate under the melted particles.Driven by the surface energy, some of the melted particles merge together into isolated clusters rather than spreading over the unmelted substrate surface.The single-track non-uniformity is due to the random attachment of the molten pool to the partially melted particles near the boundaries.In multiple-layer multiple-track manufacturing processes, the previous layers and tracks, as well as the ejected materials, will also influence the non-uniformity of the melt track.The detailed simulation results and discussion are out of the scope of this paper, and can be found in our previous papers .To achieve a comprehensive understanding of the physical mechanisms in the EBSM process, we leverage modeling and experiments.The models consist of ① a powder-spreading model using the DEM, ② a PF model of powder sintering, and ③ a powder-melting model using the FVM.These models fully resemble the actual fabrication procedures.Preliminary simulation results agree qualitatively with experiments, and demonstrate appealing potential to shed light on the underlying mechanisms and to guide the design and optimization of experimental setup and manufacturing processes.
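For orientation, the recoil pressure, surface tension, and Marangoni boundary conditions of the melting model described earlier are often written in the following commonly used form; this is shown only as a representative formulation, not necessarily the exact expressions implemented in the authors' FVM model:

```latex
p_{\mathrm{recoil}}(T) \approx 0.54\,p_{0}\exp\!\left[\frac{L_{v}}{R}\left(\frac{1}{T_{v}}-\frac{1}{T}\right)\right],
\qquad
\sigma_{n} = \gamma(T)\,\kappa,
\qquad
\boldsymbol{\tau}_{s} = \frac{\partial\gamma}{\partial T}\,\nabla_{s}T,
\qquad
\gamma(T) = \gamma_{0} + \frac{\partial\gamma}{\partial T}\,\bigl(T - T_{m}\bigr),
```

where $p_0$ is the ambient pressure, $L_v$ the molar latent heat of vaporization, $T_v$ the boiling temperature, $\kappa$ the local surface curvature, and $\nabla_s$ the surface gradient; the temperature-dependent $\gamma(T)$ is what drives the Marangoni shear stress.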
Preliminary simulation results (including powder particle packing within the powder bed, sintering neck formation between particles, and single-track defects) agree qualitatively with experiments, demonstrating the ability to understand the mechanisms and to guide the design and optimization of the experimental setup and manufacturing process.
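Since the relative packing density of the powder bed is used above as the main index of spreading quality, a minimal post-processing sketch is given below: it simply divides the total particle volume from a DEM snapshot by the volume of the deposited layer. The function name, the layer-height heuristic, and the usage values are illustrative assumptions.

```python
import numpy as np

def relative_packing_density(centers, radii, x_extent, y_extent, layer_top=None):
    """centers: (n, 3) particle centres [m]; radii: (n,) particle radii [m]."""
    solid_volume = (4.0 / 3.0) * np.pi * np.sum(radii ** 3)
    if layer_top is None:
        # crude layer-height estimate: top of the highest particle above the substrate (z = 0)
        layer_top = float(np.max(centers[:, 2] + radii))
    layer_volume = x_extent * y_extent * layer_top
    return solid_volume / layer_volume

# Hypothetical usage on a 1 mm x 1 mm patch of a spread layer:
rng = np.random.default_rng(0)
centers = rng.uniform([0.0, 0.0, 20e-6], [1e-3, 1e-3, 60e-6], size=(500, 3))
radii = rng.normal(20e-6, 3e-6, 500).clip(15e-6, 25e-6)
rho_rel = relative_packing_density(centers, radii, 1e-3, 1e-3)
```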
We build a theoretical framework for understanding practical meta-learning methods that enables the integration of sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms in order to provide within-task performance guarantees.Our approach improves upon recent analyses of parameter-transfer by enabling the task-similarity to be learned adaptively and by improving transfer-risk bounds in the setting of statistical learning-to-learn.It also leads to straightforward derivations of average-case regret bounds for efficient algorithms in settings where the task-environment changes dynamically or the tasks share a certain geometric structure.
Practical adaptive algorithms for gradient-based meta-learning with provable guarantees.
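Reading the framework operationally, a minimal sketch of one gradient-based instantiation is given below: each task is solved by online gradient descent started from a meta-learned initialization, and that initialization (a crude stand-in for the learned task-similarity) is nudged toward each task's average iterate. This is only an illustrative reading under those assumptions, not the authors' algorithm or its guarantees; all names and step sizes are placeholders.

```python
import numpy as np

def ogd_task(phi, grad_fns, eta):
    """Run online gradient descent within one task; grad_fns yields per-round gradients."""
    w = phi.copy()
    iterates = [w.copy()]
    for grad_fn in grad_fns:
        w = w - eta * grad_fn(w)
        iterates.append(w.copy())
    return np.mean(iterates, axis=0)          # average within-task iterate

def meta_train(tasks, dim, eta_within=0.1, eta_meta=0.1):
    phi = np.zeros(dim)                        # meta-learned initialization
    for t, task in enumerate(tasks, start=1):
        w_bar = ogd_task(phi, task, eta_within)
        # move the initialization toward the task's average iterate (decaying meta step)
        phi = phi + eta_meta / np.sqrt(t) * (w_bar - phi)
    return phi

# Hypothetical usage: similar quadratic tasks f_i(w) = 0.5 * ||w - c_i||^2
rng = np.random.default_rng(0)
centers = rng.normal(1.0, 0.1, size=(50, 5))
tasks = [[(lambda w, c=c: (w - c)) for _ in range(20)] for c in centers]
phi_hat = meta_train(tasks, dim=5)
```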
The dataset of this article provides information on the characterization of titanate compounds produced using the hydrothermal method at different temperatures, with commercial TiO2 powder used as the precursor. Fig. 1 shows the FTIR spectra of the TiO2 precursor and of the titanate synthesized at 100, 150, 200, and 250 °C hydrothermal temperature, and their FTIR band assignments are presented in Table 1. The XRD patterns of the TiO2 precursor and the as-synthesized titanate are shown in Fig. 2, while Fig. 3 illustrates their N2 adsorption-desorption isotherm plots. Information about the types of isotherms, hysteresis, and pore shapes, as well as the surface area, pore size, and pore volume of the samples, is tabulated in Tables 2 and 3, respectively. 2.0 g of TiO2 powder precursor was dispersed in 10 M NaOH with constant stirring for 30 minutes. The mixture was then sonicated in a sonicator bath for 30 minutes, followed by constant stirring for another 30 minutes. Subsequently, the mixture was transferred into a teflon vessel and subjected to hydrothermal treatment at various temperatures for 24 hours in an autoclave. When the reaction was completed, the white solid precipitate was collected and dispersed into 0.1 M HCl with continuous stirring for 30 minutes for washing. The washing was then continued with distilled water until the pH of the washing solution reached 7, and the product was subsequently dried at 80 °C for 24 hours in an oven. The samples synthesized at 100, 150, 200, and 250 °C hydrothermal temperature are denoted HT100, HT150, HT200, and HT250, respectively. The obtained samples were characterized using FTIR, XRD, and nitrogen gas adsorption. Fig. 1 shows the FTIR spectra of the TiO2 precursor and the as-synthesized samples. Broad bands were observed in the ranges of 3700-2800 cm−1 and 1800-1400 cm−1. The metal-oxygen stretching mode attributed to the Ti–O bond was detected below 1000 cm−1. XRD analysis was carried out to study the phase structure of the hydrothermally synthesized samples at different hydrothermal reaction temperatures. For comparison, the XRD pattern of the TiO2 precursor was also included. As can be seen in Fig. 2, the TiO2 precursor and the sample synthesized at 100 °C are assigned to anatase TiO2. Meanwhile, the XRD patterns of HT150, HT200, and HT250 correspond to hydrogen trititanate and sodium trititanate, respectively. These synthesized samples were assigned to trititanate compounds, suggesting that the hydrothermal reaction of the TiO2 precursor with NaOH occurs to produce titanate compounds. Fig.
3 shows the N2 adsorption-desorption isotherm plots of the TiO2 precursor and the samples synthesized at different hydrothermal treatment temperatures. Table 2 shows that the isotherms of the studied samples are of a typical type IV shape with H3 hysteresis. Type IV isotherms are associated with capillary condensation in mesopore structures, while the H3 hysteresis type indicates slit-shaped pores. Table 3 lists the BET surface area, pore size, and pore volume of the commercial TiO2 precursor powder and of the samples synthesized at different hydrothermal treatment temperatures. As shown in Table 3, the surface area of the commercial TiO2 precursor powder was only 10.07 m2/g. Nevertheless, the surface area of the samples increased after hydrothermal treatment. At 100 °C hydrothermal treatment, the surface area was found to be 146.74 m2/g. Meanwhile, HT150, the sample prepared at 150 °C, possessed the largest surface area. The surface area of the sample synthesized at 200 °C is 117.51 m2/g, and the surface area of the HT250 sample was found to be only 28.15 m2/g. The pore sizes of the synthesized samples are between 11.29 and 16.32 nm, which is within the mesopore range, and the large pore volumes (0.03 to 1.36) are favorable for adsorbent applications.
Titanate compounds were synthesized using the hydrothermal method at various temperatures (100, 150, 200, and 250 °C) for 24 hours. The as-synthesized titanate was characterized using FTIR, XRD, and nitrogen gas adsorption. FTIR spectra were scanned from 4000 to 400 cm−1 using a Perkin Elmer Spectrum 100 FTIR spectrophotometer. XRD diffractograms were acquired using a Rigaku Miniflex (II) X-ray diffractometer operating at a scanning rate of 2.00° min−1. The diffraction patterns were recorded at diffraction angles 2θ from 10° to 80° at room temperature. Nitrogen gas adsorption analysis was performed using a Micromeritics ASAP2020 (Alaska) instrument to determine the surface area and pore size distribution. Nitrogen adsorption and desorption were measured at 77 K (the temperature of liquid nitrogen), and the samples were degassed in vacuum at 110 °C under nitrogen flow overnight prior to analysis.
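For reference, surface areas such as those reported from the nitrogen adsorption data are conventionally obtained from the linearized BET equation, fitted over the usual low relative-pressure range (the exact fitting range applied by the instrument software is not stated in the text):

```latex
\frac{1}{v\left[(p_{0}/p)-1\right]} \;=\; \frac{c-1}{v_{m}\,c}\,\frac{p}{p_{0}} \;+\; \frac{1}{v_{m}\,c},
\qquad
S_{\mathrm{BET}} \;=\; \frac{v_{m}\,N_{A}\,\sigma_{\mathrm{N_{2}}}}{V_{\mathrm{mol}}\,m},
```

where $v$ is the adsorbed gas quantity at relative pressure $p/p_0$, $v_m$ the fitted monolayer capacity, $c$ the BET constant, $\sigma_{\mathrm{N_2}} \approx 0.162\ \mathrm{nm^2}$ the cross-sectional area of adsorbed N2, $V_{\mathrm{mol}} = 22{,}414\ \mathrm{cm^3\,mol^{-1}}$ the molar gas volume at STP, and $m$ the sample mass.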
of 208 genes in 9 functional modules, and a set of genes acted as key nodes crosslinking various pathways related to the immune system and cell cycle.For instance, genes encoding TFs FOS, JUN, and CEBPB acted as crosstalk nodes in the biological processes related to apoptosis and the development of ALL.The three TFs could regulate the expression of HIST1H4A and HIST2H4A , which were targeted by miR-148a-3p in our network, suggesting that this regulatory loop may have an important role in the CD19-CAR-T therapy on B-ALL.In addition, miRNA-375, whose expression was down-regulated in the remissive patients, may regulate the expression of genes encoding TFs CHD4 and JUN, as well as HIST1H4C, that are involved in the NOD-like receptor signaling in our network.Expression of miR-27a-3p, a tumor suppressor in B-ALL cell lines , was up-regulated after CAR-T therapy in samples from all four patients on D14.In our network, miR-27a-3p potentially regulates expression of a set of crosstalk genes and participates in the immune response pathways."Previous clinical trials have reported that CAR-T therapy displayed dramatic efficacy in patients with B-ALL and non-Hodgkin's lymphoma .In this study, we investigated the transcriptome profiling and regulatory networks of four B-ALL patients with different prognoses after CD19-CAR-T therapy.The co-expression and mRNA–miRNA regulatory network were constructed in an effort to identify potential functional modules underlying the CD19-CAR-T therapy on B-ALL.To the best of our knowledge, this is the first study to investigate the transcriptome profiling and regulatory mechanisms involved in CD19-CAR-T therapy.Impressive results have been reported using CD19-CAR-T cells to treat patients with refractory B-ALL .Our results are consistent with these reports that the malignant cells were eliminated, and 3 of 4 patients have achieved CR after CAR-T therapy.In addition, our findings demonstrate that the effect of CD19-CAR-T therapy on B-ALL is positively related to the abundance of CARs and the proportion of immune cell types in the BM.In our trial, although ex vivo CAR-T cells comprised random T cell subtypes, the absence of NK and CD8+ T cells in the NR patient may be associated with the poor outcome and the low expression level of markers for functional CD8+ T cells.Furthermore, the expression levels of tumor microenvironment related genes were dramatically changed after CAR-T infusion, such as immunostimulator/IL family/IFNR/GFR/chemokine families members, which may enhance the proliferation and activation of CAR-T cells and thus increase the anti-tumor activity .Despite the differences in transcriptome profiles among these patients, most of the enriched DEG terms are related to the immune response and cell cycle.Our data demonstrate that histone family members were jointly and dynamically implicated and widely distributed in different functional modules associated with immune processes, indicating their important roles in the CAR-T therapy on B-ALL.Moreover, our data show that histone and TF genes are strongly connected with most lncRNAs in the regulatory networks, suggesting the possible involvement of these lncRNAs in the CD19-CAR-T therapy.Functional relationships in the miRNA–TF–histone regulatory loop may play an essential role in CAR-T therapy.For example, miR-148a-3p, miR-27a-3p, and miR-375, which function as oncogenes or tumor suppressors, can also regulate the expression of hub TF genes JUN or CEBPB and histone genes HIST1H4A, HIST1H4C, and HIST1H4E 
.Expression of HIST1H4A and HIST2H4A is regulated by the TFs JUN and CEBPB as well .In our study, these TFs were co-expressed with the highest number of lncRNAs including lncRNA HCG4P7.HCG4P7 is reportedly as an important immune regulatory molecule and highly co-expressed with the gene encoding leukemia regulator FOS .These networks could provide a valuable resource for investigating the transcriptional regulatory relationships involved in the effect of CD19-CAR-T therapy on B-ALL.Although biological replications are limited due to the restrictions of medical ethics, previous studies have shown that the cancer cells of leukemia are homogeneously dispersed in the BM compared with the solid cancers .Meanwhile, given the different genetic background of patients, the convergent results obtained from transcriptional profiling of the different patients could only partially explain the mechanisms underlying the processes of CAR-T therapy.In this study, the transcriptional profiling of BM from patients was performed using bulk RNA-seq, and alterations of the composition of cell types and their transcriptome profiles within the BM may provide valuable insights into the biological processes underlying CAR-T therapy.Although the alterations of important immune cell compositions were surveyed via bioinformatics approaches, other types of cells in the BM, such as stromal and hematopoietic cells, have not been investigated.Genetically-modified CAR-T cells act as “living drugs” to enable constant cytotoxic attacks on targeted malignant cells.The efficacy of CAR-T therapy depends on the tumor-specific antigens and further in vivo expansion of CAR-T cells .Our results have shown the impact of in vivo expansion of CAR-T cells and the resulting alterations in immune cell population of CD19-CAR-T therapy on B-ALL.These could help to characterize clinically important features and develop treatments for patients with different conditions.Furthermore, our study suggests that the histone genes combined with their co-expressed lncRNAs and TFs, as well as the miRNA–TF–gene regulatory networks, may play vital roles in CD19-CAR-T therapy on B-ALL.These findings indicate an impact of these factors or modules for the CD19-CAR-T therapy on B-ALL, and may provide valuable clues for understanding the transcriptional and post-transcriptional regulatory mechanisms underlying CAR-T immunotherapy on cancers.All procedures in this trial, including sample collection, processing, freezing, and laboratory analysis etc., were performed according to established standard operating procedures and protocols in the central laboratory at the Wuhan Union Hospital, China.Huazhong University of Science and Technology and the
The regulatory network analyses revealed that microRNAs (e.g., miR-148a-3p and miR-375), acting as oncogenes or tumor suppressors, could regulate the crosstalk between the genes encoding transcription factors (TFs; e.g., JUN and FOS) and histones (e.g., HIST1H4A and HIST2H4A) involved in CD19-CAR-T therapy.These transcriptome analyses provided important clues for further understanding the gene expression and related mechanisms underlying the efficacy of CAR-T immunotherapy.
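As a concrete, simplified illustration of how such co-expression relationships can be derived from an expression matrix, the sketch below thresholds pairwise Pearson correlations between genes across samples. It is a generic example only; the authors' actual pipeline (described in File S1) may use different normalizations, thresholds, and tools, and the gene list and data here are placeholders.

```python
import numpy as np

def coexpression_edges(expr, gene_ids, r_threshold=0.9):
    """expr: (genes, samples) expression matrix (e.g. log2-transformed abundances)."""
    r = np.corrcoef(expr)                          # gene-by-gene Pearson correlations
    edges = []
    for i in range(len(gene_ids)):
        for j in range(i + 1, len(gene_ids)):
            if abs(r[i, j]) >= r_threshold:
                edges.append((gene_ids[i], gene_ids[j], float(r[i, j])))
    return edges

# Hypothetical usage with a small matrix of TF/histone genes across samples:
genes = ["JUN", "FOS", "CEBPB", "HIST1H4A", "HIST2H4A"]
expr = np.random.default_rng(1).normal(size=(5, 8))   # placeholder data, not study results
network = coexpression_edges(expr, genes, r_threshold=0.8)
```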
Wuhan Union Hospital ethics committees reviewed and approved this trial.All patients enrolled and treated in this trial gave written informed consent before participation.All clinical investigations were consistent with the Declaration of Helsinki.Only patients with relapsed or refractory B-ALL after standard therapies were deemed eligible for the CD19-CAR-T therapy.The CD19-CAR transgene comprises five parts: CD19 single-chain variable fragment, CD8 hinge, CD8-α transmembrane, 4-1BB costimulatory domain, and CD3 zeta chain.The transgene was constructed into the lentiviral vector as shown in Figure S1, and then transferred into the donor T cells according to the protocols of Wuhan Sian Medical Technology.Briefly, the leukocytes were separated from the patient’s blood with the remainder of the blood returned to the patient’s circulation.Subsequently, the leukocytes were incubated with the lentiviral vector encoding the CAR for 10 days according to the protocol .To improve the efficacy of CAR-T therapy, patients were pre-treated with a conditioning chemotherapy agent for 5 days to control the MRD level below 20%.Afterward, patients received a fractionated infusion of CD19-CAR-T cells.The PB and BM samples were obtained from patients on D0, D14, and D30.The percentage of CAR-T cells and normal cells in the CAR-T cell culture after ex vivo expansion and in the PB and BM samples collected from patients on D14 were determined with flow cytometry.The MRD level in BM samples was measured using flow cytometry on D0 and D30.Patients with the MRD level <0.5% after 2 weeks were considered as CR.The presence of BCR-ABL fusion transcript in BM samples was detected with a real-time PCR system with the primers.The tumor burden was calculated as the percentage of tumor cells among all karyocytes in BM samples on D0.Concentrations of cytokines in PB samples were determined with ELISA and with CRS grades evaluated accordingly.Total RNA was isolated from the BM samples of all four patients on D0 and D14 using the standard TRIzol protocol.The RNA quality was determined with the Agilent 2100 Bioanalyzer.Libraries for RNA-seq and small RNA sequencing were prepared according to Illumina’s TruSeq protocol.The libraries were sequenced on the Illumina Hiseq platform with the 2 × 150 bp paired-end strategy at BGI-Shenzhen.Base-calling was performed using the Illumina CASAVA v1.8.2 pipeline.RNA-seq reads containing <35 bp after adapter trimming or with poly-N or many low-quality bases were removed.For small RNA sequencing reads, we filtered reads containing any N base or with a length >40 nt or <17 nt.The Q20, Q30, and GC content of the clean sequencing reads were calculated.All of the downstream analyses were based on the clean and high-quality sequencing reads.Sequencing data obtained from the BM at day 0 and 14 days after CAR-T infusion were analyzed using various bioinformatics tools.The detailed procedures of transcriptome profiling, such as gene/miRNAs expression analysis, immune cell proportion estimation, functional enrichment analysis, and co-expression regulatory network analysis are presented in File S1.Sequencing data in this study have been deposited in the Genome Sequence Archive at the BIG Data Center , Beijing Institute of Genomics, Chinese Academy of Sciences, as GSA: CRA000746, which is publicly accessible at http://bigd.big.ac.cn/gsa.AYG, QZ, YW, and JY conceived the project.AYG, QZ, and WY supervised the study.WY and JY performed the clinical trial and biochemical examinations.HH, QZ, SC, FH, and 
CL performed the bioinformatics analysis.HH and QZ drafted the manuscript with the help of WY.AYG, QZ, and HH revised the manuscript.All authors read and approved the final manuscript.The authors declare that they have no competing interests.
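The read-filtering criteria stated in the sequencing methods above translate directly into simple predicates; the sketch below implements just the length and N-base rules (adapter trimming and quality-based filtering, which the study also applied, are omitted). Function names and the example reads are illustrative, not part of the study's pipeline.

```python
def keep_small_rna_read(seq: str) -> bool:
    # small RNA reads: no N bases, length between 17 and 40 nt
    return ("N" not in seq.upper()) and (17 <= len(seq) <= 40)

def keep_rnaseq_read(trimmed_seq: str) -> bool:
    # RNA-seq reads: at least 35 bp remaining after adapter trimming
    return len(trimmed_seq) >= 35

reads = ["ACGTACGTACGTACGTACGTA", "ACGTNACGTACGTACGTA", "ACGT"]
small_rna_passed = [r for r in reads if keep_small_rna_read(r)]
```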
Chimeric antigen receptor (CAR) T cell therapy has exhibited dramatic anti-tumor efficacy in clinical trials.The efficacy of CD19-CAR-T therapy on B-ALL was positively correlated with the abundance of CAR and immune cell subpopulations, e.g., CD8+ T cells and natural killer (NK) cells, in the bone marrow.Furthermore, many long non-coding RNAs showed a high degree of co-expression with TFs or histones (e.g., FOS and HIST1H4B) and were associated with immune processes.
Magnetic resonance imaging has had an unprecedented contribution to our understanding of the brain, thanks to its ability to take extremely detailed pictures of this organ non-invasively.As our understanding of the MR signal increased, and hardware development allowed us to push the boundaries further and further, a range of image contrasts, each reflecting different properties of the tissue, has become available.This has prompted a shift from qualitative to quantitative MRI that represented a true revolution in the application of MRI for research, particularly with the development of techniques able to detect changes occurring at the microstructural level.Most of these techniques have proven extremely sensitive to tissue abnormalities, albeit at the price of poor specificity.The MRI signal is a very indirect measure of the tissue properties we are interested in, and despite the influence that factors such as myelin content and axonal packing have on the contrast, the variety of factors that contribute to the overall signal prevents a one-to-one association between MRI biomarkers and biological substrate.In order to overcome this intrinsic limitation of MRI, increasingly sophisticated models of signal behaviour have been developed, in an attempt to link the MRI signal to specific tissue features, e.g.However, these applications remain associated with prohibitively long scan times and poor reproducibility.How can we access non-invasive imaging biomarkers with improved specificity?,The answer lies in the versatility of MRI: by combining several MRI contrasts we can exploit the relative contributions of differing pathological substrates to selected MRI contrasts and substantially increase the sensitivity to specific substrates.A way of picturing this is by imagining that many tissue components have been “encoded” via different filters in each MRI technique.Multi-modal MRI is thus the way to decode them.Multi-modal MRI is a broad concept that refers to any attempt to combine information coming from more than one MRI contrast.The possible approaches thus span from simply measuring several MRI parameters in the same individuals, to developing joint models, to using complex computational approaches to derive new measures.In this paper we will review some examples of multi-modal imaging with the aim of identifying the advantages of this approach while highlighting at the same time the challenges and pitfalls associated with it.The paper is organised as follows: first we will review the main components of brain tissue that we may want to characterise, and the MRI techniques that so far hold the most promise for achieving this goal.Next we will discuss the evidence that supports the complementarity of some of these techniques.Finally, we will review the most popular methods for the acquisition and analysis of multi-modal data.The aim of microstructural imaging is to quantify the properties of tissue components, such as myelin, axons, dendrites, glia, and to characterise pathological features such as demyelination, inflammation, axonal loss.In other words, the ultimate goal of microstructural imaging is to be able to provide non-invasive histology.While the same principles apply to the study of white and grey matter, this paper will focus primarily on the former tissue.This is because most of the work done to date concerns the white matter, and models of the MRI signal in white matter are usually less ambiguous than those of the grey matter.The white matter of the human brain is composed by tightly 
packed myelinated and non-myelinated axons and glial cells.The glial cells include oligodendrocytes, astrocytes, microglia, and oligodendrocyte progenitor cells.Pathology in the white matter thus consists mainly of demyelination, axonal degeneration and loss, and gliosis.In addition, iron, which is stored in the ferritin protein, tends to accumulate with age and neurodegenerative processes, although its concentration levels are higher in the grey matter than in white matter.Similar changes are induced to MRI biomarkers by each and/or a combination of these abnormalities, complicating their interpretation.Most MRI parameters tend to share some degree of variance, and disentangling these contributions is essential to understand the pathophysiology of neurological disorders and therefore to develop treatments.In addition to disease, measuring white matter changes is also relevant to understanding the mechanisms underpinning plastic changes occurring to the brain as a consequence of maturation, ageing, training and lifestyle.Here we will provide a brief overview of some basic concepts that may be needed to understand the following sections.While an extensive review of each technique is beyond the scope of this paper, interested readers can refer to the references provided below for more details.This list of techniques is not meant to be exhaustive: other MRI methods that offer insight into microstructure exist.Here we have included the most popular ones and also those that so far have been most consistently combined in a multi-modal fashion.The contrast in diffusion MRI arises from the interaction between the random motion of water molecules and the obstacles they encounter within tissue.If such obstacles are not distributed uniformly, but rather form ordered “barriers” to diffusion, then diffusion becomes anisotropic.Diffusion tensor imaging can model diffusion anisotropy, and allows a number of scalar indices to be derived, that can be used to characterise tissue microstructure.Fractional anisotropy has become one of the most popular MRI-derived indices in clinical studies, and it has been applied to the study of many neurological and psychiatric disorders.However, changes to anisotropy are difficult to interpret because different effects, such as myelin loss and axonal degeneration, could result in the same FA change.A more comprehensive picture can be gained by looking at the eigenvalue changes at the same time, or at the so-called axial and radial diffusivities.However, care must be taken, as axial and
The MRI signal is dependent upon a number of sub-voxel properties of tissue, which makes it potentially able to detect changes occurring at a scale much smaller than the image resolution.Despite the exciting promise of unique insight beyond the resolution of the acquired images, its widespread application is limited by the relatively modest ability of each microstructural imaging technique to distinguish between differing microscopic substrates.This is mainly due to the fact that MRI provides a very indirect measure of the tissue properties in which we are interested.A strategy to overcome this limitation lies in the combination of more than one technique, to exploit the relative contributions of differing physiological and pathological substrates to selected MRI contrasts.This forms the basis of multi-modal MRI, a broad concept that refers to many different ways of effectively combining information from more than one MRI contrast.This paper will review a range of methods that have been proposed to maximise the output of this combination, primarily falling into one of two approaches.
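For reference, the DTI scalar indices discussed above are defined from the diffusion tensor eigenvalues $\lambda_1 \ge \lambda_2 \ge \lambda_3$ in the standard way:

```latex
\mathrm{AD} = \lambda_{1},
\qquad
\mathrm{RD} = \frac{\lambda_{2}+\lambda_{3}}{2},
\qquad
\bar{\lambda} = \frac{\lambda_{1}+\lambda_{2}+\lambda_{3}}{3},
\qquad
\mathrm{FA} = \sqrt{\frac{3}{2}}\,
\frac{\sqrt{(\lambda_{1}-\bar{\lambda})^{2}+(\lambda_{2}-\bar{\lambda})^{2}+(\lambda_{3}-\bar{\lambda})^{2}}}
     {\sqrt{\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{3}^{2}}},
```

which makes explicit why a given FA change can arise from different combinations of axial and radial diffusivity changes, and hence why FA alone is hard to interpret.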
of other MR parameters, each used as a surrogate of one or more of these substrates.The unexplained, or residual, variance is then assumed to measure the tissue component which was not modelled by any of the surrogates.A few examples of successful application of linear regression can be found in the literature.Ciccarelli et al. measured NAA and AD in the spinal cord of MS patients, and then fed them into a statistical model to selectively estimate impairment of mitochondrial metabolism.Lower residual variance in NAA, assumed to reflect reduced mitochondrial metabolism, was associated with greater clinical disability in MS, independent of structural damage.Similarly, Callaghan et al. derived a linear relaxometry model, by combining MTR and T2* to explain T1 variance.These examples highlight one of the potential problems with MRI biomarkers, namely their collinearity.T1 tends to correlate with T2 and MT-derived quantities; T2* and MT might share some variance: overall there is some overlap between the quantities that we are hoping to use to measure differing underlying pathology.Some of these data-driven approaches attempt to remove this collinearity, and to isolate the unique contribution of each specific technique.More sophisticated approaches exploit multivariate methods, which provide tools able to reduce the dimensionality of the data and to extract from them some “latent variables” that better represent the characteristics of the object under study.Examples include principal component analysis, independent component analysis, and factor analysis, all of which re-express the data into a series of components obtained as linear combinations of the original observations.The difference between these 3 methods is in the criteria used to define these linear combinations.Two or more MRI modalities might be differently sensitive to several microscopic properties of tissue at the same time.Applying a data reduction approach might help to identify the common “latent” source of contrast, ideally related to a specific substrate.A nice example is provided by the multivariate myelin estimation model proposed by Mangeat et al.Assuming that T2* and MTR are both sensitive to myelin content, but also affected by other factors such as iron content and tissue orientation or inflammation and pH, they combined them using ICA to identify their shared information, assumed to reflect only myelin density in the human cortex.These methods are relatively simple to implement and potentially useful for MRI modality combination.However, it must be noted that they are unable to distinguish between the intrinsic variability of a parameter due to the underlying microstructure, and the variability dependent on measurement error and image inhomogeneity.In addition, the interpretation of the resulting components is not always straight-forward, and in some cases the latent variables may remain elusive in their meaning.Machine learning approaches constitute a more advanced class of computational methods, which can be used to associate a combination of MRI techniques with a range of microstructural features.ML methods rely on training an algorithm to identify the features of interest using real data.Once trained, the classifier can be used on previously unseen data.Although there are no published examples to date, in the context of multimodal imaging, animal or post-mortem data could be used to associate a combination of MRI parameters with a specific tissue substrate validated with histology, and then translated into 
clinical applications.One of the limitations of ML algorithm is that they require a very large number of observations in order to produce reliable associations.This might not always be possible in the context of biological samples.This family of methods differs from multivariate models because it attempts to combine parameters extracted by existing biophysical or signal models to obtain new parameters, which are believed to be more accurate or more specific than the original ones.Biophysical models refer to those that explain the MR signal as a function of biological properties – example: axon diameter distribution; whereas the signal models explain the MR signal using mathematical or statistical properties – example: diffusion kurtosis.A simple example of this kind is driven equilibrium single pulse observation of T1 and T2 – a method for quantifying T1 and T2 based on steady-state free precession sequences.The SSFP signal equation is a function of both T1 and T2, as both longitudinal and transverse magnetization are brought into dynamic equilibrium through the application of repeated RF pulses.In order to disentangle T1 and T2, an independent measure of either one is required.T1 is thus estimated using spoiled gradient echo at variable flip angles, thus enabling T2 to be extracted.The method can be generalised to assume multiple water components, characterised by separate relaxation times.The multi-component version yields maps of the fractions of myelin water as well as of intra and extra-cellular water, and has been used in multiple studies to characterise myelination and other microstructural properties of tissue.The sensitivity of relaxometry to myelin can be further exploited by combining these techniques with dMRI.While dMRI is highly sensitive to tissue geometry and integrity, the long echo times typically required to achieve sufficient diffusion weighting result in no signal contribution from the fast decaying myelin component.In principle the complementarity of the 2 techniques can be exploited to obtain separate estimates of the volumes of myelin, extra-cellular and intra-cellular spaces.Their complementarity derives from the fact that multi-compartment models of dMRI can easily separate the intra-cellular and the extra-cellular volume fractions, but typically are not sensitive to myelin while mcDESPOT does not distinguish intra- and extra-cellular spaces and measures their volume fractions as a combined sum.Bouyagoub et al. proposed a simple model that requires the separate acquisition of mcDESPOT and neurite orientation dispersion
The first one relies on data-driven methods, exploiting multivariate analysis tools able to capture overlapping and complementary information.The second approach, which we call “model-driven”, aims at combining parameters extracted by existing biophysical or signal models to obtain new parameters, which are believed to be more accurate or more specific than the original ones.
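To make the first, data-driven approach concrete, the sketch below extracts a single shared component from two co-registered parameter maps by z-scoring their voxel values and taking the first principal component via SVD. It is a deliberately simplified stand-in for approaches such as the ICA-based myelin estimation discussed above, under the assumption that the two maps are already aligned; all names and data are placeholders.

```python
import numpy as np

def shared_component(map_a, map_b, mask):
    """map_a, map_b: co-registered 3D parameter maps; mask: boolean tissue mask."""
    x = np.vstack([map_a[mask].ravel(), map_b[mask].ravel()]).astype(float)
    x = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
    # SVD of the 2 x n_voxels matrix: the first right-singular vector gives the
    # voxel-wise expression of the variance shared by the two contrasts
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    latent = np.full(map_a.shape, np.nan)
    latent[mask] = vt[0] * s[0]
    return latent, u[:, 0]          # latent map and per-modality weights

# Hypothetical usage with random arrays standing in for real MTR and T2* maps:
rng = np.random.default_rng(0)
mtr, t2s = rng.normal(size=(32, 32, 32)), rng.normal(size=(32, 32, 32))
mask = np.ones(mtr.shape, dtype=bool)
shared_map, weights = shared_component(mtr, t2s, mask)
```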
was already modeled in HLC. In general, the cells were more susceptible to infection with an increasing state of cell differentiation. Late-stage differentiated HLC were able to maintain sporozoite infection and expressed sufficient levels of cytochrome P450 enzymes to enable the metabolism of drugs such as primaquine, which is frequently used to prevent persistence and replication of the parasite. The proportion of HLCs infected with Plasmodium varied between 10 and 60%. However, infection of erythrocytes by merozoites released from the HLC cultures has not yet been demonstrated. Further, stable long-term expression of host factors required for efficient infection and replication of the pathogens, such as CD81, SRB1, or CLDN1 and NTCP, still remains a challenge in hiPSC-derived liver tissues. Complex in vitro models emulating all major cell types and tissues affected during the journey of the parasite into the liver are required to gain a deeper insight into the cell-type- and tissue-specific pathways involved in the infection process. MPS would enable the investigation of the whole infection process by emulating the blood compartment, including immune cells and erythrocytes, in combination with hiPSC-derived liver tissues. So far, hiPSC-derived cells of the liver have not been broadly employed for disease modeling in MPS. In fact, there is only one system we are aware of, a heart-liver-vascular platform operating with multiple co-cultures and perfusion. In a matrix-supported microchannel architecture vascularized by endothelial cells, this system combines iPSC-derived cardiomyocytes with either PHH or iPSC-derived hepatocytes. Microtissue droplets were formed from HLCs co-cultured with fibroblasts and embedded in photopolymerized hydrogels assembled around an artificial vasculature. The application of multi-organ-on-chip platforms will certainly be the next important step in in vitro research on infectious diseases. Due to the increasing problem of multi-resistant bacteria able to infect the liver, and of emerging pathogens such as Streptococcus bovis and Klebsiella pneumoniae that cause metastatic infections of the liver through systemic spread, more complex and reliable in vitro models that also emulate systemic interactions of the liver with other organs are urgently needed. First multi-organ-on-a-chip models have already been described that model the interaction of the liver with the lung and fat tissue, allowing the investigation of drug biodistribution through microfluidic flow linkage. The interaction between gut and liver was also mimicked by Esch et al., who constructed a liver-intestine biochip. Bricks et al.
also emulated the gut-liver axis by interconnecting cell-culture inserts in microfluidic biochips containing HepG2/C3A and Caco-2 cells, respectively .In addition to the intestine, interaction of the liver with other organs such as kidney , neural cells and skin has also been modeled .Many molecular and cellular pathways involved in pathogen entry into the host cell of different organs, the mechanisms supporting pathogen persistence, and triggers required for the induction of post-infective organ regeneration still need to be uncovered.These questions could also be effectively addressed in complex in vitro models such as MPS to elucidate the underlying pathophysiology and to develop novel treatment options for critically ill patients.With the broader availability of more reliable in vitro systems we will also be able to further reduce the number of required animal experimentations in infection research.Several concerns were raised regarding the reliability of animal-tested products on humans which imposed ethical and economic pressures on pharmaceutical industries to amend their drug development methods .MPS represent a powerful alternative to complement animal models as part of a tailored research strategies and could contribute to reduce animal numbers in biomedical research.Further, these systems can contribute to speed up the translation process in drug research from the nonclinical phase to clinical testing.MPS can even contribute to make translational research safer for the patient by using patient-specific stem cells and tissue derived therefrom to detect, i.e. idiosyncratic drug reactions in the context of a personalized medicine.
Complex cell culture models such as microphysiological models (MPS) mimicking human liver functionality in vitro are in the spotlight as alternative to conventional cell culture and animal models.Promising techniques like microfluidic cell culture or micropatterning by 3D bioprinting are gaining increasing importance for the development of MPS to address the needs for more predictivity and cost efficiency.In this context, human induced pluripotent stem cells (hiPSCs) offer new perspectives for the development of advanced liver-on-chip systems by recreating an in vivo like microenvironment that supports the reliable differentiation of hiPSCs to hepatocyte-like cells (HLC).In this review we will summarize current protocols of HLC generation and highlight recently established MPS suitable to resemble physiological hepatocyte function in vitro.In addition, we are discussing potential applications of liver MPS for disease modeling related to systemic or direct liver infections and the use of MPS in testing of new drug candidates.