High-resolution detectors are used in relative dosimetry, such as dose profile measurements, to correctly characterize penumbra regions with steep dose gradients. Due to the increasing use of small fields in state-of-the-art radiation therapy, high-resolution detectors are also employed to obtain small-field output factors. Compact vented ionization chambers have the advantage that the energy dependence of their dose response is weaker than that of silicon diodes. However, the signal of an ionization chamber is affected by ion recombination and the polarity effect, the latter becoming more dominant with decreasing sensitive volume. Furthermore, even with their compact designs, the volume effect may still cause significant signal perturbation, partly caused by the low-density air cavity. Nevertheless, compact vented ionization chambers remain an integral component of relative dosimetry. These chambers can be calibrated in a standards laboratory for use in reference dosimetry. Such reference-class dosimeters should fulfil the requirements laid out by dosimetry protocols. The compact ionization chamber investigated in this work replaces the existing PinPoint 31014. The manufacturer recommends the new chamber for field sizes down to 2 cm × 2 cm. The present work is based on a previous publication by Delfs et al.
, in which a detailed dosimetric characterization of a 3D chamber was performed. The same methodology has been applied here to determine the dosimetric properties of the novel PTW 31023 chamber, which features a new inner electrode and guard ring design. The properties of the new chamber are compared to those of its predecessor. The effective point of measurement was determined experimentally. The lateral dose response functions were characterized using a narrow-beam geometry. The saturation correction factors were determined at different dose-per-pulse values. The polarity correction factors were measured according to the DIN 6800-2 protocol at two nominal photon energies and field sizes from 5 cm × 5 cm to 40 cm × 40 cm. The beam quality correction factors kQ for reference conditions were simulated for three different chamber models with varying central electrode diameters. Furthermore, the non-reference condition correction factors, kNR, which account for the difference in detector response due to spectral changes between the measurement conditions and reference conditions, were studied. Small-field output correction factors for this chamber, determined according to TRS 483, have been published recently. Unless stated otherwise, all measurements were performed at a Siemens Primus linear accelerator in an MP3-M water phantom. The air cavity of the PTW 31023 ionization chamber has a length of 5 mm and a diameter of 2 mm. The diameter of the aluminium central electrode has been increased from 0.3 mm to 0.6 mm. The resulting sensitive volume of the chamber is 0.015 cm³. The chamber wall consists of a 0.09 mm graphite shell and a 0.57 mm PMMA outer wall. The outer dimensions of the PTW 31023 ionization chamber remain unchanged with respect to the PTW 31014 chamber. A schematic cross-section is shown in Fig. 1. The EPOM of the PTW 31023 was determined experimentally. Measurements were performed using 6 and 10 MV photon beams. The detector was positioned radially and axially.
In the first step, reference percentage depth dose (PDD) curves were obtained using a Roos chamber, for which the EPOM has been reported to be located at Δz = +0.4 mm ± 0.1 mm below its reference point. The measurement depth, zM, is therefore given by zB + Δz, where zB is the depth of the detector's reference point and Δz is the shift of the EPOM from the reference point. A positive value of Δz indicates that the EPOM is located downstream of the reference point, and vice versa. The reference point of the PTW 31023 chamber is located on the chamber's symmetry axis, 3.4 mm below the chamber tip, as shown in Fig. 1. Initially, the chamber was positioned with zM equal to zB. The PDD curves thereby obtained were then compared to the reference PDD curve obtained with the Roos chamber. By shifting the PDD curves of the PTW 31023 chamber against the reference curve, the values of Δz were determined by minimizing the square of the relative difference between them. All measurements were performed with a bias voltage of 200 V, as recommended by the manufacturer, using a field size of 10 cm × 10 cm and a source-to-surface distance of 100 cm. The measurements were repeated using positive and negative polarity. Three sets of measurements were acquired, with the chamber removed from the holder and repositioned between repetitions. The polarity-corrected PDD curves were obtained by taking the mean of the values obtained with both polarities. The lateral dose response functions of the PTW 31023 chamber, i.e.
the σ values in Eq., were determined for three different chamber orientations: axial, radial lateral and radial longitudinal. Measurements were performed at an Elekta Synergy linac using 6 and 15 MV photon beams at an SSD of 100 cm. The signal profiles, M, of a 1 cm × 40 cm field were scanned along its short side at 5 cm water depth. All measurements were performed using a bias voltage of 200 V and repeated with both positive and negative polarity. The dose profiles, D, were obtained using a microDiamond detector in axial orientation, which has been shown to be a good approximation of the dose profile even in regions with steep dose gradients. The microDiamond's profiles were then convolved with a normalized one-dimensional Gaussian distribution according to Eq. The σ-value was varied to minimize the difference between the convolution product and the measured signal profiles M. The beam quality correction factors kQ were simulated using
This chamber replaces the previous model (PTW 31014), with the diameter of the central electrode increased from 0.3 mm to 0.6 mm and a redesigned guard ring. The shifts of the effective point of measurement (EPOM) from the chamber's reference point were determined by comparing the measured PDD curves with the reference curve obtained with a Roos chamber. The polarity effect correction factors kP were measured for field sizes from 5 cm × 5 cm to 40 cm × 40 cm.
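The shift-and-minimize procedure for the EPOM can be sketched numerically. The following is a minimal illustration, not the authors' actual code: a chamber PDD curve is shifted against a reference curve and the shift minimizing the summed squared relative difference is selected. The analytic PDD curve and the function name `epom_shift` are our own hypothetical constructions.

```python
import numpy as np

def epom_shift(z, pdd_chamber, pdd_ref, shifts):
    # find the shift dz (mm) that best aligns the chamber PDD with the
    # reference PDD by minimizing the summed squared relative difference
    best_dz, best_cost = None, np.inf
    for dz in shifts:
        # chamber curve re-plotted at shifted depths, re-sampled on the grid
        shifted = np.interp(z, z + dz, pdd_chamber)
        # compare only where both curves are defined after the shift
        mask = (z >= z[0] + abs(dz)) & (z <= z[-1] - abs(dz))
        cost = np.sum(((shifted[mask] - pdd_ref[mask]) / pdd_ref[mask]) ** 2)
        if cost < best_cost:
            best_dz, best_cost = dz, cost
    return best_dz

# synthetic check: fabricate a chamber curve whose EPOM sits 0.5 mm
# upstream of the reference point, i.e. M(z) = D(z - 0.5 mm)
z = np.arange(0.0, 300.5, 0.5)                   # depth grid in mm
pdd_ref = 100 * np.exp(-0.005 * z) * (1 - np.exp(-0.3 * (z + 5)))
pdd_chamber = np.interp(z, z + 0.5, pdd_ref)     # = pdd_ref evaluated at z - 0.5
dz = epom_shift(z, pdd_chamber, pdd_ref, np.arange(-2.0, 2.001, 0.01))
```

On this synthetic input the recovered shift is Δz ≈ −0.5 mm, i.e. upstream of the reference point, matching the sign convention used in the text.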
the Monte-Carlo package EGSnrc with the user code egs_chamber. The chambers were modelled according to the blueprints provided by the manufacturer, including the guard rings. To systematically study the influence of the central electrode on the correction factor, three models of the PTW 31023 chamber with different inner electrode diameters were investigated. Additionally, the kQ values of the previous PTW 31014 chamber were simulated for comparison. For measurements under non-reference conditions, the correction factor kNR is applied according to DIN 6800-2 to account for the influence of spectral changes from the reference condition due to the energy dependence of the detector response. The determination of the correction factor kNR for the PTW 31023 was performed using the approach described in Delfs et al. The response function r of the PinPoint 31023 as a function of photon energy was simulated using the EGSnrc package and the egs_chamber user code for monoenergetic photon beams with energies from 20 keV to 15 MeV under conditions of secondary electron equilibrium. Fig. 2 shows the 6 MV PDD curves obtained with the PTW 31023 chamber positioned with its reference point at the measurement depth, together with the reference PDD curve obtained with the Roos chamber positioned with its EPOM at the measurement depth, for the radial and axial orientations. Deviations between the measurements using positive and negative polarity can be observed in the build-up region for the radial orientation, whereas no difference is observed for the axial orientation. The polarity-corrected curves are shown as green lines. To determine the Δz values, the polarity-corrected PDD curves of the PTW 31023 were shifted to the left by minimizing the difference to the reference curve. The resulting Δz values are −0.55 mm and −0.56 mm in the radial orientation and −0.97 mm and −0.91 mm in the axial orientation, for 6 and 10 MV respectively. All values of Δz have an uncertainty of 0.1 mm. Generally, the EPOM in both the radial and axial orientations is always shifted upstream from the chamber's reference point, i.e. Δz is always negative. The lateral signal profiles of a 15 MV photon beam along the narrow side of a 1 cm × 40 cm field, measured with the PTW 31023 chamber in all three orientations, are shown in Fig. 3. Discrepancies between the measurements using positive and negative polarity can be seen at the field borders in the axial and radial lateral orientations. The polarity-corrected profiles are shown as green lines. The dose profiles obtained with the microDiamond detector, which were used as the reference, were then convolved with a 1D Gaussian function according to Eq., as shown in the right panels. The optimal σ-values for a 6 MV photon beam are 0.80 mm, 0.75 mm and 1.76 mm for the axial, radial lateral and radial longitudinal orientations, respectively. A small energy dependence can be observed, whereby the σ values for the 15 MV photon beam are greater. All σ values have an uncertainty of 0.05 mm. Fig. 4 shows the Jaffe plots at the highest DPP used in this study for the PTW 31014 and PTW 31023 chambers. The new PTW 31023 chamber is less subject to the polarity effect. It is also noteworthy that the values measured using positive polarity are greater than those using negative polarity for the PTW 31023 chamber, whereas the PTW 31014 shows the reverse behaviour. For both chambers, a linear regression was performed only for voltages between 50 V and 200 V. Fig. 5 shows the kS values determined from the polarity-corrected values for both chambers at four DPP values. The γ and δ values obtained by linear regression according to Eq. are also presented for both chambers. Generally, the new PTW 31023 chamber shows improved saturation behaviour, with the ion recombination effect less prominent than for the PTW 31014 chamber. Fig. 6 shows the polarity effect correction factor kP for both the PTW 31014 and PTW 31023 chambers at field sizes from 5 cm × 5 cm to 40 cm × 40 cm. The values of kP decrease with increasing field size for the PTW 31014 chamber, while the opposite is observed for the new PTW 31023 chamber. Furthermore, kP for the PTW 31023 chamber also shows a weaker energy dependence. Under the reference conditions defined in DIN 6800-2, the kP of the PTW 31014 chamber is 1.0094 ± 0.0020 and 1.0116 ± 0.0020 for 6 and 10 MV respectively, while the kP of the PTW 31023 chamber is 1.0005 ± 0.0020 and 1.0013 ± 0.0020 for 6 and 10 MV respectively. Fig. 8 shows the correction factors kNR for the PTW 31023 chamber as a function of mean photon energy for a 6 MV photon beam. The range of mean energies was obtained by varying the field size from 1 cm × 1 cm to 30 cm × 30 cm and the measurement depth from 2 cm to 30 cm, with kNR equal to unity under the reference condition. Over the whole range of investigated mean energies, the PTW 31023 chamber shows only a very small energy dependence, with corrections amounting to less than 1%. The EPOM of the PTW 31023 chamber, when irradiated radially, is found to lie 0.55 mm ± 0.10 mm and 0.56 mm ± 0.10 mm towards the source from its reference point. Since the radius of the air cavity of the chamber measures 1 mm, the shifts correspond to 0.55r and 0.56r respectively, which lie between the general value of 0.5r recommended in DIN 6800-2 and 0.6r recommended by IAEA TRS-398. No recommendation is given
Results: The shifts of the EPOM from the reference point, Δz, are found to be −0.55 mm (6 MV) and −0.56 mm (10 MV) in the radial orientation and −0.97 mm (6 MV) and −0.91 mm (10 MV) in the axial orientation. All values of Δz have an uncertainty of 0.1 mm. The σ values are 0.80 mm (axial), 0.75 mm (radial lateral) and 1.76 mm (radial longitudinal) for the 6 MV photon beam, and 0.85 mm (axial), 0.75 mm (radial lateral) and 1.82 mm (radial longitudinal) for the 15 MV photon beam. All σ values have an uncertainty of 0.05 mm. Under reference conditions, the polarity effect correction factor kP of the PTW 31014 chamber is 1.0094 and 1.0116 for 6 and 10 MV respectively, while the kP of the PTW 31023 chamber is 1.0005 and 1.0013 for 6 and 10 MV respectively; all values have an uncertainty of 0.002. The correction factor kS of the new chamber is 0.1% smaller than that of the PTW 31014 at the highest DPP investigated.
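The σ determination rests on the convolution model in which the measured signal profile M is the dose profile D convolved with a Gaussian kernel K of width σ. A minimal sketch of such a fit, using synthetic profiles rather than the measured data (the grid, field width and function names are our own assumptions):

```python
import numpy as np

def gaussian_kernel(x, sigma):
    # normalized 1-D Gaussian sampled on a grid symmetric about zero
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def fit_sigma(x, dose, signal, sigmas):
    # convolution model: the measured signal M is the dose profile D
    # convolved with a Gaussian kernel K; pick the sigma that best
    # reproduces the measured signal
    best_s, best_cost = None, np.inf
    for s in sigmas:
        m_model = np.convolve(dose, gaussian_kernel(x, s), mode="same")
        cost = np.sum((m_model - signal) ** 2)
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s

# synthetic check: an idealized 1 cm field profile blurred with sigma = 0.8 mm
x = np.linspace(-20.0, 20.0, 401)                 # lateral position (mm), odd length
dose = ((x > -5) & (x < 5)).astype(float)         # ideal dose profile D
signal = np.convolve(dose, gaussian_kernel(x, 0.8), mode="same")
sigma = fit_sigma(x, dose, signal, np.arange(0.3, 1.51, 0.05))
```

Here the fit recovers σ = 0.8 mm from the blurred edge; in the paper, the same one-parameter search is applied with the microDiamond profile as D and the chamber profile as M.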
in DIN 6800-2 for measurements in the axial orientation. However, due to the construction of this chamber, i.e. the length of its air cavity being 2.5 times greater than its diameter, it should be positioned axially for relative profile measurements, as recommended by the recent TRS 483, and for this orientation the EPOM has been determined in this work. Furthermore, as shown by the PDD curves in Fig. 2, the chamber is subject to the polarity effect in the build-up region due to the absence of secondary electron equilibrium. Similar behaviour has been reported by other authors. The required polarity effect correction in the build-up region amounts to up to 5% when the chamber is positioned radially, where the measured signal at positive polarity, i.e. when positive charge is collected, is higher than at negative polarity. The effect decreases, however, with increasing depth as secondary electron equilibrium is established. Interestingly, the polarity effect is almost negligible when the chamber is positioned axially. Compared to the radial orientation, where the cable is located at almost the same depth as the chamber itself, the cable in the axial orientation extends along the beam's axis, i.e. towards greater depths where secondary electron equilibrium has been established. Since cable irradiation has been identified as a main source contributing to the polarity effect, the PDD measurement is subject to a smaller polarity effect when the chamber is positioned axially. The lateral dose response functions K of the PTW 31023 chamber can be approximated by a 1D Gaussian function, for which the σ-values have been determined for 6 and 15 MV in three chamber orientations. A small energy dependence of the σ values was observed, with the value increasing slightly with beam energy. Compared to the Semiflex 3D ionization chamber, with σ31021 = 2.1 mm ± 0.05 mm for all chamber orientations, the values of the PTW 31023 are smaller, representing a smaller volume effect. According to the convolution model, the undistorted dose profile D can be computed from M by deconvolution using the knowledge of K. It is also noteworthy that the chamber is subject to a stronger polarity effect at the field borders of small fields, such as the 1 cm wide narrow beam shown in Fig. 3. Whereas the polarity effect in the build-up region of PDD measurements stems from the lack of longitudinal secondary electron equilibrium, at the field borders it is the lack of lateral secondary electron equilibrium that contributes to the observed polarity effect. Therefore, the polarity effect should be accounted for when the chamber is used for profile measurements at small field sizes. The Jaffe plots in Fig. 4 demonstrate that the detector readings deviate from the linear behaviour between 1/M and 1/U at the higher voltages used in this study. Therefore, a linear regression was performed only for detector readings between 50 V and 200 V to obtain the correction factors kS; the upper limit corresponds to the operating voltage of 200 V recommended for the PTW 31023 chamber by the manufacturer. The chamber-specific parameters γ and δ according to Eq. have been determined for both the PTW 31014 and 31023 chambers, with the new PTW 31023 chamber exhibiting improved saturation behaviour compared to the PTW 31014 chamber. Furthermore, the observed stronger voltage-dependent polarity effect of the PTW 31014 chamber can be attributed to the different work functions of the guard ring and inner collecting electrode materials, which cause a small potential difference between them. Under reference conditions, the polarity effect corrections according to DIN 6800-2 for the PTW 31023 chamber amount to 0.05% and 0.13% for 6 and 10 MV photon beams respectively, whereas the corrections for the PTW 31014 chamber are around 1% higher under the same conditions. The corresponding polarity effect correction factors calculated according to TG-51, Ppol = (|M+| + |M−|)/(2|M+|), are 0.9989 ± 0.0020 and 0.9996 ± 0.0020 for the PTW 31023 chamber. Therefore, the new chamber fulfils the polarity effect requirements set by the TG-51 addendum for reference-class dosimeters, which agrees with the findings of a recent publication. For a 6 MV photon beam, the PTW 31023 chamber shows a small energy dependence for measurements under non-reference conditions along the central axis, where the correction factors kNR change by less than 1% when varying the field size from 1 cm × 1 cm to 30 cm × 30 cm and the measurement depth from 2 cm to 30 cm.
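The TG-51 polarity correction quoted above is a one-line computation; a small sketch with made-up chamber readings (the values are illustrative, not measured):

```python
def p_pol(m_plus, m_minus):
    # Ppol = (|M+| + |M-|) / (2 |M+|), with M+ the reading at the
    # routinely used (here: positive) polarity
    return (abs(m_plus) + abs(m_minus)) / (2.0 * abs(m_plus))

# illustrative (made-up) readings in nC
print(round(p_pol(10.000, -9.978), 4))  # prints 0.9989
```

A Ppol this close to unity is what allows the chamber to meet the TG-51 addendum tolerance for reference-class dosimeters.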
Introduction: The aim of the present work is to perform a dosimetric characterization of a novel vented PinPoint ionization chamber (PTW 31023, PTW-Freiburg, Germany). Correction factors for reference and non-reference measurement conditions were examined. Materials and methods: Measurements and calculations of the correction factors were performed according to DIN 6800-2. The chamber's lateral dose response functions, which act according to a mathematical convolution model as the convolution kernel transforming the dose profile D(x) into the measured signal M(x), have been approximated by Gaussian functions with standard deviation σ. Additionally, the saturation correction factors kS have been determined using different dose-per-pulse (DPP) values. The influence of the diameter of the central electrode and the new guard ring on the beam quality correction factors kQ was studied by Monte-Carlo simulations. The non-reference condition correction factors kNR have been computed for a 6 MV photon beam by varying the field size and measurement depth. Comparisons in these respects have been made to the previous model. The correction factor kS was found to be 1.0034 ± 0.0009 for the PTW 31014 chamber and 1.0024 ± 0.0007 for the PTW 31023 chamber at the highest DPP (0.827 mGy) investigated in this study. The kP of the new chamber also exhibits a weaker field size dependence. The kQ values of the PTW 31023 chamber are closer to unity than those of the PTW 31014 chamber due to the thicker central electrode and the new guard ring design. The kNR values of the PTW 31023 chamber for a 6 MV photon beam deviate by no more than 1% from unity for the conditions investigated. Discussion: The correction factors required to perform reference and relative dose measurements with the new chamber have been determined according to the DIN protocol. Under reference conditions, the correction factor kP of the PTW 31023 chamber is approximately 1% smaller than that of the PTW 31014 chamber for both energies used. The dosimetric characteristics of the new chamber investigated in this work have been demonstrated to fulfil the requirements of the TG-51 addendum for reference-class dosimeters at reference conditions.
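The kS evaluation from a Jaffe plot, i.e. a linear fit of 1/M against 1/U restricted to 50–200 V followed by extrapolation to infinite voltage, can be sketched as follows. The readings are synthetic and exactly linear in 1/U; the paper's specific two-parameter (γ, δ) form is not reproduced here.

```python
import numpy as np

def k_s_from_jaffe(voltages, readings, u_op=200.0):
    # Jaffe plot: linear fit of 1/M against 1/U; the intercept at
    # 1/U -> 0 estimates the saturation reading M_sat, and
    # kS = M_sat / M(u_op) corrects the reading at the operating voltage
    inv_u = 1.0 / np.asarray(voltages, dtype=float)
    inv_m = 1.0 / np.asarray(readings, dtype=float)
    slope, intercept = np.polyfit(inv_u, inv_m, 1)
    m_sat = 1.0 / intercept
    m_op = 1.0 / (slope / u_op + intercept)
    return m_sat / m_op

# made-up readings that follow 1/M = a + b/U exactly between 50 V and 200 V
u = np.array([50.0, 100.0, 150.0, 200.0])
m = 1.0 / (0.1000 + 0.05 / u)        # a = 0.1 (1/nC), b = 0.05 (V/nC)
ks = k_s_from_jaffe(u, m, u_op=200.0)
```

With these fabricated constants the sketch yields kS = 1.0025, the same order as the values reported above; restricting the fit to 50–200 V mirrors the deviation from linearity seen at higher voltages in Fig. 4.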
change could then introduce tensile or compressive stress in the migrated region, which would fracture the intergranular oxide and decrease its protectiveness. The second is that Cr was depleted in the migrated grain boundary, making a protective oxide difficult to form and resulting in faster intergranular oxidation. The formation of a Ni-rich zone was also observed in the surface oxide and the crack flank oxide. Since both the grain boundary migration ahead of the crack tip and the Ni-rich zone in the surface oxide were caused by selective oxidation, once a Ni-rich zone is observed in the surface oxide, the occurrence of porous intergranular oxide and grain boundary migration ahead of the crack tip can be expected. According to the discussion above, both the porous intergranular oxide and the grain boundary migration ahead of the crack tip were detrimental to SCC. The surface, crack flank and crack tip oxidation have been characterized by high-resolution ATEM. The results are compared and the related oxidation mechanisms are proposed. The implications for SCC are discussed. The main findings are summarized as follows: After 2000 h of exposure, the oxide film formed on 316L SS has a triplex structure: a Cr-rich penetrative oxidation layer, a Cr-rich inner oxide layer, and a Fe-rich outer oxide layer. A further incomplete outer layer exists, made of Fe-rich discrete oxide particles. The penetrative oxidation layer is formed by the selective oxidation of Cr along fast-diffusion channels. The formation of the inner oxide layer is dominated by the solid-state growth mechanism. The inner layer oxide is a Cr-rich spinel epitaxial to the matrix, while the outer layer oxide is amorphous and eventually dissolves into the solution. The formation of the outer surface oxide particles is the result of precipitation of corrosion products, and they have no crystallographic orientation relationship with the matrix. The electrochemical potential in the crack is supposed to be similar to that on the sample surface, because the microstructure and chemistry of the crack flank oxides are similar to those on the surface. A similar oxidation mechanism is suggested for both cases, although the water chemistry is different, with a higher concentration of dissolved cations in the open environment causing the precipitation of Fe-rich spinel containing Cr and Ni instead of magnetite. In addition, crack flank oxidation is faster than that on the free surface because of the applied stress. Intergranular selective oxidation develops ahead of the crack tip at a rate over three orders of magnitude faster than at the surface. The enhanced intergranular oxidation rate is supposed to be caused by the higher dislocation density and applied stress, and is further promoted by the formation of porous intergranular oxide and grain boundary migration.
Oxidation and stress corrosion cracking (SCC) of 316L stainless steel were studied in simulated pressurized water reactor primary water. Surface, crack flank and crack tip oxides were analyzed and compared by high-resolution characterization, including oxidation state mapping. All oxides were found to have a triplex structure, although of different dimensions and composition, revealing the effects of local water chemistry and applied stress. The higher oxidation rate at the crack tip can be explained by the existence of a higher dislocation density, a higher level of stress and the unavailability of cations from the environment. The implications for SCC mechanisms are discussed.
on the relationship between transport location and real estate prices. Building on previous research, they demonstrate that the willingness-to-pay for proximity, as reflected in purchasing prices, decreased by 42.5% for dwellings in Athens during the ongoing financial crisis between 2011 and 2013. Clearly, the time of analysis accounts for the variation in the impact of transport infrastructure, with the sensitivity of housing prices to macroeconomic conditions a strong determinant of the willingness-to-pay for location premiums. Given that the time-line of the present study coincided with a deterioration of the UK's housing market conditions, it is possible that usually strong determinants of housing prices, such as transport interventions, lose their impact, which may account for the lower bounds of our estimates. For this to be the case, however, it would have to be an effect of the crisis that only affected properties closer to upgraded stations, as otherwise the DiD method would abstract away this unobserved heterogeneity. Yet, for Ealing, it is possible that the models we employ underestimate the final total effect of the Crossrail intervention. As Ealing is a predominantly owner-occupied property market, anticipation effects linked to Crossrail's announcement may increase given longer time horizons. This is because owner-occupiers have been found to take shorter-run views of the anticipatory effects of policy interventions. Under this intuition, if home-buyers who plan to become owner-occupiers are less receptive to the anticipated effects of future rail interventions, there may be less of an incentive to purchase property in investment areas because “they must commute from day one” – i.e.
prior to the opening of the transport innovation anyway. Therefore, the premiums linked to the anticipation of policy interventions may rise in housing markets with a higher ratio of landlords. For Ealing, 28% of the total 124,082 properties are privately rented, with 53% owner-occupied. With this market share reflecting a dominance of owner-occupier home-buyers, it is expected that price adjustment is more likely to occur nearer to the time of Crossrail's opening. In this way, as this study's data difference property sales between 2002 and 2014, it is probable that our model's estimated premiums underestimate the transport benefits compared to pooling property sales for years closer to Crossrail's completion in 2019. Therefore, whilst the anticipated benefits of Crossrail were found to have been speculatively internalised into the home-buyer's WTP, the magnitude of these premiums may have been dampened by sluggish price adjustment to the anticipation of new rail services. Rail transit is a key determinant of land use evolution. Property markets are conduits for the economic impact of transport interventions and so provide a compelling backdrop reflecting these changes. In this paper, we estimate how home-buyers anticipate the benefits of a rail upgrade intervention by considering the area of Ealing in London and the announcement of Crossrail in July 2008. As the Crossrail innovation remains under construction, the intervention we consider is the announcement of the project, rather than its completion. To obtain the most accurate possible estimate of its causal effect on house prices, we use a combination of DiD estimation and spatial econometrics whilst introducing further robustness checks. This approach allows us to isolate the effect of exogenous changes in transport accessibility whilst controlling for spatial effects of property sales and the temporal dimension of the data. In doing so, we explore the anticipatory effect attributed to the implied journey-time savings by
estimating the value of service-level improvements to home-buyers who live, or intend to live, in Ealing. Controlling for unobserved spatial effects, our DiD models find that for every kilometre a house is closer to a station scheduled for Crossrail upgrades, home-buyers are willing to pay between 2.4% and 2.5% extra, down from the 4% premium estimated by the naive OLS estimator. In support of Gibbons and Machin, we take this as evidence that cross-sectional models overstate the premiums for transport access even when saturated with control variables. Relative to past research, the low magnitude of the coefficient may be linked to two considerations: sluggish price adjustment to the anticipation of the new lines opening; and the intervention being constructed in an area of high transport substitutability – i.e. with multiple alternative transportation modes. Irrespective of this, we find the announcement of Crossrail was positively capitalised into Ealing's housing market, with a higher WTP for properties nearer to stations expecting the Crossrail treatment. This would appear to align with Crossrail's objective to increase residential capital values and impact property investment decisions in London's housing market. Future research might seek to confirm our findings by applying the same quasi-experimental methodology, but pooling property sales data some time after Crossrail has been completed.
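The DiD logic underlying these estimates can be illustrated with a toy computation on synthetic sales data. This omits the spatial controls and distance gradient of the actual models; all numbers and variable names below are made up for the sketch.

```python
import numpy as np

def did_estimate(price, treated, post):
    # difference-in-differences on log prices via group means:
    # (treated, post - treated, pre) - (control, post - control, pre)
    y = np.log(np.asarray(price, dtype=float))
    treated = np.asarray(treated, dtype=bool)
    post = np.asarray(post, dtype=bool)
    def group_mean(t, p):
        return y[(treated == t) & (post == p)].mean()
    return (group_mean(True, True) - group_mean(True, False)) \
         - (group_mean(False, True) - group_mean(False, False))

# synthetic sales with a known 2.5% (log-point) announcement effect
rng = np.random.default_rng(0)
n = 4000
treated = rng.random(n) < 0.5          # e.g. near a station scheduled for upgrade
post = rng.random(n) < 0.5             # sold after the announcement
log_price = 12.0 + 0.10 * treated + 0.05 * post + 0.025 * (treated & post)
price = np.exp(log_price + rng.normal(0.0, 0.01, n))
effect = did_estimate(price, treated, post)
```

The estimator differences out both the permanent price gap between treated and control areas (0.10) and the common time trend (0.05), recovering only the interaction term, which is the sense in which DiD "abstracts" unobserved heterogeneity common to both groups.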
This paper estimates the willingness-to-pay for anticipated journey-time savings introduced by the Crossrail intervention in the London Borough of Ealing. Given that Crossrail remains under construction, we estimate how the anticipated benefit of Crossrail's announcement enters the house price determination process. Anticipated journey-time savings should enter the home-buyer's pricing equation because these benefits are speculatively internalised even before the service becomes operational. Using a quasi-experimental method that accounts for the possibility of a spatial autoregressive process in housing values, we test the hypotheses that the announcement of a new commuter rail service generated a location premium, and that house price appreciation reflected proximity to Crossrail terminals.
of listening to speech while in noise themselves. In the former case, participants experienced Lombard speech. In the latter, they did not, and we propose that a Lombard listening mode might only be triggered if the characteristics of the speech signal resemble speech plausibly produced in noise. Our perception experiment had two flaws: we attempted a within-listener manipulation, which failed to produce interpretable results due to the repetition of identical stimuli across conditions; and we used stimuli that were not plausibly produced in noise. Despite these shortcomings, we found a significant effect of location, suggesting that this line of research is worth pursuing further. Hay and Drager hypothesized that language and location should be strongly associated, such that “changes in environment should cause a shift in which phonetic variants are produced and perceived”. We have shown good evidence that this is true for production, and tentative evidence that it is also true for perception. This paper thus adds to the burgeoning literature showing how contextually rich our speech memories are, and how individuals dynamically exploit these contextual memories in a way that significantly impacts their speech production and perception behaviours.
Some locations are probabilistically associated with certain types of speech. Most speech that is encountered in a car, for example, will have Lombard-like characteristics as a result of having been produced in the context of car noise. We examine the hypothesis that the association between cars and Lombard speech will trigger Lombard-like speaking and listening behaviour when a person is physically present in a car, even in the absence of noise. Production and perception tasks were conducted, in noise and in quiet, in both a lab and a parked car. The results show that speech produced in a quiet car resembles speech produced in the context of car noise. Additionally, we find tentative evidence indicating that listeners in a quiet car adjust their vowel boundaries in a manner that suggests they interpreted the speech as though it were Lombard speech.
that the mixing activity is very low.At a distance 7D downstream of the nozzle exit, all cases are showing a double peak for the PDFs.The M0 case has a very high probability at a value close to 0.5, followed by the TR case with a high probability for a value close to 0.65.All the other cases have their first peak for a value close to 0.8 with highest probabilities when the intensity of the forcing for the plasma actuator is important.For all the cases, the highest probability is for the second peak for a value of 1, except for the M0 case for which the probability for the value 1 is only 0.4.The PDF data generated at the lipline are also very interesting to discuss in line with the mixing properties of the flow.The most important results are related to the size of the Gaussian-like shape for the PDFs and its peak value.In our set-up, good mixing can be characterized by a low peak value combined with a narrow shape for the probability function and the same value for the highest probability at the centreline and at the lipline.As hinted by the previous results, the sharpest Gaussian-like shape and the lowest peak are obtained for the M0 case.Furthermore, the M0 case is the only case for which the highest probability value is the same on the centreline and at the lipline at 4.5D and 7D from the nozzle exit, suggesting a very good homogeneity for the scalar field.We can therefore conclude that the pulsating control case is showing a great potential for mixing enhancement by comparison to the other cases.The effect of four different control solutions based on eight Dielectric Barrier Discharge plasma actuators located just before the nozzle exit of a turbulent jet were investigated with the aim to enhance the mixing of the jet.One of the controlled cases is based on a pulsating motion for the forcing at a frequency corresponding to the jet preferred frequency.Data were compared with two non-controlled cases, one with no perturbation inside the nozzle and one with a 
random tripping to trigger instabilities in the nozzle boundary layer. The first important result is that the plasma actuators are able to strongly modify the flow field downstream of the nozzle at roughly the same flow rate as the non-controlled cases. The effect of the plasma actuators can easily be seen in the ejection of pairs of elongated streamwise vortical structures generated between two plasma actuators. The breakdown to turbulence therefore happens closer to the nozzle exit than in the baseline case, owing to the promotion of strong, thin, elongated structures around the large ring generated at the nozzle of the jet. As a consequence, a reduced length of the potential core and an increase in the size of the jet were observed for the controlled cases in comparison with the baseline case. The reduction in length of the potential core is not present when comparisons are made with the TR case. However, the shape of the potential core is affected by the plasma actuators, becoming thinner. This suggests that the shape of the potential core can influence the mixing properties of the jet. It should also be noted that each controlled case produces a different pattern for the radial shape of the jet, which suggests that the intensity of the forcing is an important parameter for control purposes. The passive scalar study revealed that the pulsating controlled case can enhance mixing in comparison with the non-controlled cases, with a more homogeneous scalar field and low levels of scalar fluctuations. These conclusions were confirmed by PDFs of the scalar field downstream of the lipline, with a sharp Gaussian-like shape and the same low peak value on the centreline and at the lipline, in contrast to the other cases. Future studies will investigate the influence of the number of plasma actuators inside the nozzle as well as their location with respect to the nozzle exit. Since the pulsating motion was quite successful, various duty cycles 
will be investigated at various Reynolds numbers. Different excitation modes will be tested, like the first and second helical modes, following the work of Samimy et al., Kim et al. and Samimy et al. for supersonic turbulent jets, as well as investigating further the jet column mode in order to find the most effective way to improve the mixing properties of the jet. Finally, the potential of forcing azimuthal modes will be explored in great detail, as they can make an important contribution to triggering instabilities with the potential to affect the mixing and acoustic properties of the jet.
Plasma-controlled turbulent jets are investigated by means of Implicit Large-Eddy Simulations at a Reynolds number of 460,000 (based on the diameter of the jet and the centreline velocity at the nozzle exit). Eight Dielectric Barrier Discharge (DBD) plasma actuators located just before the nozzle exit are used as an active control device with the aim of enhancing the mixing of the jet. Visualisations and time-averaged statistics of the different controlled cases show strong modifications of the vortex structures downstream of the nozzle exit, with a substantial reduction of the potential core, an increase of the jet radial expansion and an improvement of the mixing properties of the flow.
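The centreline and lipline PDF comparisons discussed above can be reproduced numerically from sampled scalar data. The sketch below is illustrative only: the synthetic probe series, the bin count and the assumption that the passive scalar is normalized to [0, 1] are ours, not taken from the simulations.

```python
import numpy as np

def scalar_pdf(samples, bins=50):
    """Estimate the PDF of a passive scalar from time samples at a probe.

    Returns bin centres, normalized PDF values, and the scalar value of
    highest probability (the "peak value" compared between probes).
    """
    pdf, edges = np.histogram(samples, bins=bins, range=(0.0, 1.0), density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    peak_value = centres[np.argmax(pdf)]
    return centres, pdf, peak_value

# Synthetic example: a well-mixed probe clusters tightly around 0.5,
# i.e. a narrow Gaussian-like PDF with its peak at mid-range.
rng = np.random.default_rng(0)
well_mixed = np.clip(rng.normal(0.5, 0.05, 10_000), 0.0, 1.0)
centres, pdf, peak = scalar_pdf(well_mixed)
```

Comparing `peak` and the spread of `pdf` between centreline and lipline probes then mirrors the homogeneity criterion used in the text: the same most-probable value at both locations, with a narrow distribution, indicates good mixing.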
rats become equally efficient as controls at terminating an acute stress response. This competency complies well with previous results in which BPA-treated males exhibited behavioral coping and corticosterone responses to stress similar to those of untreated animals. The finding that a short-term stress can modify FKBP51 levels despite the BPA-induced methylation changes suggests an enhanced expression plasticity of this regulator, which was also observed in the HT22 cell line derived from male mouse hippocampus. Our findings in male rats are in good agreement with the results obtained in HT22 cells and demonstrate that this cell line is a valid model to study the mechanisms underlying the effects of BPA exposure on Fkbp5 regulation. In the cell line, BPA exposure during differentiation led to decreased Fkbp5 expression and increased methylation at the corresponding CpGs in intron 5. Of note, these changes were not due to acute effects of BPA, as they were observed 3 days after BPA washout. Similarly to the rat model, BPA's effect on Fkbp5/FKBP51 levels was larger than on Fkbp5 methylation. This is at least partly due to the fact that the detection methods do not have the same sensitivity, which makes comparison of the results difficult. Further, other factors and/or genomic regions not investigated here, but involved in Fkbp5 regulation, might be affected by BPA. Interestingly, dexamethasone treatment, simulating the stressful situation in the rat model, increased Fkbp5 expression to a greater extent in BPA-treated than in untreated cells. The reason for decreased basal expression of Fkbp5 but increased inducibility of the gene by glucocorticoids, despite an increase in DNA methylation, remains unresolved. One possibility is that methylation changes at the responsive element under study affect mineralocorticoid receptor binding, responsible for mediating the effects of basal glucocorticoid levels, but not GR binding, which follows the increase of hormone levels. Although sharing the DNA 
recognition sequence, the two receptors are modulated in their activity by diverse co-activators and co-repressors that might be affected differently by the methylation changes. Further studies are necessary to understand the detailed implications of the DNA methylation changes induced by BPA. The ER inhibitor ICI affected Fkbp5 expression and methylation during HT22 differentiation similarly to BPA. Furthermore, ERβ, but not ERα, bound to the differentially methylated region in intron 5, and this binding was disrupted by BPA and ICI. These results imply a function of ERβ in mediating BPA's effects on Fkbp5, which was further supported by the fact that ERβ knock-down abolished the effects of BPA on Fkbp5 expression and DNA methylation. Notably, however, the effects found here cannot be ascribed to BPA's estrogenic properties, as (i) they were observed three days after BPA had been washed out of the culture and (ii) BPA did not display agonistic properties on ERE-driven transcription in the HT22 cells, in contrast to E2. Further, there is agreement in the literature that BPA does not induce the same conformational changes as the natural ligands when binding to ERs, hence presumably attracting a different set of co-regulatory proteins than in the presence of the natural ligand. Accordingly, we propose that BPA affects Fkbp5 transcriptional regulation by interfering with ERβ binding to the regulatory region of intron 5, where ERβ controls DNA methylation, a function of ERβ that we have described previously. A tentative model is depicted in Supplemental material, Fig. 
S3. We could not see any effect of BPA on ER or GR protein expression in our cell model. Further, ERβ levels were not changed in the hippocampi of the treated rats. However, in mice it was shown that BPA exposure leads to decreased ERβ levels in brain areas other than the hippocampus. Thus, in other regions BPA might not affect DNA binding of ERβ but rather its protein levels. Ultimately, however, this will lead to the same result: a lack of ERβ binding to intron 5 of Fkbp5 and thus an increase in DNA methylation. Interestingly, BPA seems to affect ERβ expression in the rodent brain in a sexually dimorphic manner (data not shown). This might explain why we could not detect any methylation changes in female rats at the investigated regions of Nr3c1 and Fkbp5. Sexually dimorphic effects of BPA on brain function have been reported in several previous in vivo studies. Furthermore, the few epidemiological studies linking BPA exposure to neuropsychiatric outcomes in children also show differences between girls and boys. This demonstrates the intricate interaction between BPA and the endogenous sex hormones and, consequently, the importance of investigating its effects in both sexes. We demonstrate here that perinatal exposure of rats to a low BPA dose alters Fkbp5 expression, methylation pattern and inducibility by stress in the hippocampus of male offspring. The observed alterations in Fkbp5 were also detected in differentiating hippocampal neurons of male origin. In the cell model, the mechanism implicates ERβ in the regulation of the epigenetic impact, a finding that requires further studies in the in vivo setting. The BPA-induced changes in hippocampal Fkbp5 provide a link between environmental chemicals and stress-related disorders. The authors have no competing financial interests.
In rats, perinatal BPA exposure modifies the stress response in pubertal offspring via unknown mechanisms. Similar effects were obtained in a male hippocampal cell line when exposed to BPA during differentiation. The estrogen receptor (ER) antagonist ICI 182,780 or ERβ knock-down affected Fkbp5 expression and methylation similarly to BPA. Further, BPA's effect on Fkbp5 was abolished upon knock-down of ERβ, suggesting a role for this receptor in mediating BPA's effects on Fkbp5. These data demonstrate that developmental BPA exposure modifies Fkbp5 methylation and expression in male rats, which may be related to its impact on stress responsiveness.
Pulmonary fibrosis is a condition of injured lesions and scars in which the lungs cannot work properly; breathing becomes difficult because the lungs are unable to carry oxygen to the bloodstream. The most common symptoms are shortness of breath and dry cough. Symptoms may be mild, or even absent, in the early stages and become worse as scars develop. In pulmonary fibrosis, normal lung tissue architecture is replaced by scar tissue, which is generally characterized by collagen deposition and fibroblast proliferation. Pulmonary fibrosis is a chronic inflammatory lung disease with a potentially lethal prognosis and a negligible response to available medical therapies. Idiopathic pulmonary fibrosis is a devastating disease of unknown cause. Several drugs, including BLM, methotrexate, amiodarone and nitrofurantoin, as well as heavy metal dust in the air, mineral agents like silica and malachite, and exposure to dust and radiation, can induce pulmonary fibrosis. Pulmonary fibrosis may arise from acute and chronic pulmonary disorders. It results from excessive deposition and abnormal accumulation of collagen generated by fibroblasts and myofibroblasts. These events damage alveolar cells and reduce their elasticity and flexibility. BLM is an important chemotherapeutic glycopeptide antibiotic used for many malignancies. It is produced by the bacterium Streptomyces verticillus, which was discovered by Umezawa and colleagues in 1962. BLM plays a very important role in the treatment of various cancers such as lymphoma, carcinoma of the head and neck, germ cell tumors, testicular carcinoma and ovarian cancer. BLM has no serious immuno- or myelosuppressive effects. The most important toxicities of BLM in humans are pulmonary injury and skin complications. Pulmonary fibrosis is the most severe adverse effect of BLM in cancer patients. Thereafter, administration of a single intratracheal dose of BLM was introduced as the most common animal model 
of pulmonary injury and fibrosis in mice, rats and hamsters. Intratracheal administration of BLM causes dose-dependent damage to the lungs. BLM is known to generate reactive oxygen species (ROS) such as superoxide and hydroxyl radicals. Generation of ROS in the lung tissue leads to DNA injury, lipid peroxidation, damage to epithelial cells, and excessive deposition of extracellular matrix (ECM) and lung collagen synthesis. Administration of BLM leads to inflammatory and fibrotic reactions whereby collagen production is stimulated by fibroblasts. BLM pulmonary toxicity appears as pneumonitis, which begins with vascular endothelial damage due to free radicals and cytokines and may eventually progress to fibrosis. In many studies, the pulmonary adverse effects in patients depend on the BLM dose, age, the presence of pre-existing lung disease, smoking and exposure to polluted air in industrial cities. The chemical structure of flavonoids has a general backbone consisting of two phenyl rings and a heterocyclic C ring. Hydroxyl groups on the B ring mediate most of the antioxidant activity of flavonoids. Many fruits and vegetables contain abundant polyphenol compounds, which show antioxidant properties in addition to their nutritional role. Epicatechin (Epi) is a polyphenol flavonoid belonging to the flavan-3-ol group; it is termed a polyphenol because of the numerous phenol groups in its structure and is found mainly in green tea and cacao. In addition, the antioxidant properties of flavonoids are related to their chelating activity with metal ions and scavenging of free radicals. Previous evidence has shown protective effects of Epi against oxidative stress and fibrosis. Green tea extract, which contains Epi, has shown protective and antifibrotic effects in the paraquat model of pulmonary toxicity and fibrosis by controlling oxidative stress and endothelin-1 expression. Furthermore, it is known that tea catechins 
including Epi, epicatechin gallate, epigallocatechin and epigallocatechin gallate can prevent oxidative damage due to tert-butyl hydroperoxide by reducing oxidative stress markers in diabetic rats. In addition, it has been reported that cisplatin nephropathy can be prevented by Epi via a decrease in mitochondrial oxidative stress and ERK activity. In order to prevent the pulmonary injury caused by BLM, this study was designed to evaluate the preventive effects of Epi on oxidative stress, inflammation and pulmonary fibrosis induced by BLM in mice. Bleomycin was obtained from Chemex Company. (−)-Epicatechin, bovine serum albumin, chloramine T, Ellman's reagent, thiobarbituric acid and Bradford reagent were purchased from Sigma-Aldrich. Ammonium molybdate, butylated hydroxytoluene, trichloroacetic acid, buffered formalin, HCl and perchloric acid were purchased from Merck Company. Commercial glutathione peroxidase (GPX) and superoxide dismutase (SOD) kits were purchased from RANSEL, Randox Company, UK. A transforming growth factor beta (TGF-β) commercial enzyme-linked immunosorbent assay kit was provided by Hangzhou Eastbiopharm. Male NMRI mice were obtained from the Ahvaz Jundishapur University of Medical Sciences (AJUMS) animal house. Upon arrival, the animals were allowed to acclimatize for 1 week. The mice were kept in cages and given standard mouse chow and drinking water ad libitum. Mice were maintained under controlled temperature with a 12 h light and 12 h dark cycle. This research was performed in accordance with the Animal Ethics Committee Guidelines of AJUMS for the care and use of experimental animals. This study was conducted on 82 mice weighing 20–25 g at two time points to differentiate oxidative stress, inflammation and fibrosis. Mice were randomly divided into six groups of 6 to 8 mice at each of the 7- and 14-day time points. The experimental groups were I. 
control; II. BLM 4 U/kg/2 ml; III–V. BLM groups pretreated with Epi 25, 50 and 100 mg/kg/10 ml, respectively, from three days before until 7 or 14 days after BLM; and VI. Epi 100 mg/kg/10 ml. Mice were anesthetized with an intraperitoneal injection of 50 mg/kg ketamine and 5 mg/kg xylazine, and then received a single intratracheal dose of either saline or
Lung fibrosis is a chronic and intermittent pulmonary disease, caused by damage to the lung parenchyma due to inflammation and fibrosis. Each group was divided into six subgroups: control, Epi 100 mg/kg, BLM, and BLM groups pretreated with 25, 50 and 100 mg/kg Epi, respectively, from three days before until 7 or 14 days after BLM. Lung tissue oxidative stress markers, including the activities of superoxide dismutase, glutathione peroxidase and catalase and the levels of malondialdehyde and glutathione, were determined.
GPX activity in a dose-dependent manner. Relative to the BLM group, these effects represented significant recovery. BLM reduced lung tissue GSH levels in comparison with the control group. Epi at doses of 50 and 100 mg/kg increased GSH levels, whereas treatment with Epi 25 mg/kg had no effect. BLM increased MDA levels, and Epi at doses of 50 and 100 mg/kg decreased lung tissue MDA levels. Hydroxyproline (HP) content is an important index of collagen deposition in the lung tissue. BLM markedly increased HP content compared with the control group. Epi at doses of 50 and 100 mg/kg reduced lung HP, but only the group treated with Epi 100 mg/kg showed significant recovery. Treatment with Epi at a dose of 100 mg/kg decreased TGF-β compared with the BLM group. TGF-β is a pro-fibrotic cytokine that is activated by ROS and triggers a cascade mechanism that causes fibrosis in the lung tissue. The TGF-β level on day 14 was higher than on day 7. Histological examination showed that the BLM-treated group was damaged, with obvious lesions, cell infiltration and disrupted tissue. In MT staining, collagen can be seen in the form of blue strands, further confirming fibrosis. These pathological manifestations were more obvious on day 14. The group treated with Epi 25 mg/kg at both time points was similar to the BLM group, with no significant changes. At both the early and late phases of BLM injury, the groups treated with Epi 50 and 100 mg/kg showed very good recovery. The group pretreated with 100 mg/kg had manifestations similar to the control group. Grading of alveolitis and inflammation by the Szapiel method on day 7, and scoring of fibrosis by the Ashcroft method on day 14, in the mouse model of pulmonary injury induced by BLM are shown in Fig. 
9. Epi at doses of 50 and 100 mg/kg attenuated inflammatory lesions on day 7 and inhibited the progression of fibrosis on day 14. In this study, we investigated the possible protective effects of different doses of Epi against the harmful effects of BLM. BLM is a chemotherapeutic agent for many malignancies. One of its most important adverse effects is pulmonary fibrosis; therefore, BLM is used as a model of pulmonary fibrosis. The BLM model of lung injury and fibrosis has some similarities to human lung fibrosis. Pulmonary fibrosis is a lethal lung disease that occurs when the lung tissue becomes severely damaged by thickening of the alveolar walls with collagen. Alveolar wall thickening is associated with coughing, shortness of breath and dyspnea. In this model of lung toxicity, damage is observed as an early phase with oxidative and inflammatory events, which usually continues until day 7, and a late phase with fibrotic outcomes, which continues until 14 to 21 days after bleomycin. Thus, days 7 and 14 were selected as endpoint days. BLM causes apoptotic changes in the alveolar and bronchiolar epithelial cells, and therapeutic agents can prevent these apoptotic effects. BLM causes toxicity by cleaving DNA in a process dependent on the presence of molecular oxygen and iron ions as cofactors in DNA double-strand breakage. BLM binds DNA, Fe and molecular oxygen, producing a complex that can attack the DNA, after which ROS mediators cause lipid peroxidation. The role of iron in the oxidative stress and damage caused by bleomycin has been established. Therefore, the chelating ability of flavonoids may be responsible for the attenuation of BLM-induced lung injury in mice. The antioxidant activity of Epi is related to free radical scavenging and metal ion chelating ability. The antioxidant activity of Epi is due to the ortho 3',4'-dihydroxy moiety in the B ring, and its chelating ability is due to the o-phenolic groups in the 3',4'-dihydroxy positions in the B ring
. The endogenous antioxidant defense system, both enzymatic and non-enzymatic, was examined in this study. Oxidative stress, which activates inflammatory signaling pathways, causes oxidative damage. In the BLM model of lung injury, oxidative damage results from an imbalance between ROS production and the antioxidant defense system. It has been reported that oxidative stress and inflammation are early events and fibrosis is a late event in the BLM model of lung damage. However, in the present research it appears that oxidative stress and inflammation persist at the time of fibrosis and play a role in the development and maintenance of fibrosis. In addition, fibrotic lesions were noticeably visible on day 7. In confirmation, the data indicate that the level of TGF-β increases after BLM administration and remains clearly elevated on days 7 and 14. TGF-β increases ROS overproduction, and ROS activate TGF-β cytokine production. This may explain why oxidative stress remains high in the late phase of pulmonary injury on day 14. TGF-β is a pro-fibrotic agent that causes fibrosis through fibroblast proliferation and accumulation of excessive ECM. Furthermore, TGF-β can decrease GSH levels in the lung tissue. BLM caused 25% mortality in the one-week study and 50% in the two-week study. Epi reversed the mortality of BLM-treated animals. In a similar study, in which pulmonary toxicity was induced by BLM (0.1 U/100 μl/mouse, i.t.), BLM induced 60% mortality after 15 days. The increase in body weight during administration of Epi 100 mg/kg was similar to that of the control group. In BLM-treated mice the body weight decreased, reflecting the condition of the disease. In the present study, body weight returned to almost normal in Epi 100 mg/kg pretreated mice in the BLM model of lung injury. This suggests that Epi leads to recovery and weight
Accordingly, animals were randomly assigned into two groups of 7 and 14 days to evaluate the role of Epi in the early oxidative and late fibrotic phases of BLM-induced pulmonary injury, respectively. Epi exerted protective effects against BLM-induced pulmonary injury in a dose-dependent manner in both the early and late phases of lung injury. Oxidative stress markers persisted until the late fibrotic phase, just as pro-fibrotic events were present in the early oxidative phase of BLM-induced injury.
gain by ameliorating the damage induced by BLM. A significant increase in lung index on day 7 may be due to the role of both inflammation and pro-fibrotic lesions. Pretreatment with Epi, especially at a dose of 100 mg/kg in the BLM lung injury model, showed the best reversing effect, increasing the activities of GPX, SOD and CAT and the GSH level. Epi reduced tissue levels of MDA, HP and TGF-β and the lung index. The improvement of pathological manifestations in the lung tissue can be attributed to the antioxidant activity of Epi in the lung. Pretreatment with Epi at a dose of 50 mg/kg in the BLM model of lung fibrosis also provided recovery, but damage could still be seen and recovery was incomplete. There was no difference between the Epi 25 mg/kg group and the BLM group, and Epi at this dose could not reverse the inflammatory and fibrotic lung damage. As previously reported, Epi 1 mg/kg by gavage for 14 days increases the expression of the GPX, SOD and CAT enzymes in senile mice. It can be concluded that the enhanced activity of SOD, CAT and GPX in Epi-pretreated mice may be due to the induction of expression of these enzymes by Epi in this model of lung injury. However, the Epi dose in the mentioned study is low in comparison with the Epi doses in our study. It has been reported that pretreatment with Epi 50 mg/kg decreased oxidative damage and showed hepatoprotective effects in a rat model of hepatitis. In another study, in a doxorubicin model of brain toxicity in rats, the Epi dose used was 10 mg/kg per day for 4 weeks. It has been shown that mouse doses should be divided by 12.3 to convert animal doses to human equivalent doses. Accordingly, the effective Epi doses of this study can be translated into human-relevant doses. As a result, daily human Epi doses in the range of 4–8 mg/kg, equivalent to 280–560 mg for a 70 kg person, can be estimated for clinical application. However, careful pharmacokinetic and pharmacodynamic 
studies, along with consideration of species differences and human variability, should be made to estimate the right dose in humans. In this regard, a review article on the effects of Epi in the range of 25–447 mg on human cognition reported that there is not enough evidence for an optimal Epi dose for positive cognitive effects. Polyphenol compounds have poor bioavailability and a short half-life, hence their prophylactic and therapeutic uses are limited. Thus, there are ongoing efforts to enhance their bioavailability, and consequently their biological activity, by using nanotechnology. There is evidence that Epi absorption and excretion are dose-dependent in rats. Thus, the poor bioavailability of Epi can be compensated for with high or repeated doses. Overall, repeated administration, nanotechnology techniques, inhalation and intravenous application of Epi are highlighted as ways to overcome its poor bioavailability. Epi and Epi-containing foods have beneficial effects. However, a question arises from this study: could the inhibitory effects of Epi on BLM-induced pulmonary damage extend to reversing the anticancer activity of BLM? Can Epi be co-administered with BLM in clinical application for cancer patients? It has been noted that green tea polyphenols induce selective toxicity in cancer cells and could be a valuable adjuvant in the treatment of cancer. Furthermore, the beneficial effects of tea polyphenols in combination with anticancer agents have been widely accepted by cancer researchers. In addition, Epi, as a well-known tea polyphenol, can enhance the anti-proliferative effect of BLM on cancer cells without any toxicity to normal cells. Epi is therefore believed to be able to alleviate the negative side effects of BLM while enhancing its anticancer efficacy. Data from the Epi 100 mg/kg group without BLM treatment indicated that Epi has no harmful effect on the lungs, with effects similar to the control group. Thus, Epi may be 
considered for inhalation or systemic use before BLM administration or at the time of injury in cancer patients. In conclusion, our project showed that Epi could reverse the toxic effects of BLM through the attenuation of oxidative stress, inflammation and fibrosis in mice. Overall, based on the data of this study, the effects of Epi on BLM-induced pulmonary lesions are shown schematically in Fig. 10. Epi as a restorative agent can improve and control lung damage induced by BLM and may increase the quality of life of cancer patients. However, further studies are needed to confirm the safety of Epi, and co-treatment of systemic or inhaled Epi with BLM requires more safety and efficacy studies before clinical application in cancer patients. The authors did not have any conflict of interest.
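The mouse-to-human dose translation quoted earlier (dividing the mouse dose by 12.3, then scaling by body weight) can be made explicit. This is a minimal sketch of that arithmetic only; the function names are ours, and 12.3 and 70 kg are the values cited in the text.

```python
def mouse_to_human_dose(mouse_dose_mg_per_kg: float, km_ratio: float = 12.3) -> float:
    """Human equivalent dose (mg/kg) from a mouse dose (mg/kg),
    using the body-surface-area conversion factor quoted in the text."""
    return mouse_dose_mg_per_kg / km_ratio

def total_daily_dose_mg(human_dose_mg_per_kg: float, body_weight_kg: float = 70.0) -> float:
    """Total daily dose (mg) for a person of the given body weight."""
    return human_dose_mg_per_kg * body_weight_kg

# The two effective mouse doses, 50 and 100 mg/kg, translate to roughly
# 4.1 and 8.1 mg/kg in humans, i.e. about 285-569 mg/day for a 70 kg
# person, consistent with the 280-560 mg range estimated in the text.
low = total_daily_dose_mg(mouse_to_human_dose(50))
high = total_daily_dose_mg(mouse_to_human_dose(100))
```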
Epicatechin (Epi), a flavonoid, has antioxidant and anti-inflammatory properties. This study was conducted to evaluate the effect of Epi on oxidative stress, inflammation and pulmonary fibrosis induced by bleomycin (BLM) in mice. It is concluded that Epi can protect the lung against BLM-induced pulmonary oxidative stress, inflammation and fibrosis.
voice research more generally, by optionally exporting a wealth of intermediate signals and analysis results in convenient data formats. With extra multichannel hardware, FonaDyn supports the acquisition of additional signals in parallel and in synchrony, such as pressure, respiration, larynx height tracking, etc., for subsequent co-analysis with the EGG data, using MATLAB or similar tools. Such hardware, with frequency response down to DC, is available from the music analog synthesizer industry. It is connected via ADAT inputs, which can be found on some high-end audio interfaces. FonaDyn has not yet been deployed to other users, pending the publication of the present article as the primary reference. However, it has been a tool in several student theses and pilot studies, and the principle of EGG waveform clustering has been reported in several journal papers by these authors. At seminars and conference demos, potential users have expressed great interest. FonaDyn 1.5, while perfectly usable, is still a research prototype. From our prior experience of commercializing other software, we realize that much further work is needed to implement its functions in a form that is robust in the clinic or in the voice studio. In placing FonaDyn and its source code in the public domain, we invite those interested to develop such a system, with due acknowledgment of the present work. The authors have no competing interests to declare. Once the cluster configuration has been tailored specifically to the user's research question, the FonaDyn system is able to classify and/or stratify various phonatory regimes of interest, in real time, with visual feedback. Its novel contributions are: phonatory regimes and voice instabilities are mapped over voice sound level and phonation frequency; the statistical clustering obviates the need for pre-defining thresholds or categories; and the sample entropy shows promise as a metric of perceptually relevant voice instabilities. The program can also 
serve as a workbench for general voice-related data acquisition and analysis. FonaDyn is hereby provided to the voice research community as freeware under a public license.
From soft to loud and low to high, the mechanisms of the human voice have many degrees of freedom, making it difficult to assess phonation from the acoustic signal alone. FonaDyn is a research tool that combines acoustics with electroglottography (EGG). It characterizes and visualizes in real time the dynamics of EGG waveforms, using statistical clustering of the cycle-synchronous EGG Fourier components, and their sample entropy. The prevalence and stability of different EGG waveshapes are mapped as colored regions into a so-called voice range profile, without needing pre-defined thresholds or categories. With appropriately 'trained' clusters, FonaDyn can classify and map voice regimes. This is of potential scientific, clinical and pedagogical interest.
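The sample entropy metric mentioned above follows the standard SampEn(m, r) definition; the parameters and the cycle-synchronous signals FonaDyn applies it to are specific to the program, so the sketch below is a generic implementation under default assumptions (m = 2, r = 0.2·std), not FonaDyn's own code.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal.

    B counts pairs of length-m template vectors whose Chebyshev distance
    is below r = r_factor * std(x); A does the same for length m + 1.
    SampEn = -ln(A / B): low for regular signals, high for irregular ones.
    """
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(length):
        # All overlapping templates of the given length
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance to every later template (no self-matches)
            dists = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dists < r))
        return count

    a = count_matches(m + 1)
    b = count_matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

# A periodic signal should score lower than white noise
t = np.linspace(0, 8 * np.pi, 400)
rng = np.random.default_rng(1)
sampen_sine = sample_entropy(np.sin(t))
sampen_noise = sample_entropy(rng.standard_normal(400))
```

Applied to a stable, nearly periodic voice metric, SampEn stays low; transitions and instabilities between phonatory regimes raise it, which is the property exploited when it is used as an instability metric.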
Complex vasculobiliary injuries (CVBI) constitute 12–61% of biliary injuries, most commonly following laparoscopic cholecystectomy. They present a formidable challenge to the multidisciplinary team treating these patients. During management, complete right and left hepatic arterial occlusion due to accidental coil migration during embolization of a cystic artery stump pseudoaneurysm is an extremely rare complication. We present a case depicting our strategy for tackling this obstacle in the management of CVBI. This work has been reported in line with the SCARE criteria. A 35-year-old healthy woman presented to our department on the sixth postoperative day (POD) with an external biliary fistula and intra-abdominal sepsis. She had undergone a laparoscopic cholecystectomy converted to open for acute calculous cholecystitis. She had a biliary injury that was identified intra-operatively and managed by Roux-en-Y hepaticojejunostomy (RYHJ). The anastomosis leaked. An interno-external percutaneous transhepatic biliary drainage (PTBD) extending across the leak was performed at our hospital on POD 7 for both the right and left hepatic ducts. On POD 9, she had an upper gastrointestinal bleed. Esophagogastroduodenoscopy and contrast-enhanced computed tomography (CECT) of the abdomen did not reveal the source of bleeding. On conventional hepatic arteriogram, a leaking cystic artery pseudoaneurysm was identified. During angioembolization, owing to a short cystic artery stump, coils were placed in the right hepatic artery (RHA). However, one of the coils accidentally migrated into the left hepatic artery (LHA) and could not be retrieved. LHA stenting was performed, with good flow of contrast across the stent. However, the LHA developed spasm in its distal part, resulting in complete block of the LHA and RHA. On the first day after coiling, there was significant elevation of liver enzymes with features of ischemic hepatitis. CECT abdomen with arteriography revealed poor enhancement of the hepatic arterial tree in the segmental branches, with partial revascularization from 
inferior phrenic and retroperitoneal arteries. The possibility that an emergency liver transplant might be needed was explained to the patient’s relatives. The patient improved over the next 48 h, was transferred out of the intensive care unit, and oral feeds were started. The abdominal drain was removed after it stopped draining bile. She was discharged on POD 28 with PTBD catheters in situ. On POD 33, her liver function tests were within normal limits, percutaneous transhepatic cholangiogram showed a trickle of contrast across the RYHJ, and the PTBD catheters were clamped. PTBD catheters were kept for longer than 6 months, and intervening periodic cholangiograms revealed anastomotic narrowing. Balloon dilatation of the stenosed anastomosis was performed on multiple occasions. Liver function was normal all along. Fourteen months after surgery, cholangiogram revealed a worsening of the RYHJ stricture. A CECT abdomen and conventional angiogram performed at 18 months showed a blocked RHA and LHA. Multiple collaterals were seen arising from the right inferior phrenic artery, the retroperitoneum and along the hepatoduodenal ligament. In view of the collateral formation and a persistent tight stricture with multiple failed dilatations, a surgical revision of the anastomosis was planned. Prior to surgery, the right PTBD catheter was maneuvered from the right to the left duct across the hilum, draining externally. During surgery, utmost care was taken not to release any adhesions in the hepatoduodenal ligament and not to mobilize the liver from its diaphragmatic attachments, in order to preserve all collaterals. Hilar dissection was kept to a minimum and the PTBD catheters guided the identification of the biliary hilar confluence. A minor segment IV hepatotomy was done to expose the left hepatic duct and the roof of the biliary confluence. The previous anastomosis was resected, and a Roux loop was prepared. A 2 cm single-layer, interrupted 5-0 polydioxanone RYHJ was fashioned to the anterior wall of the biliary confluence with an extension onto the left
duct. Her postoperative recovery was uneventful. Cholangiogram on POD 10 showed a good flow across the anastomosis with no sign of a leak, and she was discharged with a clamped PTBD catheter. The PTBD catheter was removed 6 weeks after the surgery. The patient is doing well at 1 year follow up. Liver function is normal. She is being followed up to evaluate for secondary patency. CVBI, most commonly seen after a laparoscopic cholecystectomy, is defined as any biliary injury that involves the confluence or extends beyond it, any biliary injury with major vascular injury, or any biliary injury in association with portal hypertension or secondary biliary cirrhosis . Major vascular injury is injury involving one or more of the aorta, vena cava, iliac vessels, right hepatic artery, cystic artery, or the portal vein, seen in 0.04–0.18% of the operated patients and more so in patients with biliary injury . A vascular injury is suspected intraoperatively when there is significant bleeding during laparoscopic cholecystectomy, when there is a sudden rise in alanine aminotransferase during the early postoperative course, or when there are multiple metallic clips on plain film images of the abdomen . The arterial injury may have been the result of the primary bile duct injury or may be the result of the attempted biliary repair . Vascular injury can present as a pseudoaneurysm, usually within the first 6 weeks of the post-operative period, as abscesses over 1–3 weeks, or as ischemic liver atrophy after many years . Proper hepatic artery/both right and left hepatic artery involvement in CVBI has been very rarely reported in the literature. The reported cases in the English literature and their management plans are presented in Table 1 . While these cases deal with intraoperative proper hepatic artery injury/occlusion, our case had a post-RYHJ cystic artery pseudoaneurysm to begin with, and the proper hepatic artery occlusion was an unfortunate event that took place during the embolization. There are occasional case
reports of interventional proper hepatic artery pseudoaneurysm coiling leading to complete proper hepatic arterial occlusion with a good outcome . However, a biliary injury complicated by a cystic artery pseudoaneurysm bleed and then a proper hepatic artery occlusion has not been reported
Introduction: Complete proper hepatic arterial [PHA] occlusion due to accidental coil migration during embolization of a cystic artery stump pseudoaneurysm resulting from a complex vasculobiliary injury [CVBI] after laparoscopic cholecystectomy [LC] is an extremely rare complication, with fewer than 15 cases reported. We present a case depicting our strategy to tackle this obstacle in the management of CVBI and review the relevant literature. Presentation of case: A 35-year-old woman presented on the sixth postoperative day with an external biliary fistula following Roux-en-Y hepaticojejunostomy [RYHJ] for biliary injury during LC. She developed a leaking cystic artery pseudoaneurysm, during angioembolisation of which one coil accidentally migrated into the left hepatic artery, resulting in complete PHA occlusion. Fourteen months later, cholangiogram revealed a worsening RYHJ stricture despite repeated percutaneous balloon dilatations. A revision RYHJ was fashioned to the anterior wall of the biliary confluence with an extension into the left duct. Postoperative recovery was uneventful. The patient is doing well at 1 year follow up.
so far, to the best of our knowledge. Proper hepatic arterial injury induces biliary ischaemia and worsens a biliary injury. The hepatic ischemia is usually transient. The effect is more profound when the biliary confluence is disrupted along with a proper hepatic artery injury, since the hilar marginal collateral is also disrupted in these injuries . The effect on blood supply also affects the surgical outcomes. On univariate analysis in a study on factors affecting the surgical outcomes in biliary injury cases, VBI and sepsis were identified as factors for treatment failure. Also, 75% of these cases had complications and a poor long-term patency rate . In another published series, 5.6–15% of CVBI cases required liver resection. Known indications for hepatectomy in that series included concomitant vascular injury and high biliary injury, liver atrophy, and intrahepatic bile duct strictures . The clearing function of the liver against translocated intestinal bacteria is impaired after ischemia; hence, it is important to maintain these patients on high antibiotic levels in the blood to avoid septic complications in the ischemic liver parenchyma . Table 1 shows that these cases have a very high mortality rate and overall outcomes are poor. The management options include hepatic resection, liver transplantation in cases with fulminant liver failure, and revision RYHJ with or without an arterial repair. Most of these cases have been managed by an early attempted RYHJ and arterial repair . However, temporary percutaneous biliary intervention to allow the collaterals to form has not been attempted in these cases. From Michel’s study on the collateral circulation of the liver to the present angiographic studies, it is now known that there are many possible collateral channels to the liver and bile ducts, as shown in Fig. 5. Collateral flow can be demonstrated 10 h after the occlusion. With time, the collaterals are good enough to sustain the biliary system . As seen in Fig.
5, collaterals can develop from the common hepatic artery, the gastroduodenal artery, ligamentous arteries from the falciform, coronary and right triangular ligaments, the pancreatoduodenal arteries, the intercostal arteries and the phrenic arteries . In our case, the predominant collaterals were in the hepatoduodenal ligament and from the superior and inferior phrenic arteries. Whilst waiting for the collateral supply to develop, the biliary anastomotic leak can be managed percutaneously by balloon dilatation and/or stenting, as was performed in our case. Surgical planning involves identification of all collateral channels on the arteriogram. During surgery, liver mobilization has to be kept to a minimum to preserve the ligamentous collaterals, and hepatoduodenal ligament dissection also needs to be minimal . The revision anastomosis is performed with the standard surgical principles of RYHJ. Vascular assessment should be part of the investigation of all complex biliary injury cases. Delayed definitive repair in cases involving PHA occlusion, to allow collateral circulation to be established within the hilar plate, hepatoduodenal ligament and perihepatic/peribiliary collaterals and thereby provide an adequate arterial blood supply to the biliary confluence and the extrahepatic portion of the bile duct, is a feasible management option. Minimum dissection should be done during surgery to preserve the biliary and hepatic neovasculature. The authors declare that they have no conflict of interest. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Institutional ethics committee approval was taken for the publication. Informed consent was obtained from the patient for this publication. Gunjan Desai: collecting data, analysis of data, preparing the initial draft of the manuscript, critical revision of the manuscript for intellectual content, technical support, material support, study supervision. Prasad Pande: critical revision of the manuscript for
intellectual content, technical support, material support, study supervision. Rajvilas Narkhede: critical revision of the manuscript for intellectual content, technical support, material support, study supervision. Dattaprasanna Kulkarni: study concept, critical revision of the manuscript for intellectual content, administrative, technical and material support, study supervision. This is a case report and does not require such registration. The publication was performed after the approval of research protocols by the Ethics Committee of Lilavati Hospital and Research Centre in accordance with international agreements. Gunjan S Desai is the article guarantor. Not commissioned, externally peer-reviewed.
Multiple collaterals had developed. Minimum hilar dissection ensured preservation of the collateral supply to the biliary enteric anastomosis. Discussion: Definitive biliary enteric repair should be delayed till collateral circulation is established within the hilar plate, hepatoduodenal ligament and perihepatic/peribiliary collaterals to provide an adequate arterial blood supply to the biliary confluence and the extrahepatic portion of the bile duct. Conclusion: Assessment of the hepatic arteries should be part of the investigation of all complex biliary injuries. Delayed definitive biliary enteric repair ensures a well-perfused anastomosis. Minimum hilar dissection is the key to preserving the biliary and hepatic neovasculature.
Advances in nanomaterial synthesis have led to the development of solar cells that can potentially combine high efficiency with lower production costs than conventional cells. One example is the dye sensitised solar cell, first demonstrated in 1991 by O'Regan and Grätzel , which uses nano-structured TiO2 to enhance the absorption by a thin layer of dye and for which efficiencies have reached 12% . Quantum dot sensitised solar cells (QDSSCs) are a variation of this design in which colloidal quantum dots (QDs) replace the organic dye. A number of the properties of QDs make them well-suited to the role of photoabsorber. In particular, they have a band gap that can often be size-tuned to optimise exploitation of the solar spectrum; they are photo-stable and highly absorbing; and they can in some cases exhibit multiple exciton generation. This process has the potential to enable the Shockley–Queisser limit of solar cell efficiency to be exceeded . QDs can also be used to replace the electrolyte as the hole-transporting medium . QDs composed of a number of different materials have been used as photoabsorbers in QDSSCs, including CdS, CdSe, CdTe, PbS, PbSe, InP, GaAs and HgTe . The efficiencies of QD-based cells have shown rapid improvement in recent years, with the greatest reported efficiency currently 7% . Recently, QDs with a Type-II or quasi-Type-II structure have begun to be investigated as photoabsorbers. Type-II QDs have a core/shell structure in which the electron and hole localise in different regions. In contrast, both charge carriers are contained within the same volume in Type-I QDs; in a quasi-Type-II structure, one carrier is delocalised over the whole QD whilst the other is confined to a particular region. A Type-II or quasi-Type-II structure reduces the overlap between the electron and hole wave-functions, decreasing the rate of direct recombination and hence potentially improving the efficiency of charge extraction from the QD. In these structures the band edge transition is
between the valence band of one component and the conduction band of the other, which red-shifts the absorption edge from what can be achieved in QDs composed of either material alone. This can be an advantage in some cases because it allows the band-edge to be shifted closer to the optimum value for exploitation of the solar spectrum, ~ 1.35 eV . QDSSCs sensitised by ZnSe/CdS , CdS/ZnS and ZnTe/ZnSe Type-II QDs have been investigated previously. However, all of these had absorption edges in the visible part of the spectrum at wavelengths less than 600 nm, and so were not well-suited to the efficient use of the solar spectrum. Very recently, a QDSSC incorporating CdTe/CdSe QDs has been reported , which had an absorption edge in the near-infrared. There are a number of outstanding issues that require study before the design of Type-II QDs can be optimised for the sensitization of QDSSCs. In particular, a consequence of using a Type-II structure is that the probability that one or other of the charge carriers is found in the shell region is significantly increased. This will lead to an increased likelihood that this carrier will interact with surface traps. Surface trapping has been shown to result in very rapid recombination in Type-II QDs , which might more than offset the reduction in direct recombination. It is also not clear whether there is an advantage in localising the hole in the core of the QD, away from surface traps, rather than the electron, or vice versa. The region in which each charge is localised will also affect the rate at which it can be extracted from the QD. In this study, we compare the performance of QDSSCs sensitised by CdTe/CdSe and CdSe/CdTe QDs, which localise the hole in the core and shell regions, respectively. We also investigate QDSSCs incorporating CdTe/CdSe/CdS and CdSe/CdTe/CdS core/shell/shell QDs. The conduction and valence band structure for each of these QDs is shown schematically in Fig.
1. The values shown are band offsets calculated from bulk ionisation potentials and electron affinities . The CdS outer shell acts as a potential barrier to holes, reducing the overlap of their wave-functions with any surface states. CdTe has been shown to be vulnerable to corrosion by the sulphide electrolyte commonly used in QDSSCs, and the CdS layer also protects against this . Initially, the synthesis of each of the QDs is detailed. The structural characterisation of the QDs by scanning transmission electron microscopy and powder X-ray diffraction is also reported, as is their optical characterisation by absorption and photoluminescence (PL) spectroscopy. We also describe an assessment of the effect of surface traps on the recombination rate for each QD using transient PL spectroscopy and PL quantum yield measurements. Finally, the photovoltaic performance of QDSSCs based on each of the QDs is reported and discussed . Cadmium oxide, oleic acid, octadecene, selenium, tri-n-octylphosphine (TOP), octadecylamine, tetradecylphosphonic acid, tellurium, and sulphur were used as purchased. All reactions were carried out under nitrogen using Schlenk techniques. Anhydrous solvents were used in all procedures. QDs with a spherical shape are believed to penetrate nanoporous TiO2 electrodes more effectively than other shapes . CdSe nanoparticles can be grown with either a wurtzite or zinc blende crystal structure, with only the latter being isotropic and hence ensuring uniform growth in all directions and a spherical shape. Thus, to obtain spherical growth, a method was chosen to ensure that a zinc blende crystal structure is formed, as opposed to the hexagonal wurtzite structure, which is more stable at higher temperatures. For CdTe shell growth, 0.04 mol dm−3 cadmium oleate and 0.05 mol dm−3 TOP-Te stock solutions were prepared and stored under nitrogen. In a typical reaction 1 mL CdSe cores were injected
CdTe/CdSe and CdSe/CdTe core/shell colloidal quantum dots, both with and without a second CdS shell, have been synthesised and characterised by absorption and photoluminescence spectroscopies, scanning transmission electron microscopy and X-ray diffraction. Each type of quantum dot had a zinc blende crystal structure and an absorption edge in the near-infrared, potentially enabling more efficient exploitation of the solar spectrum.
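The Type-II classification above rests on the band-offset estimate described for Fig. 1: conduction and valence band edges placed relative to the vacuum level using bulk electron affinities and ionisation potentials. The following sketch illustrates that arithmetic only; the numerical values are round-number placeholders chosen for illustration, not the paper's data.

```python
# Illustrative sketch: estimating core/shell band alignment from assumed
# bulk electron affinities (EA) and ionisation potentials (IP), in eV.
materials = {
    # name: (electron_affinity_eV, ionisation_potential_eV) -- placeholder values
    "core":  (4.3, 5.8),
    "shell": (4.0, 5.7),
}

def band_offsets(core, shell):
    """Return (conduction_offset, valence_offset) in eV, shell minus core.

    Conduction band minimum ~ -EA and valence band maximum ~ -IP,
    both measured relative to the vacuum level.
    """
    ea_c, ip_c = core
    ea_s, ip_s = shell
    cbm_core, vbm_core = -ea_c, -ip_c
    cbm_shell, vbm_shell = -ea_s, -ip_s
    return cbm_shell - cbm_core, vbm_shell - vbm_core

dcb, dvb = band_offsets(materials["core"], materials["shell"])

# A Type-II (staggered) alignment has both offsets stepping in the same
# direction, so the electron and hole minimise energy in different regions:
is_type_ii = (dcb > 0) == (dvb > 0)
```

With the placeholder numbers above, both band edges of the shell sit higher than those of the core, so the hole localises in the shell and the electron in the core, i.e. a staggered (Type-II) alignment.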
it is unlikely that CTAB would have affected this cell fraction. Only minor fractions of CTAB were recovered as FFA and phospholipid, with 25.61 ± 6.05% of CTAB remaining on the column. These data suggest that the adsorbed CTAB did not significantly affect the composition of the lipid fraction. Therefore, CTAB must have increased the amount of phospholipids available for extraction due to the solubilisation of the phospholipid cell membrane ; this may also have contributed to the increased total recovered lipid when harvesting by foam flotation. The neutral lipids found within microalgae extracts are mainly comprised of triacylglycerols (TAGs) . Although both polar and neutral lipids can be converted to biodiesel, neutral lipids are the desired fraction as TAGs are easily transesterified to biodiesel . FFA can also be converted after esterification ; however, Van Gerpen reported that phosphorus-containing compounds in the crude lipid oil did not convert into the methyl esters, which may cause problems during conversion and combustion processes . However, whilst not necessarily valuable for biodiesel production, there are established and growing markets for certain phospholipids and their by-products that may form important outputs as part of an algae biorefinery. CTAB is used as a food grade chemical for the extraction of pigments from red beet; therefore procedures to ensure that the product is fit for human/animal consumption are established . Phospholipids can also be recycled as sources of nitrogen and phosphorus for microalgae cultivation, which could significantly reduce the operational production costs . The effect of foam flotation on the fatty acid profile was investigated. It was important to characterise the fatty acid profile as this can dramatically affect the quality of the biodiesel product and inform the economics of a biorefinery producing lipid-based high value products. No significant difference was found between the quantity of FAMEs gained from cells harvested by
centrifugation or foam flotation, confirming that the adsorbed CTAB did not significantly affect the neutral lipid fraction. The total transesterifiable lipid for cells harvested by centrifugation was 6.4 ± 1.3% dry weight (DW) and 5.6 ± 0.3% for cells harvested by foam flotation. However, discernible changes within the FAME profiles were noted. Significantly higher yields of the monounsaturated fatty acid oleic acid were recovered from cells harvested by foam flotation compared to cells harvested by centrifugation. Significantly greater yields of total monounsaturated fatty acids (MUFA) and saturated fatty acids (SFA) were also recovered from foam flotation harvested cells. In terms of biodiesel quality, higher proportions of MUFA and SFA are desirable as they increase the fuel's energy yield and cetane number, and also improve its oxidative stability . Surprisingly, cells harvested by centrifugation had higher yields of total polyunsaturated fatty acids, at 32.5 ± 0.48% DW compared to 25.3 ± 0.54% DW for cells harvested by foam flotation, including higher yields of linoleic and linolenic acids. There was no significant difference in the C18 series between the two harvest methods. Knothe stated that palmitic, stearic, oleic, and linolenic acids are the most common fatty acids present in biodiesel. These components equate to 24.7 ± 0.46% for centrifugation and 23.3 ± 0.30% for foam flotation; there was no significant difference between the harvesting methods. Lee et al. tested the effect of three different flocculating methods: pH adjustment, treatment with aluminium sulphate, and treatment with Pestan, on the lipid content of Botryococcus braunii. It was found that the total lipid content was unaffected by the harvest method; however, no investigation into the fatty acid profile was carried out. Borges et al.
also found no significant difference in total microalgal lipid content with respect to harvest method when comparing anionic and cationic polyacrylamide flocculants; however, the fatty acid profile differed significantly between the different flocculants. It would therefore appear that the choice of harvest method can greatly affect lipid product quantity and quality. Harvesting of Chlorella sp. by foam flotation is most effective during phases of active culture growth, suggesting that foam flotation may prove particularly advantageous for species that synthesise desirable biochemicals during active growth, but not necessarily as beneficial for species cultured specifically for biodiesel production. A greater quantity of lipid was recovered when biomass was harvested by foam flotation as opposed to centrifugation. This study is the first to investigate the effect of CTAB-aided foam flotation harvesting on lipid content and fatty acid profiles. The improved lipid recovery occurred due to a combination of an increase in the total extractable lipid caused by the solubilisation of the phospholipid bilayer by the surfactant CTAB, and a proportion of the CTAB dose becoming adsorbed onto the cells and entering the lipid extraction process. Foam flotation resulted in a predominance of saturated and monounsaturated fatty acids within the fatty acid profile, which provide many favourable features for biodiesel production. Foam flotation is an advantageous microalgae harvesting technique, and a full technoeconomic analysis in relation to microalgae biorefining is greatly needed.
Foam flotation is an effective and energy efficient method of harvesting microalgae. Surprisingly, the quantities of lipid recovered from microalgae harvested by foam flotation using the surfactant cetyl trimethylammonium bromide (CTAB) were significantly higher than from cells harvested by centrifugation. Further, cells harvested by CTAB-aided foam flotation exhibited a lipid profile more suited to biodiesel conversion, containing increased levels of saturated and monounsaturated fatty acids. However, further evidence also suggested that CTAB promoted in situ cell lysis by solubilising the phospholipid bilayer, thus increasing the amount of extractable lipid. This work demonstrates substantial added value of foam flotation as a microalgae harvesting method beyond energy efficient biomass recovery.
in training schools. iLEAPS also supports participants from developing countries to attend conferences and training schools to ensure truly global coverage. The iLEAPS newsletter, with a hard-copy circulation of ca. 3000 and available online to the whole community and beyond, highlights important aspects of iLEAPS work to large audiences. With the emerging Future Earth, iLEAPS will initiate and join integrated activities that aim at providing sustainable solutions via co-design with funders, scientists, and private sector stakeholders relevant to the question at hand. The initiatives that iLEAPS is developing in collaboration with its sister projects include the joint iLEAPS-GEWEX initiative Aerosols, Clouds, Precipitation, Climate, which was restructured in early 2013; the joint iLEAPS-GLP-AIMES initiative Interactions among Managed Ecosystems, Climate, and Societies; the joint IGAC-iLEAPS-WMO Interdisciplinary Biomass Burning Initiative; the joint iLEAPS-GEWEX theme on Bridging the Gaps between Hydrometeorological and Biogeochemical Land-Surface Modeling; the joint iLEAPS-ESA initiative Biosphere-Atmosphere-Society Index; the Extreme Events and Environments initiative, which aims to connect the two separate communities working on temporary climate extremes and permanently extreme environments, respectively, to shed light on the adaptive capacities of the Earth’s ecosystems and societies; and the ambitious international programme The Pan-Eurasian Experiment, which includes more than 40 institutes in Europe, Russia, and China. In addition to these active initiatives, more work is in preparation: iLEAPS is planning to engage with the adaptation community on hotspot areas, especially in the Arctic and in Africa, with Latin America and East and South Asia in mind as well. The regional offices of iLEAPS are a crucial element of this work. Over the last decade, the importance of land-atmosphere processes and feedbacks in the Earth system has been shown on many levels and with multiple
approaches, and a number of publications have shown the crucial role of the terrestrial ecosystems as regulators of climate and atmospheric composition. Modelers have clearly shown the adverse effect of neglecting land cover changes and other feedback processes in current Earth system models and have recommended actions to improve them. Unprecedented insights into the long-term net impacts of aerosols on clouds and precipitation have also been provided. Land-cover change has been emphasized through model intercomparison projects that showed that realistic land-use representation was essential in land surface modeling. Crucially important tools in this research have been the networks of long-term flux stations and large-scale land-atmosphere observation platforms, which are also beginning to combine remote sensing techniques with ground observations. The first decade of iLEAPS work focussed mainly on natural, pristine environments. The result has been a substantial increase in our understanding of the processes controlling land-atmosphere interactions, but the uncertainties related to their role in the Earth system are still large. In addition, increasingly, the main influence modifying ecosystems is human society. Humans are now one of the strongest influences on climate and the environment in the history of the Earth, and can no longer be ignored in studies of the Earth system and land-atmosphere interactions. The second phase of iLEAPS will concentrate on interactions between natural and human systems and on feedbacks among climate, atmospheric chemistry, land use and land cover changes, socioeconomic development, and human decision-making. iLEAPS will contribute to Future Earth’s agenda to provide research and knowledge to support the transformation of societies toward global sustainability.
The Integrated Land Ecosystem-Atmosphere Processes Study (iLEAPS) is an international research project focussing on the fundamental processes that link land-atmosphere exchange, climate, the water cycle, and tropospheric chemistry. The project was established in 2004 within the International Geosphere-Biosphere Programme (IGBP). During its first decade, iLEAPS has proven to be a vital project, well equipped to build a community to address the challenges involved in understanding the complex Earth system: multidisciplinary, integrative approaches for both observations and modeling. The iLEAPS community has made major advances in process understanding, land-surface modeling, and observation techniques and networks. The modes of iLEAPS operation include elucidating specific iLEAPS scientific questions through networks of process studies, field campaigns, modeling, long-term integrated field studies, international interdisciplinary mega-campaigns, synthesis studies, and databases, as well as conferences on specific scientific questions and synthesis meetings. Another essential component of iLEAPS is knowledge transfer, and it also encourages community- and policy-related outreach activities associated with the regional integrative projects. As a result of its first decade of work, iLEAPS is now setting the agenda for its next phase (2014–2024) under the new international initiative, Future Earth. Human influence has always been an important part of land-atmosphere science, but in order to respond to the new challenges of global sustainability, closer ties with social science and economics groups will be necessary to produce realistic estimates of land use and anthropogenic emissions by analysing future population increase, migration patterns, food production allocation, land management practices, energy production, industrial development, and urbanization.
one function parameter in the probability distribution function, and the parameter name must be “aa”, “bb” or “cc”, corresponding to the first, second or third fitting parameter. Note also that a sample file for the AMIDAS code can be downloaded from the AMIDAS website. For doing Bayesian fitting, users need to set the lower and upper bounds of the scanning range for each fitting parameter. Moreover, once the probability distribution function of one fitting parameter has been chosen as, e.g., Gaussian-distributed, it will automatically be required to set also the expected value and the standard deviation of this parameter. On the other hand, for the case that a user-defined probability distribution function has been used for one fitting parameter, users also have to give the notation and the unit of this parameter for the output files and plots. Finally, as the last information for our Bayesian analysis procedure, users have the opportunity to choose among different scanning methods. So far in the AMIDAS-II package we have programmed three different scanning methods. For a finer scanning, the third option, “scan the whole parameter space roughly and then the neighborhood of the valid points more precisely”, lets AMIDAS-II immediately and randomly scan the neighborhood of a point in order to find a better point, once this point is valid or almost valid. Users can set the number of scanning points for one fitting parameter. Note that the default AMIDAS simulation setup shown in Sections 2.2–2.4, Sections 3.2–3.7 and Sections 5.2–5.4 has been generally used; the total number of WIMP-signal events is now set as 500 on average . Meanwhile, for numerical simulations, two plots for users’ comparisons will also be offered. One is the comparison of the reconstructed rough velocity distribution with the input functional form . The distribution of each Bayesian reconstructed fitting parameter, e.g.
the characteristic Solar and Earth’s Galactic velocities, in all simulated experiments or analyzed data sets will also be provided by the AMIDAS-II package. Moreover, for reconstructions with more than one fitting parameter, the projection of the result points on each 2-D plane of the parameter space will also be given. In this paper, we give a detailed user’s guide to the AMIDAS package and website, which is developed for online simulations and data analyses for direct Dark Matter detection experiments and phenomenology. AMIDAS has the ability to do full Monte Carlo simulations as well as to analyze real/pseudo data sets either generated by other event-generating programs or recorded in direct DM detection experiments. Recently, the whole AMIDAS package and website system has been upgraded to the second phase, AMIDAS-II, to include the newly developed Bayesian analysis technique. Users can run all functions and adopt the default input setup used in our earlier works for their simulations as well as for analyzing their own real/pseudo data sets. The use of the AMIDAS website for users’ simulations and data analyses has been explained step by step with plots in this paper. The preparations of function/data files to upload for simulations and data analyses have also been described. Moreover, for more flexible and user-oriented use, users have the option to set their own target nuclei as well as their favorite/needed one-dimensional WIMP velocity distribution function, elastic nuclear form factors for the SI and SD WIMP–nucleus cross sections, and different probability distribution functions needed in the Bayesian reconstruction procedure. As examples, the AMIDAS-II codes for all user-uploadable functions are given in Sections 3 and 5 as well as Appendices B and C. In summary, up to now all basic functions of the AMIDAS package and website have been well established. Hopefully this new tool can help our theoretical as well as experimental colleagues to understand properties of
halo WIMPs, offer useful information to indirect DM detection as well as collider experiments, and finally discover Galactic DM particles.
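The coarse-then-fine strategy of the third scanning option described above can be sketched as follows. This is an illustrative Python sketch, not the actual AMIDAS-II routine; the function and parameter names are assumptions:

```python
import random

def coarse_then_fine_scan(log_post, bounds, n_points=1000,
                          accept=-2.0, shrink=0.1, n_local=20):
    """Randomly scan the whole parameter space; whenever a point is
    valid or almost valid (log-posterior within `accept` of the best
    found so far), immediately scan its neighborhood more precisely."""
    best = (None, float("-inf"))
    widths = [hi - lo for lo, hi in bounds]

    def draw(center=None, scale=1.0):
        # Uniform draw over the full box, or over a shrunken box
        # around `center`, clamped to the scanning bounds.
        if center is None:
            return [random.uniform(lo, hi) for lo, hi in bounds]
        return [min(max(c + random.uniform(-w, w) * scale, lo), hi)
                for c, w, (lo, hi) in zip(center, widths, bounds)]

    for _ in range(n_points):
        p = draw()
        lp = log_post(p)
        if lp > best[1]:
            best = (p, lp)
        if lp > best[1] + accept:          # "valid or almost valid" point
            for _ in range(n_local):       # refine its neighborhood
                q = draw(center=p, scale=shrink)
                lq = log_post(q)
                if lq > best[1]:
                    best = (q, lq)
    return best
```

With a one-parameter Gaussian-like log-posterior, the refinement loop quickly concentrates points near the maximum while the coarse loop keeps exploring the full range.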
In this paper, we give a detailed user's guide to the AMIDAS (A Model-Independent Data Analysis System) package and website, which is developed for online simulations and data analyses for direct Dark Matter detection experiments and phenomenology. Recently, the whole AMIDAS package and website system has been upgraded to the second phase, AMIDAS-II, to include the newly developed Bayesian analysis technique. AMIDAS has the ability to do full Monte Carlo simulations as well as to analyze real/pseudo data sets either generated by other event-generating programs or recorded in direct DM detection experiments. Moreover, the AMIDAS-II package can include several "user-defined" functions into the main code: the (fitting) one-dimensional WIMP velocity distribution function, the nuclear form factors for spin-independent and spin-dependent cross sections, an artificial/experimental background spectrum for both the simulation and data analysis procedures, as well as the different distribution functions needed in Bayesian analyses.
robots. With the ability to incorporate enhanced hardware and AI/machine learning to carry out many everyday jobs, smart automation now enables the discovery of new molecules and improvements to existing chemical syntheses. In addition, AI/machine learning coupled with ‘big data’-generating systems will, and in some cases already can, directly provide outputs for many purposes in the various fields of chemistry. This is because the fundamental nature of AI/machine learning permits the model to be updated and continuously refreshed as new data is produced, leading to more discoveries that cover a larger area of chemical space and eliminating negative confounding factors. In our view, the field's enthusiasm should now focus on the potential of these developments for chemical discovery, with emphasis on automation coupled with machine learning, harnessing the powerful capabilities of these approaches shown throughout this opinion article. Here, we have shown how automation and machine learning can improve efficiency and accuracy and are therefore a universal combination for synthesis, optimization, and discovery in the chemistry laboratory. Outstanding Questions: How can we enable synthetic chemists to operate chemputers without having to know how to code? How much of current chemistry can be done with the Q1 2019 chemputer? Can we drive adoption of the chemputer via the development of a new way to write synthesis protocols?
There is a growing drive in the chemistry community to exploit rapidly growing robotic technologies along with artificial intelligence-based approaches.Applying this to chemistry requires a holistic approach to chemical synthesis design and execution.Here, we outline a universal approach to this problem beginning with an abstract representation of the practice of chemical synthesis that then informs the programming and automation required for its practical realization.Using this foundation to construct closed-loop robotic chemical search engines, we can generate new discoveries that may be verified, optimized, and repeated entirely automatically.These robots can perform chemical reactions and analyses much faster than can be done manually.As such, this leads to a road map whereby molecules can be discovered, optimized, and made on demand from a digital code.
stress. However, it is important to note that an oxide would generate a compressive lateral stress, which must be balanced by a tensile stress present deeper in the matrix. Therefore, if the oxide is intergranular, a lateral compressive stress is generated that is balanced by a tensile stress ahead of the oxide along the grain boundary. This tensile stress, which cannot be measured with the FIB micro-hole drilling technique, can enhance the oxidation kinetics. Once a considerable amount of intergranular oxide is formed, it would help to reduce the Ni transport to the surface along the grain boundary, leading to a higher compressive stress near the GB and a resultant higher biaxial tensile stress ahead of the oxidation front. This high compressive stress would then be partially relieved by pipe diffusion of Ni atoms to the surface, with Ni nodule formation also in the regions adjacent to the GB itself. The extent of the intergranular oxide penetration measured via FIB cross-sectioning analysis was found to vary according to the sign and magnitude of the residual stress. The intergranular oxide penetration depth profile reported in Fig.
8 clearly highlighted this behaviour. In particular, regions of the specimen subjected to a compressive residual stress always showed intergranular oxide penetrations of less than 280 nm, whereas regions characterized by tensile residual stress exhibited GB oxide penetrations greater than 400 nm. The increased depth of preferential intergranular oxide penetration in regions with tensile residual stress could be associated with the stress-assisted diffusion of Cr to the GB and the vacancy migration away from the GB. In fact, Cr is expected to diffuse faster to GBs under a tensile stress, thus increasing the intergranular oxidation rate. Moreover, in the presence of a tensile stress the O solubility in the material is markedly increased, and its diffusivity is affected by the hydrostatic stress gradient present in the host metal in a similar manner to other interstitial elements such as hydrogen. Oxygen locally diffuses from compressive or low-tension zones towards those in high tension. In the current case, when a residual tensile stress is present, a stress gradient is generated between the surface and the bulk, and a hydrostatic stress can be present only in the bulk material. This stress gradient is ultimately believed to be responsible for the enhanced oxygen diffusion along the grain boundary, and for the phenomenon of stress-assisted preferential intergranular oxidation. The beneficial effect of compressive stresses on the intergranular oxidation susceptibility is markedly visible in region “A” of Fig.
1. At the extrados, where the residual compressive stress was high, the intergranular oxide penetration was less than 200 nm, but as the residual compressive stress decreased, the intergranular oxide penetration markedly increased, reaching a depth of 300 nm in the vicinity of the region “A” to “B” transition, where the residual compressive stress was much lower. It is also important to note that the specimen was plastically bent and therefore a considerable amount of plastic strain was present in the sample, especially in regions “A” and “E” of Fig. 1. Thus, the effect of plastic strain on the intergranular oxidation susceptibility must also be considered and evaluated. In fact, it is well known that the presence of plastic strain promotes strain incompatibilities and strain gradients across the grain boundary, which can affect cracking initiation and propagation as well as increase the intergranular oxidation susceptibility and cracking of Alloy 600 and austenitic alloys. Therefore, it might have an additive effect to the stress, promoting intergranular oxidation even in regions where a compressive residual stress would hinder it. The results of this study have shown that the combination of advanced micromechanical and analytical techniques is providing new insight into processes occurring at the surface/near-surface regions in Alloy 600 during exposure in H2–steam at 480 °C, and is thus relevant to the early stages of SCC initiation phenomena: Advanced AEM characterisation of FIB-produced cross-section specimens demonstrated that Alloy 600 underwent both internal and preferential intergranular oxidation in H2-steam at 480 °C. The XRD stress profiles revealed a marked variation in the stresses at the GB, suggesting a correlation between the oxide surface morphology, the intergranular oxidation susceptibility and the residual stress. The FIB-based micro-hole drilling technique was successfully employed to obtain residual stress profiles across grain boundaries for the first time on
polycrystalline material. These residual stress results are correlated with the susceptibility of the material to preferential intergranular oxidation. The presence of residual tensile stress enhances the intergranular oxidation susceptibility of Alloy 600, whereas plastic strain has a secondary influence. Local and random GB oxide morphology variations were often seen at the intersections of twin and high-angle grain boundaries, suggesting that the GB character can play an important role in the oxidation susceptibility. The presence of Al and Ti oxides at HAGBs might play a fundamental role in the first stage of Alloy 600 preferential intergranular oxidation, acting as a precursor event for the subsequent Cr-rich oxide formation. The novel FIB micro-hole drilling technique could be employed to measure stresses in a variety of polycrystalline materials in order to understand the effect of stress localization on environmentally assisted cracking susceptibility.
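The stress-assisted diffusion invoked above is commonly modeled by adding a drift term, proportional to the hydrostatic stress gradient, to Fick's first law. The following sketch illustrates this standard one-dimensional form; the numerical values in the usage example are illustrative assumptions, not measurements from this study:

```python
# Stress-modified Fick's first law (1-D), as commonly written for
# interstitial diffusion under a hydrostatic stress gradient:
#   J = -D * dc/dx + (D * c * V / (R * T)) * dsigma_h/dx
# A positive stress gradient (towards higher tension) drives the
# solute towards the high-tension region.

R = 8.314  # gas constant, J mol^-1 K^-1

def stress_assisted_flux(D, c, dc_dx, V_molar, dsigma_dx, T):
    """Diffusive flux (mol m^-2 s^-1).
    D: diffusivity (m^2/s), c: concentration (mol/m^3),
    dc_dx: concentration gradient, V_molar: partial molar volume
    of the solute (m^3/mol), dsigma_dx: hydrostatic stress gradient
    (Pa/m), T: temperature (K)."""
    fickian = -D * dc_dx
    drift = D * c * V_molar * dsigma_dx / (R * T)
    return fickian + drift

# Illustrative check at 480 degC (753 K): with a flat concentration
# profile, a tensile stress gradient alone produces a positive flux
# towards the high-tension region. All numbers are assumptions.
flux = stress_assisted_flux(D=1e-14, c=1.0, dc_dx=0.0,
                            V_molar=2e-6, dsigma_dx=1e9, T=753.0)
```

This is why, in the text above, oxygen is said to diffuse from compressive or low-tension zones towards those in high tension even without a concentration gradient.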
Analytical electron microscopy was employed to characterize the early stages of oxidation to aid in developing an understanding of the stress corrosion cracking behaviour of this alloy.The measurements of residual stresses at the microscopic level using a recently-developed FIB micro-hole drilling technique revealed a correlation between local stress variations at the grain boundaries and the oxide morphology.
According to the ambitious climate change mitigation targets set in the Paris Agreement, there is a need for rapid and effective reductions in global greenhouse gas (GHG) emissions. The Paris Agreement aims at holding the increase in the global average temperature well below 2 °C compared to pre-industrial levels, and pursues efforts to limit the increase to 1.5 °C. Boreal forests and forestry may contribute substantially to the global carbon cycle and the mitigation of climate change. This is because boreal forests sequester large amounts of carbon dioxide from the atmosphere and provide forest biomass for the growing needs of the bioeconomy, which will reduce the use of fossil fuels. One way to reduce the GHG emissions from the production and use of fossil-based products and fuels is to replace them with wood-based products and fuels. Increased use of wood-based products and fuels can limit GHG emissions through the substitution effect and enhance the removal of CO2 from the atmosphere by increasing the carbon stocks in wood-based products. The climate benefits of wood utilization are typically considered self-evident if sustainable forestry holds, i.e.
when the harvested forest area remains forest and new trees replace the harvested trees in the area. Wood utilization is, however, more complicated from the viewpoint of climate change mitigation if time aspects are taken into account. Firstly, wood harvesting reduces the carbon stocks of forests, compared to unharvested forests. Secondly, most of the carbon in new wood-based products and fuels will also be released back to the atmosphere rapidly, especially from biofuels and paper products. This leads to a situation in which GHG emissions, measured as carbon dioxide equivalents, are increased in the atmosphere over a certain time interval if the substitution effects of harvested wood cannot compensate for the carbon debt in forests before new forest growth does. From a climate change mitigation point of view, increased biogenic CO2 emissions are analogous to an increase in fossil-based carbon emissions, especially when studied over short time periods. In this sense, to gain climate benefits over time, harvested wood should be used for products and fuels that release fewer GHG emissions to the atmosphere than the substituted fossil-based products and fuels. Additionally, we should simultaneously increase carbon sequestration in forests. In recent years, the climate impacts of the forest-based bioeconomy have been assessed in many simulation-based studies, considering changes in carbon stocks in forests and wood-based products and fuels. In many previous studies, the climate benefit of substituting non-wood products and fuels with wood-based ones has been quantified through a displacement factor (DF), which expresses the amount of reduced GHG emissions per mass unit of wood use, when producing a functionally equivalent product or fuel. In its calculation, the GHG emissions of all stages of the life cycles of products and fuels are taken into account, but DFs do not cover the impacts of wood harvesting on the carbon stocks of forests and wood-based products and fuels. When
the interpretation of the climate impacts of wood-based products and fuels is based only on the values of DFs, changes in carbon stocks in forests and wood-based products are not considered. However, they should be considered in the evaluation of the net climate impacts of forest biomass use over time. The results predicted by simulation models for carbon stock development in forests depend especially on the quality of the input data and on the models’ capability to describe the relevant carbon flow processes in forests. For assessing DFs, the life cycle assessments of both wood- and non-wood-based products and fuels also include uncertainties, even though the methodology has been standardized and guidelines exist for the calculation rules of LCA. For example, the forest industry produces a wide range of wood product types and materials, the DFs of which are difficult to assess on regional and market levels because of data gaps in real substitution situations and the challenges related to GHG assessments in product comparisons. In practice, the assessments employ different choices and assumptions in the methodology and input data, which may be site- and region-specific. The reported DFs have in most cases been positive for wood-based products. This means that they cause lower GHG emissions compared to fossil-based alternatives. In general, the use of wood-based products and fuels may be assumed to have positive net climate impacts over time if their emission reductions due to DFs are greater than the reduction in the carbon stocks in forests and wood-based products and fuels over a selected time period. In this study, the aim was to develop a methodology to assess a required displacement factor for all wood products and bioenergy manufactured and harvested in a certain country in order to achieve zero CO2 equivalent emissions from increased forest utilization over time in comparison with a selected baseline harvesting scenario. We applied the methodology in the real case of Finland to assess
the RDF at the country level. In order to interpret the RDF results, the magnitude of the average DF for all domestic wood-based products and fuels produced in the Finnish forest industry was assessed. In this study, WU in Eq. includes all wood material that is harvested from forest sites. Furthermore, the calculation of GHG emissions is based on the use of GWP factors for different GHG emissions in order to express the results as CO2 equivalents of the emissions. GHG emissions represent fossil-based emissions along the life cycles of products and fuels in the techno-sphere. If ΔCF is positive in Eq., forests act as carbon sinks. If Net C is negative, the forest utilization causes more CO2 equivalent emissions than it reduces. If the result of Eq. is negative,
However, the DFs of individual products and their production volumes could not be used alone to evaluate the climate impacts of forest utilization.For this reason, in this study we have developed a methodology to assess a required displacement factor (RDF) for all wood products and bioenergy manufactured and harvested in a certain country in order to achieve zero CO2 equivalent emissions from increased forest utilization over time in comparison with a selected baseline harvesting scenario.
of INT1 the corresponding emissions will be clearly smaller, i.e. 367 Mt CO2 equivalents for the first 50 years and 696 Mt CO2 equivalents for the whole period. Finnish GHG emissions in 2015 were 55.6 Mt CO2 equivalents. Assumptions and uncertainties in the models and their input data contribute to the results of the RDF. In addition, the uncertainty aspects related to the estimation of an average DF for all domestic wood-based products and fuels produced in a country play an important role in the interpretation of RDF results. In our study, the average RDFs during the 100-year period, obtained from the difference between the basic and INT scenarios in 2017–2116, are clearly greater than our average displacement factors for domestic wood-based products and fuels produced from Finnish forests. Our estimation is quite similar to the average DF of 1.2 tC tC−1 obtained in the meta-analysis by Leskinen et al., in which DFs were derived from 51 case studies on products mostly covering wood used in construction materials. However, it is important to note that the substitution impacts of forest utilization at country levels have been estimated to be lower than estimates calculated for individual products. At the country level, two recent studies report average DFs of 0.5 tC tC−1 in Switzerland and Canada. The results indicate that our rough estimation of DFF may be an overestimate and can be considered “a maximum value”. For this reason, our estimates of DFF will probably lead to overly positive interpretations of the climate impacts of wood utilization. The Monsu model has been developed by utilizing large sets of empirical observations on forest growth and soil respiration, considering also changes in tree growth due to climate change. Sensitivity analyses have been conducted with the Monsu model on changes in the carbon pools of living forest biomass, dead organic matter and wood products, as well as carbon releases from harvesting, in regard to management and wood use
intensity. It can be assumed that Monsu can describe well the current carbon balance of forest utilization in Finland, but possible changes in environmental circumstances will be challenging for future predictions in all models. For example, simulation models seldom consider the effects of forest management and harvesting intensity, and climate change, on different abiotic and biotic disturbances. The aging of forests and the increasing volume of growing stock, especially in Norway spruce, may increase different abiotic and biotic damage to forests by windstorms, drought, insects, pathogens, and forest fires. As a result of large-scale disturbances, forest carbon stocks may decrease and large amounts of carbon may be released into the atmosphere. One way to check the reliability of our calculations is to compare them to corresponding results produced by different forest simulation models. As shown in Section 4.1, the MELA model produces quite similar results, but the timeframe of the comparison was only 30 years. However, comprehensive comparisons have not been available. For these reasons, it is important to carry out comparative studies in order to understand the behavior of forest simulation models and their possible limitations, so as to improve conclusions about the reliability of the calculated RDFs. In the method developed in this study, the determination of the required displacement factor for additional domestic wood harvesting was based on the difference in the carbon stocks in forests and wood-based products and fuels between two wood harvesting scenarios during a certain time period. Here, an RDF expresses the minimum efficiency of using forest biomass to reduce net GHG emissions. The 100-year simulation of the use of domestic round wood by the Finnish forest industry revealed that increasing wood harvesting permanently by 19 Mm3 yr−1 from the basic level would lead to a required displacement factor of 2.4 tC tC−1 for wood-based products and fuels obtained from the
increased harvest in 2017–2116.This would compensate for the decreased carbon sinks in forests and changes in the carbon stocks of wood-based products.However, reported displacement factors for wood-based products and fuels and the share of wood-based products and fuels manufactured in Finland indicate that the average displacement factor of wood-based products and fuels produced in the Finnish forest industry is probably under 1.1 tC tC−1.The lower value of DFF compared to the assessed value of RDF means more net GHG emissions to the atmosphere.The increase of 9.6 Mm3 yr−1 in wood harvesting in Finland will cause only slightly smaller RDFs during the next 100 years compared to the increase of 19 Mm3 yr−1.The results indicate that the increase of harvesting intensity in the current situation represents a challenge for the Finnish forest-based bioeconomy from the viewpoint of climate change mitigation.Our method is also applicable in other countries and it is straightforward to apply at a country level to calculate the RDFs for additional harvesting and utilization of domestic round wood for different wood-based products and fuels, if forest simulation models and required input datasets are available.However, to reduce the uncertainty of RDF calculations and to improve the interpretation of results, there is a need to produce corresponding results using also other simulation models and different circumstances.Better estimations on the average DF of wood-based products and fuels manufactured from domestic wood for the current situation and in the future are also needed.
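The logic of the RDF determination described above can be condensed into a simple sketch: the reduction in forest and product carbon stocks between the two harvesting scenarios, divided by the additional harvested wood carbon, gives the minimum displacement factor needed for zero net emissions. This is a simplified illustration of the approach, not the study's exact equation, and the numbers in the usage example are hypothetical:

```python
def required_displacement_factor(dC_forest, dC_products, dWU_carbon):
    """Required displacement factor in tC per tC of additional wood use.

    dC_forest:   reduction in forest carbon stock between the baseline
                 and the increased-harvest scenario (tC)
    dC_products: reduction in the carbon stock of wood-based products
                 and fuels between the scenarios (tC)
    dWU_carbon:  additional harvested wood, expressed as carbon (tC)

    The RDF is the avoided fossil emissions per unit of extra harvested
    wood carbon that would exactly compensate the carbon-stock losses.
    Simplified sketch; the study's Eq. may include further GHG terms.
    """
    return (dC_forest + dC_products) / dWU_carbon

# Hypothetical example: a 120 tC total stock reduction compensated by
# 50 tC of additional wood use requires a DF of 2.4 tC/tC.
rdf = required_displacement_factor(100.0, 20.0, 50.0)
```

If the average DF actually achieved by the product portfolio (here estimated below 1.1 tC tC−1) falls short of the RDF, the increased harvest causes net GHG emissions over the studied period.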
A displacement factor (DF) may be used to describe the efficiency of using wood-based products or fuels instead of fossil-based ones to reduce net greenhouse gas (GHG) emissions.Input data for calculations were produced with the simulation model, Monsu, capable of predicting the carbon stocks of forests and wood-based products.We tested the calculations in Finnish conditions in a 100-year time horizon and estimated the current average DF of manufactured wood-based products and fuels in Finland for the interpretation of RDF results.However, the estimated average DF of manufactured wood-based products and fuels currently in Finland was less than 1.1 tC tC−1.The results indicate strongly that the increased harvesting intensity from the current situation would represent a challenge for the Finnish forest-based bioeconomy from the viewpoint of climate change mitigation.For this reason, there is an immediate need to improve reliability and applicability of the RDF approach by repeating corresponding calculations in different circumstances and by improving estimations of DFs on country levels.
5 V. The measurement was repeated with 1500 ns pulses and compared to an NDMOS with the gate grounded. In this case, the device capability is determined by avalanche breakdown of the vertical NDMOS. The on-time of the clamp was also measured, and is approximately 1 μs. Qualification showed that the product passes 3 kV HBM and 750 V CDM. An unexpected LU failure was detected on the 1st revision of the silicon. Measurements were performed on the bench in order to understand the root cause of the failure. All board capacitances on the supply pin were removed in order to analyse the failure without causing destruction. The supply was powered with a 57 V source and a negative current sweep was done on the impacted IO. At −3 mA the supply went abruptly into compliance. When the LU source was removed, the device resumed normal operation. Triggering of the supply clamp was suspected. A closer look at the layout of the failing product revealed the presence of a substrate NPN between “any N-type pocket”, the substrate and the IO under test. The suspected victim collector pockets were the cathodes of the zener stack used as the static trigger of the supply clamp. A FIB was performed to disconnect these zeners, which indeed solved the problem. On the Rev2 silicon, the static trigger of the supply clamp was removed and additional NPLUG guard-rings were added. The device now passes 100 mA LU. Up to 1 kV differential and 2 kV common-mode surge robustness is required for indoor PoE applications. The surge generator has a rise time of 8 μs and a decay time of 20 μs when discharged into a short circuit. For data lines, a 40 Ω resistance is placed in series with the 2 Ω surge generator, resulting in a 48 A peak current for a 2 kV surge. For the analysis, a differential surge on an unpowered device has been assumed as the worst case. A powered surge benefits from the CPD capacitance, and a common-mode surge results in a reduced discharge current, depending on the common-mode impedance to earth. For symmetrical communication lines a 5/320 μs
common-mode surge is applicable as well. During system-level events the current needs to be handled by the TVS, and the supply clamp of the IC should have a Vt2 above the TVS clamping voltage. A pulsed absmax characterization of the Rev2 silicon and the TVS devices was done from 100 ns until DC. The short pulses were measured with TLP. Intermediate pulses were generated with a custom pulse generator. The results are summarized in Fig. 13. Furthermore, a characterization of the TVS devices was done with 20 μs rectangular pulses as a worst-case approximation of an 8/20 μs surge. We observed that the on-resistance of the TVS devices depends strongly on the pulse length, indicating that this is related to the self-heating of the TVS. When the TVS is used sufficiently below its thermal capability, the voltage drop of the TVS is significantly below the maximum value listed in the datasheet and it will therefore adequately protect the controller IC. When larger surge protection levels are required, a secondary protection approach is proposed. This scheme uses two external resistors and an optional secondary TVS. The resistances RVPP & RRTN can be inserted without impacting the operation of the controller if their resistance value is low enough. It is recommended to limit RVPP to 10 Ω in order not to disturb the communication between PD and PSE during classification. Adding the RRTN resistance is also possible on the 802.3bt PD controller with integrated pass-switch by using a dedicated sense pin on the controller, separated from the high-current path through the drain of the pass-switch. The primary TVS is either an SMAJ58A or a 1SMB58A. The optional series resistors RVPP and RRTN are 10 Ω, and the optional secondary TVS is an SMF58A. Measurements were done with several configurations of DTVS_1, RVPP = RRTN and DTVS_2. Tables 3 & 4 list the current capability for 20 μs and 1 ms rectangular pulses. Table 3 indicates that by using an SMF as secondary TVS it was not possible to make the application fail with the
current capability of the test setup. In Fig. 16, a rectangular pulse of 20 μs was applied. The measured configuration uses DTVS_1 = SMAJ58A, RVPP = RRTN = 10 Ω and DTVS_2 = SMF58A. One voltage waveform shows the voltage across the primary TVS, the other the voltage across the controller IC. The primary TVS was operated close to its capability, confirming the measurement in Fig. 14 and showing VPORTP ~120 V. Table 4 indicates that by using the secondary protection approach, the surge capability is determined solely by the primary TVS. During the design of an ESD supply power clamp it is important to also consider the application-level surge requirements and the characteristics of the external TVS typically used to clamp the surge current. A pulsed measurement method was demonstrated in order to extract the characteristics of the IC and the external TVS, allowing an analytical design of the application diagram for a given protection level. Co-design of the IC and the application schematic revealed the possibility to insert a small series resistor on dedicated pins without impacting the operation of the controller. This is optionally complemented with a small secondary TVS, and allows a scalable surge protection level. A robustness against >50 A for 20 μs pulses was demonstrated, equivalent to a 2 kV test level for an 8/20 μs current surge applied line-to-line. This approach also opens up the possibility to further extend the surge immunity to the levels required for extreme outdoor environments.
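The surge-current arithmetic used above can be sketched as follows. The generator and series resistances follow the values quoted in the text (2 Ω generator impedance, 40 Ω series resistance for data lines, 10 Ω RVPP/RRTN), while the clamping voltages in the usage example are rough assumptions, not datasheet values:

```python
def surge_peak_current(v_surge, r_gen=2.0, r_series=40.0):
    """Peak surge current (A) into the protection network for a surge
    generator approximated as an ideal source behind r_gen, with the
    standard 40-ohm series resistance added for data lines."""
    return v_surge / (r_gen + r_series)

def secondary_tvs_current(v_clamp_primary, v_clamp_secondary,
                          r_vpp=10.0, r_rtn=10.0):
    """Current (A) diverted through the RVPP/RRTN resistors into the
    optional secondary TVS, assuming both TVS devices clamp at roughly
    constant voltages during the pulse."""
    return (v_clamp_primary - v_clamp_secondary) / (r_vpp + r_rtn)

# A 2 kV differential surge on data lines gives roughly the 48 A peak
# quoted in the text: 2000 V / (2 + 40) ohm ~ 47.6 A.
i_peak = surge_peak_current(2000.0)

# Hypothetical clamp voltages: primary at ~120 V (VPORTP observation),
# secondary assumed at 80 V, giving 2 A through the secondary stage.
i_secondary = secondary_tvs_current(120.0, 80.0)
```

The second function shows why the surge capability of the two-stage scheme is set by the primary TVS: the series resistors limit the residual current into the secondary TVS to a few amperes.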
In this paper we present the development of an electrostatic discharge (ESD) protection supply clamp for an 802.3bt PD PoE controller. A characterization method for extending the System-Efficient ESD Design (SEED) approach to surge pulses is presented and applied to the IC and the external components. Finally, an application diagram for enhanced surge protection is proposed that enables the PD PoE controller to operate in harsh electrical environments.
that is taken up by each slice, which is always displayed in the Investigator View.The final percentages can be easily logged by the investigator, e.g., by taking a screen shot.In the Participant View, the participant uses the knobs to manipulate sizes of slices on the wheel.The participant is also able to use the reset button to re-initialize the wheel to equally sized slices.The participants can view the slice titles and if the investigator allows, the participant can also see the percentages of the wheel that are currently occupied by each slice.It is simple to toggle between the Investigator and Participant Views.In a typical usage setting, the investigator will use the Investigator View to set the desired settings for the question to be asked, and the participant will adjust the slices of the wheel with the touchable knobs to give a visual estimate of their answer.The investigator can then switch back to Investigator View, log the information and proceed to the next question.The probability wheel could be used to help a person estimate the probabilities of possible events occurring.For example, in a consultation about transverse rectus abdominis muscle flap breast reconstruction, the attending surgeon could use the probability wheel to estimate the probabilities that no complications will arise; that the worst complication will be minor enough to only require local wound care; that the worst complication will be serious enough to require an invasive procedure; or that the worst complication will be a life-threatening complication.Using the above case as an example, in the mobile app, the attending surgeon can set up four options such as no complications, minor complications with local wound care, serious complications with invasive procedures, and life-threatening complication in the investigator view, and use the probability wheel in the participant view to estimate the probabilities of these possible events occurring.The attending surgeon can also use the 
mobile app’s visualization as a graphic aid while explaining these possible outcomes to the patient. The probability wheel can also be used to assist in the utility assessment process, by varying the probabilities of two complementary health states to determine the evaluation of an intermediate health state. For example, in a breast reconstruction consultation, the complementary health states could be a TRAM flap reconstruction surgery with no complications vs. a TRAM flap reconstruction with life-threatening complications. The attending surgeon can use the mobile app to show a visualized utility assessment between these two possible events to the patient. This notion of a standard gamble, where the anchors are the best possible state and the worst possible state, is well established in the decision science literature. In clinical decision analysis, the standard gamble evaluates an intermediate health state compared to the best possible health state and the worst possible health state. Procedures and tools for computerized utility assessment have been previously developed. Here we presented a mobile app implementation of the probability wheel on both Android and iOS platforms. Compared to the physical prop of a traditional probability wheel, this software application is more portable, available, and versatile, e.g., it is simple to adjust the number of slices. We envision that this tool will improve decision consultations by enabling accurate quantitative estimates of probabilities and utilities.
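A minimal model of the wheel's normalization logic and the standard-gamble relationship can be sketched as follows. The class, method and slice names are hypothetical illustrations, not the app's actual implementation:

```python
class ProbabilityWheel:
    """Minimal model of the app's wheel: named slices whose relative
    weights are normalized to percentages summing to 100%."""

    def __init__(self, titles):
        # Equal-sized slices initially, as after the app's reset button.
        self.sizes = {t: 1.0 for t in titles}

    def set_size(self, title, weight):
        # The participant's knob adjustment: change a slice's weight.
        self.sizes[title] = max(0.0, weight)

    def reset(self):
        for t in self.sizes:
            self.sizes[t] = 1.0

    def percentages(self):
        # Percentages shown in the Investigator View.
        total = sum(self.sizes.values())
        return {t: 100.0 * w / total for t, w in self.sizes.items()}


def standard_gamble_utility(p_best):
    """In a standard gamble anchored at the best state (utility 1) and
    the worst state (utility 0), the intermediate state's utility equals
    the indifference probability of the best outcome."""
    return p_best

# Hypothetical consultation: four possible outcomes of a TRAM flap
# reconstruction, with weights adjusted by the participant.
wheel = ProbabilityWheel(["none", "minor", "serious", "life-threatening"])
wheel.set_size("none", 6.0)
wheel.set_size("minor", 2.0)
pct = wheel.percentages()
```

Here the participant never enters numbers directly; the investigator reads off the normalized percentages, which is how the app attenuates anchoring bias relative to asking for a probability outright.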
A probability wheel app is intended to facilitate communication between two people, an “investigator” and a “participant”, about uncertainties inherent in decision-making. A user adjusts the sizes of the slices to indicate the relative value of the probabilities assigned to them. A probability wheel can improve the adjustment process and attenuate the effect of anchoring bias when it is used to estimate or communicate probabilities of outcomes. The goal of this work was to develop a mobile application of the probability wheel that is portable, easily available, and more versatile.
in persons with Mtb infection or active TB represents a possibility that must be considered when testing therapeutic TB vaccines on persons with active TB disease. The possibility of pulmonary and systemic inflammatory reactions , and the breakdown of granuloma structure potentially resulting in Mtb dissemination, should be considered and monitored. Respiratory function tests may be useful to monitor lung safety and measure therapeutic impact on lung morbidity. Mostly minor adverse events have been observed in most therapeutic vaccine studies conducted to date , but enrollment in a phase 2 trial of the M72/AS01E vaccine candidate was interrupted prematurely after the observation in TB patients of local reactions larger than expected . A careful safety risk management plan should be instituted to identify and mitigate potential risks. Safety data will need to be interpreted in the context of drug treatment co-administered to sick individuals. The potential severity of TB may justify considering different acceptable safety thresholds for therapeutic vaccines than for prophylactic vaccines given to healthy people at minimal risk of adverse health outcomes. The primary audience for this WHO PPC document includes all entities involved in the development of therapeutic vaccines for improvement of TB treatment outcomes. This PPC is presented as a complement to the existing WHO PPC on the development of TB vaccines intended to prevent TB disease , providing guidance to scientists, funding agencies, and public and private sector organizations developing therapeutic TB vaccine candidates. It is anticipated that PPCs provide a framework for developers to design their development plan and define in more detail specific target product profiles. WHO PPCs are developed following a consensus-generating wide consultation process involving experts and stakeholders in the field. Key policy considerations are highlighted, but preferred attributes expressed here do not pre-empt future policy decisions. The PPC criteria proposed are aspirational in nature. Some aspects of a potentially effective therapeutic vaccine may diverge from those proposed in this PPC, which would not necessarily preclude successful licensure and a positive policy decision for clinical application. Achieving proof-of-concept for a therapeutic vaccine, based on its ability to reduce recurrence rates over one year or more following a drug-mediated cure, represents the preferred short-term strategic goal of therapeutic TB vaccine development. Assessing the efficacy of a vaccine delivered at some point during treatment, possibly at the end of the initial intensive treatment phase, against end-of-treatment failure and recurrence may also be considered as a short-term goal. Initial studies should be conducted in the context of standard recommended drug treatment. Mtb samples obtained prior to the initiation of treatment should be available to assess whether a given recurrence is due to reactivation of the same, initially infecting Mtb strain, or whether it results from a de novo Mtb infection that occurred near or after the end of treatment. While initial proof-of-concept may best be generated in patients with drug-sensitive TB, reducing the emergence of drug-resistant TB represents an important strategic goal. Investigations in special populations should be initiated rapidly after initial demonstration of efficacy. Shortening and simplifying drug regimens represent essential long-term goals. Proof-of-concept should imperatively trigger vaccine evaluation for other indications, including prevention of TB in the general population or in recently exposed individuals. Initial demonstration of preventive vaccine efficacy in other target populations should trigger evaluation of the vaccine as a potential therapeutic adjunct, as these various indications are of importance to public health. Tuberculosis Vaccine Initiative, Lelystad, The Netherlands
Treatment failure and recurrence after end-of-treatment can have devastating consequences, including progressive debilitation, death, the transmission of Mycobacterium tuberculosis – the infectious agent responsible for causing TB – to others, and may be associated with the development of drug-resistant TB. The burden on health systems is substantial, with severe economic consequences. Vaccines have the potential to serve as immunotherapeutic adjuncts to antibiotic treatment regimens for TB. A therapeutic vaccine for TB patients, administered towards completion of a prescribed course of drug therapy or at certain time(s) during treatment, could improve outcomes through immune-mediated control and even clearance of bacteria, potentially prevent re-infection, and provide an opportunity to shorten and simplify drug treatment regimens. The preferred product characteristics (PPC) for therapeutic TB vaccines described in this document are intended to provide guidance to scientists, funding agencies, and public and private sector organizations developing such vaccine candidates. This document presents potential clinical end-points for evidence generation and discusses key considerations about potential clinical development strategies.
time series of food availability. Trends in these variables may help to explain the increased likelihood of maturation at small lengths, but could not be considered in this study as the relevant data were not available. In light of the apparent correlation between regional differences in the fisheries and the rate of change in Lp50, we suggest further investigation into the role of fishing. Time series of fishing mortality rates could be included in Eq., as well as interactions between fishing and the other environmental variables. If fishing mortality rates and a greater number of environmental or physiological variables were included in Eq. then it may be possible to confirm whether fishing explains some of the trends in Lp50, either directly or through interactions. The potential role of fisheries-induced evolution could also be assessed by calculating time series of selection differentials due to fishing. Estimates of fishing gear selectivity and time series of fishing mortality rates in the Clyde will be needed to investigate how fishing may have been influencing maturation, so conducting stock assessments to derive these estimates will be a necessary first step. Time series of PMRNs describe temporal changes to maturation propensity independently of potential changes in growth, so temporal trends in growth were not considered in this paper. Growth rates can vary in response to the same conditions as maturation schedules – environmental, physiological and selective conditions – and a similar investigation into the growth of fish from the Clyde and wider west coast of Scotland would complement this study. Haddock, whiting and female cod in the Scottish west coast have been maturing at progressively smaller lengths and younger ages since 1986, and this has occurred most rapidly in the Clyde populations of haddock and whiting. As decreases in lengths at maturation can reduce lengths-at-age and maximum lengths by prematurely slowing growth rates, the steep decline in the abundance of large fish in the Clyde may be partially explained by these trends in maturation. Declines in Clyde landings coincided with decreases in large fish abundance, and typical catches increasingly consisted of small unmarketable individuals. Trawl fishing always truncates length structures, lowering the abundance of large fish, but if it has also been causing increasingly early maturation then the fishing process has induced a response which may have further reduced the probability of individuals growing to a large size. A reversal of these trends in maturation may promote increases in the abundance of large fish, which is needed if the Clyde demersal fishery is to be restored. If fishing has caused the observed declines in Lp50, then the amount of time since Clyde vessels stopped targeting demersal fish – from 2005 – has been insufficient for a recovery. This may be due to Nephrops vessels continuing to catch large quantities of fish. If discarding levels of the Nephrops fleet have not reduced since the 1980s and 1990s, then current fishing activity may be preventing lengths at maturation from increasing; this may also explain why the community length structure has not shown signs of improvement. Furthermore, if the changes in maturation schedules have been partially caused by evolutionary responses to size-selective fishing, then this process may be ongoing through the Nephrops fishery, and may even have been accelerated by the increased use of nets with smaller mesh sizes. If there is an evolutionary component to the declines in maturation lengths, then increases in Lp50 will be gradual and likely to require periods of time similar to the initial decreases. Further work is still needed to determine why Clyde demersal fish have shown such rapid declines in length at maturation, and to assess means of reversing these trends.
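As context for the Lp50 values discussed above: Lp50 is the midpoint of a logistic maturation ogive, and it can be computed directly from fitted coefficients. The sketch below uses invented coefficients, not estimates from the Clyde data, and omits the age- and cohort-specific structure of a full PMRN:

```python
import math

def maturation_probability(length, b0, b1, b_temp=0.0, temp=0.0):
    # Logistic maturation ogive: p = 1 / (1 + exp(-(b0 + b1*L + b_temp*T))).
    z = b0 + b1 * length + b_temp * temp
    return 1.0 / (1.0 + math.exp(-z))

def lp50(b0, b1, b_temp=0.0, temp=0.0):
    # Length at which maturation probability is 50%:
    # solve b0 + b1*L + b_temp*T = 0 for L.
    return -(b0 + b_temp * temp) / b1

# Illustrative coefficients (NOT estimated from the Clyde data).
b0, b1, b_temp = -9.0, 0.30, 0.10
print(lp50(b0, b1))               # midpoint without the temperature covariate
print(lp50(b0, b1, b_temp, 9.5))  # a positive temperature effect lowers Lp50
```

In this formulation, a covariate such as sea-surface temperature shifts the midpoint without changing the slope, which is one way growth-independent plasticity can be accounted for.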
Probabilistic maturation reaction norms (PMRNs) were used to investigate the maturation schedules of cod, haddock and whiting in the Firth of Clyde to determine if typical lengths at maturation have changed significantly since 1986. Some potential sources of growth-independent plasticity were accounted for by including sea-surface temperature and abundance variables in the analysis. The PMRNs of the Clyde populations were compared with those from the wider west coast, in conjunction with regional differences in the fishery, to assess whether fishing may have been driving the observed trends of decreasing lengths at maturation. The lengths at which haddock, whiting and female cod were likely to mature decreased significantly during 1986-2009, with rates of change being particularly rapid in the Clyde. It was not possible to estimate PMRNs for male cod due to limited data. Trends in temperature and abundance were shown to have only marginal effects upon PMRN positions, so temporal trends in maturation schedules appear to have been due to a combination of plastic responses to other environmental variables and/or fishing. Regional differences in fishing intensity and the size-selectivity of the fisheries suggest that the decreases in lengths at maturation have been at least partially due to fishing. The importance and scale of the Clyde Nephrops fishery increased as demersal landings declined, and the majority of demersal fish landings have come from Nephrops bycatch since about 2005, when the demersal fishery ceased. Since it appears as though fishing may have caused increasingly early maturation, and a substantial Nephrops fishery continues to operate in the Clyde, reversal of these changes is likely to take a long time - particularly if there is an evolutionary component to the trends. If size-selective fishing has contributed to the lowered abundance of large fish by encouraging maturation at increasingly small lengths, then large fish may remain uncommon in the Clyde until the observed trends in maturation lengths reverse.
The prefrontal cortex (PFC) is the part of the brain responsible for the behavioral repertoire. Inspired by PFC functionality and connectivity, as well as the process by which human behavior is formed, we propose a novel modular architecture of neural networks with a Behavioral Module (BM) and a corresponding end-to-end training strategy. This approach allows the efficient learning of behavior and preference representations. This property is particularly useful for user modeling and recommendation tasks, as it allows learning personalized representations of different user states. In experiments with video game playing, the results show that the proposed method allows separation of the main task's objectives and behaviors between different BMs. The experiments also show network extendability through independent learning of new behavior patterns. Moreover, we demonstrate a strategy for an efficient transfer of newly learned BMs to unseen tasks.
An extendable modular architecture is proposed for developing a variety of agent behaviors in DQN.
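As a rough sketch of this kind of architecture, a shared trunk plus swappable behavior-module heads, the snippet below uses invented dimensions and untrained random weights; it illustrates the modularity only, not the authors' trained DQN:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class ModularQNetwork:
    """Sketch of a DQN trunk with swappable behavior modules (BMs).

    A shared trunk extracts features from the state; each BM is an
    independent head whose output biases the Q-values, so a new behavior
    can be added without touching the trunk or the other modules.
    """

    def __init__(self, state_dim, hidden, n_actions):
        self.W_trunk = rng.normal(0, 0.1, (state_dim, hidden))
        self.W_q = rng.normal(0, 0.1, (hidden, n_actions))
        self.modules = {}  # name -> BM head weights (hidden, n_actions)

    def add_module(self, name, hidden, n_actions):
        # A new behavior is learned as a separate head; existing parts
        # of the network can stay frozen.
        self.modules[name] = rng.normal(0, 0.1, (hidden, n_actions))

    def q_values(self, state, behavior=None):
        features = relu(state @ self.W_trunk)
        q = features @ self.W_q
        if behavior is not None:
            q = q + features @ self.modules[behavior]
        return q

net = ModularQNetwork(state_dim=8, hidden=16, n_actions=4)
net.add_module("aggressive", 16, 4)
net.add_module("cautious", 16, 4)
s = rng.normal(size=8)
print(net.q_values(s, "aggressive"))
```

The same separation supports transfer: a BM head trained on one task can be plugged into the trunk of another network with a compatible feature size.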
In this work, we propose a self-supervised method to learn sentence representations with an injection of linguistic knowledge. Multiple linguistic frameworks propose diverse sentence structures from which semantic meaning might be expressed out of compositional word operations. We aim to take advantage of this linguistic diversity and learn to represent sentences by contrasting these diverse views. Formally, multiple views of the same sentence are mapped to close representations. On the contrary, views from other sentences are mapped further apart. By contrasting different linguistic views, we aim at building embeddings which better capture semantics and which are less sensitive to the sentence's outward form.
We aim to exploit the diversity of linguistic structures to build sentence representations.
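The contrastive objective described above is in the spirit of an InfoNCE-style loss; a minimal NumPy sketch is below, with random vectors standing in for the linguistic-view encoders (which are not implemented here), so it shows the loss behavior rather than the paper's actual model:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Contrastive loss over two views of the same sentences.

    anchors[i] and positives[i] are embeddings of two linguistic views of
    sentence i; all other rows in the batch act as negatives.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # cosine similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # pull matching views together

rng = np.random.default_rng(0)
views_a = rng.normal(size=(8, 16))
aligned = views_a + 0.01 * rng.normal(size=(8, 16))  # near-identical views
shuffled = rng.normal(size=(8, 16))                  # unrelated views
print(info_nce(views_a, aligned), info_nce(views_a, shuffled))
```

Views of the same sentence yield a low loss, while unrelated pairs yield a high one, which is exactly the gradient signal that pulls matching views together and pushes other sentences apart.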
of the electric field occurs so fast that there is no excess ion diffusion in the direction of the field. This indicates that electrode polarization and space charge effects are prevalent. The decrease of dielectric constants with increasing frequency is mainly attributed to the mismatch of the interfacial polarization of the composites to external electric fields at high frequencies . It is also evident that the dielectric constant is a function of dopant concentration, as shown in Fig. 12. The real component ε′, which represents the storage of energy during each cycle of the applied electric field, increases with filler loading and is attributed to the fractional increase in charges due to the addition of VO2+ ions in the pure PVA/MAA:EA polymer blend, whereas the imaginary component ε″, which represents the loss of energy in each cycle of the applied electric field, decreases with an increase in filler loading and is attributed to the reduction of charge transport due to the building up of space charge near the electrode/electrolyte interface resulting in high conductivity . The pure and VO2+ doped PVA/MAA:EA polymer blend films have been successfully synthesized using the solution casting method. A structural analysis shows an increase in amorphicity of the doped polymer blend films. The dTGA study shows an enhancement of the thermal stability of the system with increase in dopant concentration. The optical absorption spectrum exhibits three bands corresponding to the transitions 2B2g→2A1g, 2B2g→2B1g and 2B2g→2Eg, characteristic of VO2+ ions in octahedral symmetry with tetragonal distortion, and reveals that band gap values shift towards longer wavelengths on VO2+ ion doping, which is due to interband transitions. The EPR results show that g∥ < g⊥ < ge and A∥ > A⊥ for all VO2+ doped polymer blend films, which confirms that VO2+ ions exist in the polymer blend in octahedral coordination with tetragonal compression. The conductivity study shows that the addition of VO2+ ions to the polymer blend system enhances the ionic conductivity, which is attributed to the increase in amorphicity.
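For reference, the real and imaginary permittivity components discussed above are related to measurable quantities by the standard parallel-plate relations ε′ = Cd/(ε0A) and ε″ = ε′·tan δ. The values below are illustrative only, not measurements from this work:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def dielectric_components(cap_F, tan_delta, thickness_m, area_m2):
    """Standard parallel-plate relations for the complex permittivity.

    eps' = C*d / (eps0*A)   -> energy storage per cycle of the field
    eps'' = eps' * tan(delta) -> energy loss per cycle of the field
    """
    eps_real = cap_F * thickness_m / (EPS0 * area_m2)
    eps_imag = eps_real * tan_delta
    return eps_real, eps_imag

# Illustrative values for a thin polymer film (NOT data from this paper):
# 50 pF capacitance, loss tangent 0.02, 100 um thick, 1 cm^2 electrode area.
eps_r, eps_i = dielectric_components(cap_F=50e-12, tan_delta=0.02,
                                     thickness_m=100e-6, area_m2=1e-4)
print(eps_r, eps_i)
```

These relations are what an impedance analyzer applies internally when it reports ε′ and ε″ as functions of frequency.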
Pure and VO2+ doped PVA/MAA:EA polymer blend films were prepared by a solution casting method. The XRD pattern reveals an increase in amorphicity with increase in doping. The dTGA study shows an enhancement of the thermal stability of the system with increase in dopant concentration. The optical absorption spectrum exhibits three bands corresponding to the transitions 2B2g→2A1g, 2B2g→2B1g and 2B2g→2Eg, characteristic of VO2+ ions in octahedral symmetry with tetragonal distortion, and reflects that the optical band gap decreases with the increase of mol% of VO2+. EPR spectra of all the doped samples show a characteristic eight-line hyperfine structure of VO2+ ions, which arises due to the interaction of the unpaired electron with the 51V nucleus. The spin-Hamiltonian parameters (g and A) evaluated from the EPR spectra confirm that the vanadyl ions exist as VO2+ ions in octahedral co-ordination with a tetragonal compression and have C4v symmetry. The impedance spectroscopic study shows that the addition of VO2+ ions into the polymer blend system enhances the ionic conductivity, which is explained in terms of an increase in the amorphicity.
The presented data contains the microbial composition of a drinking water supply system for O'Kiep, Namaqualand, South Africa. Table 1 represents the bacterial composition of the source point at the lower Orange River, while Table 2 shows the microbial composition of the treated water, distributed by a state-owned agency responsible for water management activities in the region. Table 3 represents the microbial composition from a local municipal reservoir at O'Kiep storing the treated water from the water agency, which is further distributed to individual households in O'Kiep. Tables 4–10 represent the microbial composition at the point-of-use, i.e. household taps. The DWSS samples were obtained from a 100 km long pipe system designed to deliver a flow of 18 ML/day. Freshwater is sourced from the lower Orange River by a regional water supply system to the nearby towns including O'Kiep, which is located in the Northern Cape, Namaqualand region of South Africa . DWSS samples were collected in April 2017 from the source to the point-of-use, i.e. at numerous household taps, in non-transparent 500 mL sterile polyethylene bottles, which were immediately placed on ice prior to transportation to the laboratory. A composite sample was initially collected from the lower Orange River. The second sample was composed of the treated water prior to distribution at the local water supply agency reservoir. A similar composite sample was collected from the local municipal reservoir, and samples were randomly collected from household taps. All samples were handled according to the guidelines used for drinking water quality standard quantification . The samples were filtered through a 0.22-μm micropore cellulose membrane; the membrane was pre-washed with a sterile saline solution, followed by the isolation of the genomic DNA using a PowerWater® DNA isolation kit as per the manufacturer's guidelines. The DNA purity and concentration were quantified using microspectrophotometry, and the DNA concentration ranged from 10.7 to 17.3 ng/μL. The purified DNA was PCR amplified using the bacterial 16S rRNA forward primer 27F (5′-AGAGTTTGATCMTGGCTCAG-3′) and reverse primer 518R (5′-ATTACCGCGGCTGCTGG-3′), which targeted the V1 and V3 regions of the 16S rRNA. The PCR amplicons were sent for sequencing at Inqaba Biotechnical Industries, a commercial NGS service provider. Briefly, the PCR amplicons were gel purified, end repaired and illumina® specific adapter sequences were ligated to each amplicon. Following quantification, the samples were individually indexed, followed by a purification step. Amplicons were then sequenced using the illumina® MiSeq-2000, using a MiSeq V3 kit. Generally, 20 Mb of data were produced for each sample. The Basic Local Alignment Search Tool (BLAST)-based data analyses were performed using an Inqaba Biotech in-house developed data analysis system. Overall, sequences were deposited in two databases, i.e. the National Centre of Biotechnology (NCBI) and the Sequence Read Archive (SRA) database, prior to the generation of accession numbers for individual bacterial species.
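Once reads are assigned to taxa, the per-sample bacterial composition reported in the tables reduces to relative abundances; a minimal sketch with hypothetical counts (not taken from this dataset):

```python
def relative_abundance(read_counts):
    """Convert per-taxon 16S read counts into relative abundances (%).

    read_counts: dict mapping taxon name -> number of assigned reads.
    """
    total = sum(read_counts.values())
    if total == 0:
        raise ValueError("no reads assigned")
    return {taxon: 100.0 * n / total for taxon, n in read_counts.items()}

# Hypothetical counts for one sampling point (illustration only).
counts = {"Pseudomonas": 5200, "Flavobacterium": 2600,
          "Sphingomonas": 1300, "unclassified": 900}
profile = relative_abundance(counts)
print({t: round(p, 1) for t, p in profile.items()})
```

Reporting percentages rather than raw counts makes samples with different sequencing depths (here ~20 Mb per sample) comparable across the distribution system.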
The metagenomic data presented herein contains the bacterial community profile of a drinking water supply system (DWSS) supplying O'Kiep, Namaqualand, South Africa. Representative samples from the source (Orange River) to the point of use (O'Kiep), through a 150 km DWSS used for drinking water distribution, were analysed for bacterial content. PCR amplification of the 16S rRNA V1–V3 regions was undertaken using oligonucleotide primers 27F and 518R subsequent to DNA extraction. The PCR amplicons were processed using the illumina® reaction kits as per the manufacturer's guidelines and sequenced using the illumina® MiSeq-2000, by means of a MiSeq V3 kit. The data obtained were processed using the bioinformatics QIIME software with a compatible fast nucleic acid (fna) file. The raw sequences were deposited at the National Centre of Biotechnology (NCBI) and the Sequence Read Archive (SRA) database, obtaining accession numbers for each species identified.
Rheumatoid arthritis (RA) is a progressive systemic inflammatory disease characterized by joint destruction and functional disability . RA occurs globally in about 1.0% of the general population, with a 2–4-times higher prevalence in women than in men . Although the etiology of RA is not quite clear, some inflammatory cytokines such as tumor necrosis factor α (TNF-α) have been shown to play a central role in the occurrence and progression of RA . Infliximab, an inhibitor of TNF-α, is one of the most widely used biological disease-modifying antirheumatic drugs (DMARDs); combined use of infliximab and methotrexate (MTX) shows clinical and radiographic benefits compared with placebo in patients inadequately controlled with therapeutic doses of MTX . Because the therapeutic effects of infliximab have been demonstrated in several clinical studies , the primary goal of RA treatment has shifted from the achievement of clinical remission to sustained remission without biologic DMARDs, particularly in patients with RA in sustained remission . The first study reporting the possibility of biologic-free treatment in patients with RA was the TNF20 study . This trial indicated that early treatment of RA with infliximab induces a permanent response that persists even after discontinuation of the drug. After publication of the TNF20 study, the Behandelstrategieën study evaluated biologic-free treatment in a much larger cohort . Sixty-four percent of patients with early RA were able to discontinue infliximab, and in 56% of patients treated with MTX monotherapy for 2 years, low disease activity was maintained and progression of joint damage was inhibited. In established RA patients exhibiting an inadequate response to MTX, the Remission induction by Remicade in RA patients study also examined the possibility of biologic-free remission or low disease activity . The patients enrolled in the study were those who had reached and maintained a disease activity score 28 (DAS28) of less than 3.2 for more than 24 weeks with infliximab treatment and who then agreed to discontinue the treatment. Among the 102 evaluable patients who completed the study, 56 maintained low disease activity after 1 year and showed no progression in radiological damage or functional disturbance; 44 remained in clinical remission. In this context, a subanalysis of the dose-escalation study of infliximab with MTX showed a significant interaction between baseline TNF-α and the dose of infliximab in the clinical response. Additionally, the clinical response and disease activity were significantly better when the treatment was applied at 10 mg/kg than at 3 and 6 mg/kg in patients with a high baseline TNF-α . To achieve a clinical response and sustained remission, serum TNF-α could be considered a key indicator for optimal dosing of infliximab in RA treatment. The Remission induction by Raising the dose of Remicade in RA (RRRR) study was planned to compare the proportions of clinical remission based on the simplified disease activity index (SDAI) after 1 year of treatment, and its sustained remission rate after another 1 year, between the investigational treatment strategy and the standard strategy of 3 mg/kg per 8 weeks of infliximab administration in infliximab-naïve patients with RA showing an inadequate response to MTX. In this study, we describe the study design and baseline characteristics of the enrolled patients. Patients with RA were eligible for enrollment if they had active disease despite taking equal to or greater than 6 mg of MTX weekly, were 18 years of age or older at the time of enrollment, and had no prior infliximab use. Patients were excluded if they were taking corticosteroids at doses higher than 10 mg prednisolone equivalents/day, had an SDAI ≤11.0, had severe infections, had active tuberculosis or evidence of latent tuberculosis, had been given a diagnosis of systemic lupus erythematosus or any other form of concomitant arthritis, had congestive heart failure, or were pregnant or lactating during or within 6 months after treatment. All the patients gave written informed consent in accordance with the Declaration of Helsinki, and the trial was approved by the institutional review board at each participating institution. This trial was registered with the University Hospital Medical Information Network. The RRRR study was conducted as an open-label, parallel-group, multicenter randomized controlled trial. Eligible patients with RA who had active disease despite taking equal to or greater than 6 mg of MTX weekly were able to participate. They were randomly assigned in a 1:1 ratio to receive either a standard treatment or a programmed treatment, with the starting dose of infliximab based on the three categories of baseline TNF-α in addition to baseline MTX, after 10 weeks of enrollment. To ensure a balanced group design, the Clinical Research and Medical Innovation Center at Hokkaido University Hospital centrally performed the randomization using a computer-generated random-number-producing algorithm. Patients were randomly assigned in a one-to-one ratio to the standard treatment arm or the programmed treatment arm with the use of permuted blocks within each stratum. Sixteen strata for randomization consisted of disease duration, baseline SDAI, and baseline TNF-α. Treatment allocation was blinded to the reviewer of the patients' disease, but was open to both the patients and the physicians. Clinical response was measured using the SDAI, which is a well-validated composite measure of clinical disease activity . If patients had achieved an SDAI of less than or equal to 3.3 at the end of 54 weeks, they discontinued infliximab. Discontinuation of infliximab was maintained throughout follow-up until 158 weeks after enrollment unless patients showed clinical or radiologic progression. The treatment plans for the standard treatment arm and the programmed treatment arm are described in detail below. After enrollment, patients received 3 mg/kg infliximab at 0, 2, and 6 weeks. The same dose was taken every 8 weeks after 14 weeks. If the patients
showed an SDAI of less than or equal
Infliximab, an inhibitor of TNF-α, is one of the most widely used biological disease-modifying antirheumatic drugs. Recent studies indicated that baseline serum TNF-α could be considered as a key indicator for optimal dosing of infliximab for RA treatment to achieve the clinical response and its sustained remission. The Remission induction by Raising the dose of Remicade in RA (RRRR) study is an open-label, parallel-group, multicenter randomized controlled trial to compare the proportions of clinical remission based on the simplified disease activity index (SDAI) after 1 year of treatment, and its sustained remission rate after another 1 year, between the investigational treatment strategy (for which the dose of infliximab was chosen based on the baseline serum TNF) and the standard strategy of 3 mg/kg per 8 weeks of infliximab administration in infliximab-naïve patients with RA showing an inadequate response to MTX. The target sample size of randomized patients is 400 patients in total.
to 3.3 at 54 weeks, they discontinued infliximab. After enrollment, the patients received 3 mg/kg infliximab at 0, 2, and 6 weeks. The dose of infliximab was then selected based on baseline serum TNF-α. If serum TNF-α was less than 0.55 pg/mL, infliximab was kept at 3 mg/kg every 8 weeks after 14 weeks. If serum TNF-α was greater than 0.55 pg/mL but less than 1.65 pg/mL, infliximab was increased to 6 mg/kg at 14 weeks and maintained at 6 mg/kg every 8 weeks after 22 weeks. If serum TNF-α was 1.65 pg/mL or greater, infliximab was increased to 6 mg/kg at 14 weeks and to 10 mg/kg at 22 weeks; the dose of 10 mg/kg was then administered every 8 weeks after 30 weeks. If the patients showed an SDAI ≤3.3 at 54 weeks, they discontinued infliximab. The allocated dose could not be changed. Patients were dropped from the trial if they used biological DMARDs other than infliximab, increased the dose in the standard treatment arm, did not increase the dose in the programmed treatment arm, could not continue the treatment due to adverse events, were re-administered infliximab after its discontinuation, or had other reasons. During the infliximab treatment period, the same dose of concomitant treatment as at baseline was accepted, and dose reduction or halting of concomitant treatments was also possible if necessary. The primary endpoint was the proportion of patients who sustained discontinuation of infliximab 1 year after discontinuation of infliximab at the time of 54 weeks after the first administration of infliximab. The secondary endpoints were the proportion of clinical remission at the time of 54 weeks after the first administration of infliximab; the proportion of patients who sustained discontinuation of infliximab at 2 years after discontinuation of infliximab; the proportion of clinical remission based on SDAI and changes in SDAI from baseline at each time point; the proportion of clinical remission based on DAS28-ESR, DAS28-CRP, and Boolean-based definitions and the change in each value at each time point; radiographs of the hands, wrists, and feet, which were centrally assessed and assigned a score according to the van der Heijde modification of the total Sharp score; rheumatoid factor and matrix metalloproteinase-3 (MMP-3); the health assessment questionnaire (HAQ) and EQ-5D; serum infliximab concentration at the time of 54 weeks after the first administration of infliximab; and adverse events. Table 1 shows the details of data collection during the trial. Based on the RISING study, the proportions of clinical remission were assumed to be 21% and 34% for the standard treatment arm and programmed treatment arm, respectively . After the discontinuation of infliximab, if we assumed that the proportion of patients who sustained discontinuation was set as 55% in the standard treatment arm and 65% in the programmed treatment arm , the proportions of patients who sustained discontinuation of infliximab at 1 year after a discontinuation of infliximab at the time of 54 weeks after the first administration of infliximab in the standard treatment arm and programmed treatment arm were calculated as 11.6% and 22.1%, respectively. Based on 11.6% in the standard treatment arm and 22.1% in the programmed treatment arm, 199 randomized patients were needed for each treatment arm to have 80% power at a two-sided 5% level of significance. Considering a dropout rate of approximately 10% between the enrollment and the randomization, we sought to enroll 450 patients at most in the trial until the end of September 2013. The primary analysis will be conducted based on the intention-to-treat population, which includes all the patients enrolled and randomized in the trial. The proportion of sustained discontinuation at 1 year after a discontinuation of infliximab at the time of 54 weeks will be compared using the Cochran–Mantel–Haenszel test, with disease duration and baseline SDAI as stratification factors. The risk difference of the proportion of sustained discontinuation at 1 year after a discontinuation of infliximab and its 95% confidence intervals (CIs) will be calculated. To confirm the robustness of the primary results, the same analyses will be conducted in the population restricted to the patients who completed the planned infliximab treatment and entered the infliximab-free period. Subgroup analyses based on the disease duration, baseline SDAI, and baseline TNF-α concentration are planned. For the secondary endpoints, we will conduct the same analysis for the proportion of patients with clinical remission at the time of 54 weeks and the proportion of patients who sustained discontinuation of infliximab at 2 years after a discontinuation of infliximab. The proportions of clinical remission according to the DAS28-ESR-, DAS28-CRP-, and Boolean-based definitions will be calculated. Changes from baseline in SDAI, DAS28, rheumatoid factor, MMP-3, HAQ, EQ-5D, and the total Sharp score will be analyzed using a mixed model for repeated measures . Means and standard deviations will be calculated for all time points and displayed as a transition diagram. Time to discontinuation of infliximab and time until the loss of efficacy will be plotted using the Kaplan-Meier method with key survival statistics. Treatment arms will be compared using log-rank tests, and hazard ratios and 95% CIs will be estimated using a Cox proportional hazards model. Safety analysis will be conducted based on the safety population, which includes all patients who enrolled in the study and received infliximab at least once. The results will be shown with the treatment arms combined before randomization, and separately for each treatment arm after randomization. The numbers and proportions of adverse events will be calculated. As an exploratory analysis, logistic regression analyses will be performed in order to identify the predictors of clinical remission at 52 weeks and sustained remission after 1 year. Baseline characteristics
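The sample-size figures quoted in the design (11.6% vs. 22.1%, 199 patients per arm, 80% power, two-sided α = 0.05) can be reproduced with the standard normal-approximation formula for comparing two proportions. The protocol does not state which formula was actually used, so this is a consistency check rather than the trial's own calculation:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size for comparing two proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# Composite endpoints: remission rate x sustained-discontinuation rate,
# i.e. 0.21 * 0.55 = 0.1155 (~11.6%) and 0.34 * 0.65 = 0.221 (22.1%).
p_std, p_prog = 0.116, 0.221   # the rounded values reported in the design
print(ceil(n_per_arm(p_std, p_prog)))
```

With the reported rounded proportions, the formula gives 199 per arm; inflating by the anticipated ~10% pre-randomization dropout leads to the enrollment target of up to 450 patients.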
The primary endpoint is the proportion of patients who sustained discontinuation of infliximab for 1 year after discontinuing infliximab at the time of 54 weeks after its first administration. The secondary endpoints are the proportion of patients in clinical remission based on SDAI, changes in SDAI from baseline at each time point, other clinical parameters, quality-of-life measures, and adverse events.
time on crop monitoring and have a suboptimal timing of on-farm activities.We found that the number of plots had a significant, positive impact on TE.In other words, the potential positive TE impacts of having more plots as a way to adapt to variations in micro-level agro-climatic conditions outweighed the potential negative effects for the farmers in this sample.This finding is consistent with the positive effect of land fragmentation on TE previously observed in the Jiangxi and Gansu Provinces of China.Finally, the significant positive effect of the township dummy indicated that farms in the Luocheng township have a lower TE than those in the Heiquan township, when other factors affecting TE remain constant.Differences in agro-climatic factors and market conditions may explain this finding.Intercropping systems generally have higher land-use efficiencies than monocropping systems.It remains unclear, however, to what extent the higher yields per unit of land are obtained at the expense of the efficiency with which other inputs such as labour, water and nutrients can be used.In this study, we examined the contribution of intercropping to the TE of a smallholder farming system in northwest China.TE measures the output obtained for a certain crop, or combination of crops, as a share of the maximum attainable output from the same set of inputs used to produce the crop.The farm-level TE of a cropping system is a key determinant of its profitability, and thus an important determinant of the livelihood strategies of smallholder farmers.Although our analysis is limited to a relatively small region in northwest China, the insights it provides are likely to be relevant for other regions where intercropping methods are practiced, both in China and the rest of the world.The contribution of intercropping to TE was examined by estimating a translog stochastic production frontier and an efficiency equation using farm input and output data collected from 231 farm households in 
Gaotai County, in the Heihe River basin in northwest China.Our main finding is that intercropping has a significant positive effect on TE, implying that the potential negative effects of intercropping on the use efficiency of labour and other resources are more than offset by its higher land-use efficiency when compared with monocropping.The estimated elasticity of the proportion of land under intercropping was 0.744, indicating that TE goes up by 0.744% if the proportion of land used for intercropping increases by 1%.The large and significant value of this estimate gives strong support to the view that intercropping is a relatively efficient land-use system in the studied region.Our results imply that there is still considerable scope for increasing TE in Gaotai County without bringing in new technologies.Increasing the proportion of land used for intercropping may play an important role in this respect, given that only 60% of the land in this region was under intercropping in 2013 and that the elasticity of TE in terms of the proportion of land under intercropping is close to 0.8.The expected increase in TE will contribute to increasing farm output and farm profits without affecting the availability of scarce land resources.It should be noted, however, that this conclusion only holds under the assumption of constant output prices.If the production of non-grain crops like cumin and seed watermelon would increase, this could result in lower prices for these crops and negatively affect the TE, and hence the profitability, of these intercropping systems.Recent price declines for maize in China, on the other hand, have increased the TE of maize-based intercropping systems when compared with single maize crops.Farm size was found to play a key role among the control variables affecting TE.The non-linear relationship between TE and the area of cultivated land implies that ongoing policies aimed at increasing agriculture through the promotion of so-called family farms 
and the renting of land to co-operatives and private companies may make a positive contribution to the overall efficiency of farming in the region we examined.TE was estimated to be highest for farms that are twice as large as the average size observed in our study.The TE analysis employed in this study takes the available technology at the time of the survey as a given; however, productivity gains could also be made through the development and introduction of new technologies, both in intercropping systems and monocropping systems.In the case of intercropping systems, these changes could involve the promotion of new varieties to replace conventional cultivars of component crops, as well as the development of specialised machinery for intercropping systems to reduce its large labour demand.
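The practical meaning of the reported elasticity (0.744) can be illustrated with a small calculation. The constant-elasticity extrapolation below is an assumption made purely for illustration; it is not part of the study's estimation procedure.

```python
def te_gain(share_old, share_new, elasticity=0.744):
    """Predicted percentage change in technical efficiency (TE) when the
    proportion of land under intercropping changes, assuming the reported
    elasticity of 0.744 stays constant (an illustrative extrapolation)."""
    relative_change = (share_new - share_old) / share_old  # e.g. 0.10 = +10%
    return elasticity * relative_change * 100.0            # TE change in %
```

For example, raising the intercropped land share from 60% to 66% (a 10% relative increase) would, under this assumption, raise TE by roughly 7.4%.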
This traditional farming method generally results in a highly efficient use of land, but whether it also contributes to a higher technical efficiency remains unclear.Technical efficiency refers to the efficiency with which a given set of natural resources and other inputs can be used to produce crops.In this study, we examined the contribution of maize-based relay-strip intercropping to the technical efficiency of smallholder farming in northwest China.Data on the inputs and crop production of 231 farms were collected for the 2013 agricultural season using a farm survey held in Gaotai County, Gansu Province, China.Controlling for other factors, we found that the technical efficiency scores of these farms were positively affected by the proportion of land assigned to intercropping.This finding indicates that the potential negative effects of intercropping on the use efficiency of labour and other resources are more than offset by its higher land-use efficiency when compared with monocropping.
statistical precision in the measurement data. Comparison of fit results with different codes does reveal biases that are not under statistical control. The preferred way of dealing with uncertainties in radionuclide metrology is to make for each peak a detailed uncertainty budget and perform proper uncertainty propagation towards the final result. Uncertainty components include counting statistics, spectral interferences, impurities and background, residual deviations between fit and measurement, physical effects not included in the model, and model dependence of the fit result. Normalisation and correlation of uncertainty components are constraints that require specific propagation formulas. Equations and numerical examples can be found in Pommé. Statistical uncertainties are readily introduced into Eq., and the same equation can be used to propagate the interference of an impurity that affects part of the spectrum. Also a mismatch between fit and measured spectrum can be included in the uncertainty budget. Explicit uncertainties can be assigned to fit model dependence, contrary to ignoring this component when relying on the covariance matrix. Eq. is applicable in any situation in which adding an amount Δ to peak k implies the subtraction of the same amount from the rest of the spectrum, so that the total area ΣA remains invariable and the corresponding emission probability changes to Pk = (Ak + Δ)/ΣA. This situation occurs in the fit of an unresolved doublet, subtraction of tailing from a higher-energy peak, and correction for coincidence summing-in and summing-out effects. To a lesser extent, positively correlated uncertainties may also appear, for which the propagation factor is smaller. If the relative deviation is the same for all peaks, there is no change in the emission probabilities. Eq.
gives an upper limit for the propagated uncertainty. Whereas the convolution of a Gaussian with three left-handed exponentials is very successful in satisfactorily reproducing most high-resolution alpha-particle spectra, a more elaborate model is needed to fit the most demanding spectra with extremely good counting statistics. A line shape model was proposed that expands the number of left-handed exponentials and also incorporates a number of right-handed exponentials, which allows obtaining a smoother function, better reproducing changes of slope in the tailing, and incorporating spectral broadening at the high-energy side. A line model with up to 10 left-handed and 4 right-handed exponentials was implemented as a function in the spreadsheet application BEST. It uses the functionality of a spreadsheet to perform the search for optimum fit parameters, to select which parameters to keep fixed or to define a relationship between a set of parameters, to store the spectral data and all specifics of the fit together in one file, and to plot and export the results. Applications include the free fit of individual peaks to determine alpha emission probabilities, or of complete radionuclide emission spectra to determine activity ratios in a mixed sample. The algorithm outperformed existing software at fitting high-resolution 240Pu and 236U spectra with high count numbers. Its applicability extends to thick alpha sources, of which the spectrum almost resembles a step function. Further extensions are possible in which different functional shapes are combined, e.g. for application with mixed spectra of mono-energetic electrons and x-rays.
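The basic building block of such line-shape models, a Gaussian convolved with a single left-handed (low-energy tailing) exponential, has a closed form involving the complementary error function. The sketch below shows that building block and a peak built as a weighted mixture of such terms; the parameter values and weights are illustrative, not those of the BEST implementation.

```python
import math

def emg_left(x, mu, sigma, tau):
    """Unit-area Gaussian (mean mu, width sigma) convolved with a
    left-handed exponential of decay length tau (tail at low energy)."""
    arg = (x - mu) / tau + sigma**2 / (2.0 * tau**2)
    z = (x - mu) / (math.sqrt(2.0) * sigma) + sigma / (math.sqrt(2.0) * tau)
    return math.exp(arg) * math.erfc(z) / (2.0 * tau)

def peak(x, mu, sigma, weights_taus):
    """Alpha peak as a weighted sum of left-handed EMG terms; if the
    weights sum to 1, the peak area stays normalised to unity."""
    return sum(w * emg_left(x, mu, sigma, t) for w, t in weights_taus)
```

Because each term integrates to one, the fitted peak area can be read directly from an overall amplitude factor, which is what makes this form convenient for determining emission probabilities.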
Peak overlap is a recurrent issue in alpha-particle spectrometry, not only in routine analyses but also in the high-resolution spectra from which reference values for alpha emission probabilities are derived.In this work, improved peak shape formulae are presented for the deconvolution of alpha-particle spectra.They have been implemented as fit functions in a spreadsheet application and optimum fit parameters were searched with built-in optimisation routines.Deconvolution results are shown for a few challenging spectra with high statistical precision.The algorithm outperforms the best available routines for high-resolution spectrometry, which may facilitate a more reliable determination of alpha emission probabilities in the future.It is also applicable to alpha spectra with inferior energy resolution.
also noted that these bacteria inhibit the growth of other microflora.By virtue of this inhibitory property, Kocuria has been successfully used to control Aeromonas salmonicida infection in rainbow trout.Therefore, this bacterium is a potential candidate constituent for future probiotic ingredients.Kocuria has already been applied in the control of V. anguillarum infections in eels and V. arthritis in rainbow trout.Morphologically, bacteria isolated from internal organs of diseased fish were similar to previously described saprophytic forms.Neither the biochemical properties of Kocuria rhizophila nor the ones of Micrococcus luteus show any significant differences from those of the strains isolated by other authors.The Vitek 2 system and API 20 Staph correctly identified bacteria, even to the species level, as Micrococcus luteus.Our observations suggest that the isolation of Kocuria rhizophila requires a longer incubation period than other bacteria most frequently isolated from fish, and only the time longer than 48 h is sufficient.The same finding was made previously by Savini et al.Sequencing was performed because of the frequent problem of biochemical misidentification of bacterial isolates collected from fish.Genome sequencing was also carried out to study the evolutionary relationships of the taxa and to find the possible source of fish infection.Available data in GenBank showed no strains of Kocuria rhizophila or Micrococcus luteus isolated from diseased fish.Polish isolates of Kocuria rhizophila form a separate cluster, and are very similar to strains isolated from a food processing environment in Denmark.Micrococcus luteus was classified very close to the strains collected from scallops in Canada.Our study fills in the gap in the molecular database on Kocuria rhizophila and Micrococcus luteus strains obtained from moribund fish and supplements the knowledge concerning its pathogenic properties for salmonids.The results of our investigation are indispensable 
for precise monitoring and control of Kocuria rhizophila and Micrococcus luteus infections in farmed fish in the future. Due to diagnostic difficulties and the lack of knowledge about the influence of these microorganisms on fish health, some outbreaks could be misidentified. In human medicine, outbreaks of disease caused by Kocuria species are still underestimated. The drug resistance of Kocuria rhizophila and Micrococcus luteus and methods of their treatment have been poorly investigated up to now. According to the available literature, there are no specified interpretive criteria for Kocuria rhizophila or Micrococcus luteus, especially for clinical purposes. Szczerba presented some data concerning the antimicrobial resistance of Kocuria, Micrococcus, Nesterenkonia, Kytococcus and Dermacoccus, but without species specifications. The author showed that these bacteria were resistant to erythromycin, which was contrary to our results. However, his results concerning bacterial susceptibility to doxycycline and amoxicillin/clavulanate wholly concur with ours. The roles of Kocuria and Micrococcus species in fish pathology are still uncertain and should be investigated further. Although the present studies showed pathogenic properties of these microorganisms to trout, further supplementary exploration is necessary to fill in this knowledge gap. This research was supported by the KNOW Scientific Consortium "Healthy Animal — Safe Food", Ministry of Science and Higher Education resolution no. 05-1/KNOW2/2015.
In 2014 and 2016, a few disease outbreaks caused by Kocuria rhizophila and Micrococcus luteus were diagnosed in rainbow trout and brown trout in Poland. In each of these events, abnormal mortality (approximately 50%) was accompanied by pathological changes in external tissues and internal organs. In the majority of cases uniform growth of bacterial colonies was observed, and sometimes these bacteria appeared in predominant numbers. The bacterial identifications were performed using standard kits (API 20 Staph and Vitek 2). Sequencing was carried out to improve on the biochemical identification of the isolated bacteria and to establish the evolutionary relationships of their taxa. It was also conducted in order to find the possible source of the fish infection. Comparison of our strains' molecular structures with the data available in GenBank showed that Kocuria rhizophila and Micrococcus luteus had never been isolated from diseased fish before, and that our isolates were very similar to strains which had been isolated from food processing environments (in the case of Kocuria rhizophila) and from scallops (Micrococcus luteus). The challenge tests performed with our strains of Kocuria and Micrococcus on rainbow trout in laboratory aquaria confirmed the three Koch postulates. Antibacterial disc diffusion studies showed that Kocuria rhizophila and Micrococcus luteus are sensitive to most of the drugs tested. The results of these studies show that control of outbreaks of disease in rainbow trout or brown trout seems realistic if caused by Kocuria rhizophila or Micrococcus luteus.
Many machine learning image classifiers are vulnerable to adversarial attacks, inputs with perturbations designed to intentionally trigger misclassification.Current adversarial methods directly alter pixel colors and evaluate against pixel norm-balls: pixel perturbations smaller than a specified magnitude, according to a measurement norm.This evaluation, however, has limited practical utility since perturbations in the pixel space do not correspond to underlying real-world phenomena of image formation that lead to them and has no security motivation attached.Pixels in natural images are measurements of light that has interacted with the geometry of a physical scene.As such, we propose a novel evaluation measure, parametric norm-balls, by directly perturbing physical parameters that underly image formation.One enabling contribution we present is a physically-based differentiable renderer that allows us to propagate pixel gradients to the parametric space of lighting and geometry.Our approach enables physically-based adversarial attacks, and our differentiable renderer leverages models from the interactive rendering literature to balance the performance and accuracy trade-offs necessary for a memory-efficient and scalable adversarial data augmentation workflow.
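For contrast with the parametric norm-balls proposed in the abstract, a conventional pixel norm-ball attack (one-step FGSM under an L-infinity bound) can be sketched in a few lines. The toy logistic "classifier" below stands in for a real network purely so that the gradient is analytic; it is not the paper's method.

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """One-step FGSM: perturb image x within an L-infinity pixel
    norm-ball of radius eps to increase the cross-entropy loss of a
    logistic model p = sigmoid(w @ x + b) against label y."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted probability
    grad = (p - y) * w                        # d(cross-entropy)/dx
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)
```

By construction, every pixel moves by at most `eps`, which is exactly the pixel norm-ball constraint that the parametric norm-balls of this work generalise to lighting and geometry parameters.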
Enabled by a novel differentiable renderer, we propose a new metric that has real-world implications for evaluating adversarial machine learning algorithms, resolving the lack of realism of the existing metric based on pixel norms.
100 ml of SKY4364 were grown to an OD600 of 0.5 in YPD media.Cells were split into two flasks, one untreated and one which was subjected to 0.02% MMS for 3 h. Cells were harvested and HIS6-tagged Ssa1 along with the associated interactome was isolated as follows: Protein was extracted via bead beating in 500 µl Binding/Wash Buffer.200 µg of protein extract was incubated with 50 µl His-Tag Dynabeads at 4 °C for 15 min.Dynabeads were collected by magnet then washed 5 times with 500 µl Binding/Wash buffer.After final wash, buffer was aspirated and beads were incubated with 100 µl Elution buffer for 20 min, then beads were collected via magnet.The supernatant containing purified HIS6-Ssa1 was transferred to a fresh tube, 25 µl of 5× SDS-PAGE sample buffer was added and the sample was denatured by boiling for 5 min at 95 °C.10 µl of sample was analyzed by SDS-PAGE.To isolate HIS6-tagged Hsp82, SKY4635 expressing HIS6-Hsp82 as the sole Hsp90 isoform in the cell were grown and processed identically to the SKY4364 cells as above.Gel lanes to be analyzed were excised from 4% to 12% MOPS buffer SDS-PAGE gels by sterile razor blade and divided into 8 sections with the following molecular weight ranges: 300–150 kDa, 150–110 kDa, 110–80 kDa, 80–75 kDa, 75–60 kDa, 60–52 kDa, 52–38 kDa and 38–24 kDa.These were then chopped into ~1 mm3 pieces.Each section was washed in dH2O and destained using 100 mM NH4HCO3 pH 7.5 in 50% acetonitrile.A reduction step was performed by addition of 100 μl 50 mM NH4HCO3 pH 7.5 and 10 μl of 10 mM Trisphosphine–HCl at 37 °C for 30 min.The proteins were alkylated by adding 100 μl of 50 mM iodoacetamide and allowed to react in the dark at 20 °C for 30 min.Gel sections were washed in water, then acetonitrile, and vacuum dried.Trypsin digestion was carried out overnight at 37 °C with 1:50 enzyme–protein ratio of sequencing grade-modified trypsin in 50 mM NH4HCO3 pH 7.5, and 20 mM CaCl2.Peptides were extracted with 5% formic acid and vacuum dried.Peptide 
digests were reconstituted with 60 µl of Tris–HCl buffer solution, then split into two vials with 30 µl each and vacuum dried.In a separate vial, 30 µl of Mag-Trypsin beads was washed 5 times with 500 µl of Tris–HCl buffer solution, then vacuum dried.30 µl of either 16O H2O or 97% 18O H2O was added to the respective 16O or 18O vials and vortexed for 20 min to reconstitute the peptide mixture, which was then added to the prepared Mag-Trypsin bead vial and allowed to exchange overnight at 37 °C.After 18O exchange, the solution was removed and any free trypsin in solution was inactivated with 1 mM PMSF for 30 min at 4 °C.For each sample the +/−MMS digests were combined 1:1 as follows: Forward Sample Set: 16O: 18O and Reversed Sample Set: 16O: 18O, dried and stored at −80 °C until analysis.Three biological replicate experiments were performed per sample.All samples were re-suspended in Burdick & Jackson HPLC-grade water containing 0.2% formic acid, 0.1% TFA, and 0.002% Zwittergent 3–16.The peptide samples were loaded to a 0.25 μl C8 OptiPak trapping cartridge custom-packed with Michrom Magic C8, washed, then switched in-line with a 20 cm by 75 μm C18 packed spray tip nano column packed with Michrom Magic C18AQ, for a 2-step gradient.Mobile phase A was water/acetonitrile/formic acid and mobile phase B was acetonitrile/isopropanol/water/formic acid.Using a flow rate of 350 nl/min, a 90 min, 2-step LC gradient was run from 5% B to 50% B in 60 min, followed by 50–95% B over the next 10 min, hold 10 min at 95% B, back to starting conditions and re-equilibrated.The samples were analyzed via electrospray tandem mass spectrometry on a Thermo LTQ Orbitrap XL, using a 60,000 RP survey scan, m/z 375–1950, with lockmasses, followed by 10 LTQ CAD scans on doubly and triply charged-only precursors between 375 Da and 1500 Da.Ions selected for MS/MS were placed on an exclusion list for 60 s.Data were analyzed and filtered on MaxQuant version 1.2.2 with a FDR setting of 1% against the 
SPROT Yeast database and at a cutoff of at least 2 peptides seen to assign a quantitation ratio. The exact MaxQuant settings used can be found in the attached document. Each experiment was normalized to the ratio of the bait protein, i.e. SSA1 files using the SSA1 ratio and HSP82 files using the HSP82 ratio. This produced a list of interactors and their respective quantitated changes upon DNA damage. Proteins were removed from the file if they were labeled as "Contaminants", "Reverse" or "Only identified by site". Three biological replicates were performed, with each biological replicate split into technical replicates (16O forward labeling and 18O reverse labeling). A protein was considered identified if detected in at least three of the six replicates. Statistical analysis was performed using the R statistical package. Proteins with three out of six observations within each group were retained. Missing values were imputed using row mean imputation. Z-score normalization was performed on the log of all protein ratios. An ANOVA test was then performed to identify proteins that showed significant variability between biological replicates within each group; these were removed from consideration. The full data obtained were uploaded to the PRIDE repository and can be found under reference number PXD001284.
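The filtering, imputation, and normalisation steps described above can be sketched as follows. The original analysis was done in R; this Python sketch is only illustrative, and the order of operations (impute on the log scale, then a global z-score) is an assumption where the text is ambiguous.

```python
import numpy as np

def preprocess(ratios):
    """ratios: proteins x replicates array of 18O/16O ratios, NaN = missing.
    Keep proteins observed in >= 3 replicates, impute missing values with
    the protein's (row) mean log-ratio, then z-score all log-ratios."""
    keep = np.sum(~np.isnan(ratios), axis=1) >= 3
    logr = np.log(ratios[keep])
    row_mean = np.nanmean(logr, axis=1, keepdims=True)  # per-protein mean
    logr = np.where(np.isnan(logr), row_mean, logr)     # row-mean imputation
    return (logr - logr.mean()) / logr.std()            # global z-score
```

Proteins below the three-observation threshold are dropped before imputation, mirroring the "three out of six observations" retention rule stated above.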
The molecular chaperones Hsp70 and Hsp90 participate in many important cellular processes, including how cells respond to DNA damage.Here we show the results of applied quantitative affinity-purification mass spectrometry (AP-MS) proteomics to understand the protein network through which Hsp70 and Hsp90 exert their effects on the DNA damage response (DDR).We characterized the interactomes of the yeast Hsp70 isoform Ssa1 and Hsp90 isoform Hsp82 before and after exposure to methyl methanesulfonate.We identified 256 chaperone interactors, 146 of which are novel.Although the majority of chaperone interaction remained constant under DNA damage, 5 proteins (Coq5, Ast1, Cys3, Ydr210c and Rnr4) increased in interaction with Ssa1 and/or Hsp82.This data presented here are related to [1] (Truman et al., in press).The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium http://proteomecentral.proteomexchange.org) via the PRIDE partner repository (Vizcaino et al.(2013) [2]) with the dataset identifier PXD001284.
Hypohidrotic ectodermal dysplasia is a well-characterized human disease affecting the morphology and number of skin appendages, principally the hair follicles, teeth, and exocrine glands.HED, the most common of the ectodermal dysplasias, is caused by mutations to the ectodysplasin signaling pathway, which is essential for in utero development of ectoderm-derived appendages.The main axis of the pathway comprises the ligand ectodysplasin A, ectodysplasin A receptor, and the adaptor molecule EDAR-associated protein with a death domain.Mutations in any of these pathway components leads to human HED, which is phenocopied in mice.Mutations in the X-linked EDA gene underlie most ectoderm dysplasia cases.EDA, a member of the tumor necrosis factor family of signaling molecules, exists in two highly homologous isoforms, EDA1 and EDA2.EDA1 is specific for the type I transmembrane protein EDAR, whereas EDA2 is specific for the type III, X-linked transmembrane receptor.Mutations to EDA2 do not result in XLHED; however, this ligand is thought to play a role in hair loss during adulthood.To invoke EDAR signaling, EDA ligands are shed from the cell surface before receptor binding.Receptor activation initiates association with the C-terminal death domain of EDAR-associated protein with a death domain, which creates a complex capable of interacting with tumor necrosis factor receptor-associated factors.Activated tumor necrosis factor receptor-associated factor molecules interact with IκB kinase releasing NF-κB family members from their cystolic inhibitors to enter the nucleus and initiate transcription of target genes.In line with the phenotype of XLHED patients, EDAR pathway activation has primarily been linked to the window when appendages develop in utero.In mice, Edar mRNA is expressed from E14 in the developing epidermal basal layer, localized to preappendage placodes.The resultant EDAR protein remains localized to the placode into the final postnatal stages of HF 
development.In contrast, few studies have explored potential roles for EDAR signaling in adult tissue.Kowalczyk-Quintas et al. recently showed that Edar is expressed within the sebaceous glands of adult mice, and Inamatsu et al. reported Edar expression in the epidermal cells surrounding the dermal papilla.Moreover, Fessing et al. described EDAR expression in the secondary hair germ of telogen HFs, proposing that EDAR signaling is important for adult hair cycle regulation, particularly control of catagen onset through the up-regulation of X-linked inhibitor of apoptosis.Hair cycling and wound healing are both examples of when major morphogenic changes occur in adult skin, a tissue that is normally under strict homeostatic control.To achieve this, numerous “developmental” signaling pathways are “reused” in the adult tissue.Recently, we demonstrated a novel link between HC and the speed of adult skin healing, with a near doubling of healing efficiency in skin containing anagen HC stage follicles.This led us to hypothesize an as yet unidentified role for the EDAR signaling pathway in adult skin wound healing.This hypothesis is supported by a case study from Barnett et al. 
describing poor skin graft healing in an XLHED patient.Here we provide functional demonstration that EDAR signaling plays an important role in adult skin wound healing.Specifically, mice lacking the ligand EDA displayed reduced healing ability, whereas EDAR signaling augmentation promoted healing, not only in Tabby but also in wild-type mice.EDAR signaling manipulation altered multiple aspects of healing, including peri-wound proliferation, epidermal migration, and collagen deposition.Finally, we show that EDAR stimulation is able to promote human skin healing and is thus an attractive target for future therapeutic manipulation.Eda null mice exhibit delayed wound healing, which can be restored by acute pathway activation.First, we proposed that a role for Edar signaling during wound healing would likely be reflected in wound edge induction.Thus, we analyzed Edar expression by immunofluorescence in both unwounded and wounded skin.We noted immunoreactivity with an anti-Edar antibody in the epidermis of unwounded skin, which appeared expanded in the peri-wound interfollicular epidermis of 24- and 72- hour wounds.To test the hypothesis that EDAR signaling was necessary for timely healing, we first examined the rate of wound repair in Eda null mice.We report significantly delayed excisional wound healing in the absence of EDA.Tabby wounds were larger than those in WT both macroscopically and microscopically, quantified by an increased wound width and delayed rate of re-epithelialization.To confirm that this healing delay was due to EDAR signaling deficiency and not phenotypic differences in Tabby skin, we also performed in utero correction of the Tabby phenotype using the validated EDAR-activating antibody mAbEDAR1.Healing in adult mAbEDAR1-rescued Tabby mice remained delayed and indistinguishable from nonrescued Tabby mice.Thus, developmentally specified structural changes in Tabby skin are unlikely to contribute to the observed adult wound healing phenotype.Finally, 
we explored the effect of locally activating Edar signaling in adult Tabby mouse wounds.Here, mAbEDAR1 administered directly to the wound site 24 hours before injury entirely rescued the healing delay in Tabby mice.Specifically, the rate of re-epithelialization is increased, and wound width is significantly decreased compared to Tabby, generating a healing phenotype more in line with WT wounds.In line with previous delayed healing murine models, including the HF-deficient tail model, we observed extended epidermal activation in Tabby wounds.Local administration of mAbEDAR1 fully rescued this phenotype, restoring normal peri-wound IFE expression of keratin 6.Induction of wound edge epithelial proliferation is a key aspect of HC-modulated healing.Peri-wound epithelial proliferation, measured by BrdU incorporation assay, was significantly decreased in Tabby mice compared to WT in both IFE and HF.Activation of EDAR signaling accelerates healing in WT mice.To further explore the therapeutic potential of EDAR signaling activation, we next administered mAbEDAR1 locally to the wound site 24 hours before wounding in WT mice.WT mAbEDAR1-treated wounds displayed
The highly conserved ectodysplasin A (EDA)/EDA receptor signaling pathway is critical during development for the formation of skin appendages.Mutations in genes encoding components of the EDA pathway disrupt normal appendage development, leading to the human disorder hypohidrotic ectodermal dysplasia.Spontaneous mutations in the murine Eda (Tabby) phenocopy human X-linked hypohidrotic ectodermal dysplasia.Finally, we show that the healing promoting effects of EDA receptor activation are conserved in human skin repair.Thus, targeted manipulation of the EDA/EDA receptor pathway has clear therapeutic potential for the future treatment of human pathological wound healing.

Dataset Card for "sci_summ"

