High-resolution detectors are used in relative dosimetry, such as dose profile measurements, to correctly characterize penumbra regions with steep dose gradients. Due to the increasing use of small fields in state-of-the-art radiation therapy, high-resolution detectors are also employed to obtain small-field output factors. Compact vented ionization chambers have the advantage that the energy dependence of their dose response is weaker compared to silicon diodes. However, the signal of an ionization chamber is affected by ion recombination and the polarity effect, where the latter becomes more dominant with decreasing sensitive volume. Furthermore, even with their compact designs, the volume effect may still cause significant signal perturbation, partly caused by the low-density air cavity. Nevertheless, compact vented ionization chambers remain an integral component in relative dosimetry. These chambers can be calibrated in a standards laboratory to be used in reference dosimetry. Such reference-type dosimeters should fulfil the requirements laid out by dosimetry protocols. The compact ionization chamber investigated in this work replaces the existing PinPoint 31014. The new chamber is recommended by the manufacturer for field sizes down to 2 cm × 2 cm. The present work is based on a previous publication by Delfs et al., in which a detailed dosimetric characterization of a 3D chamber was performed. The same methodology has been applied here to determine the dosimetric properties of the novel PTW 31023 chamber with a new inner electrode and guard ring design. The properties of the new chamber are compared to those of its predecessor. The effective point of measurement (EPOM) was determined experimentally. The lateral dose response functions were characterized using a narrow-beam geometry. The saturation correction factors were determined at different dose-per-pulse values. The polarity correction factors were measured according to the DIN protocol 6800-2 at two nominal photon energies and field sizes from 5 cm × 5 cm to 40 cm × 40 cm. The beam quality correction factors kQ for reference conditions were simulated for three different chamber models with varying central electrode diameters. Furthermore, the non-reference condition correction factors, kNR, which account for the difference in detector response due to spectral changes between the measurement conditions and reference conditions, were studied. Small-field output correction factors for this chamber, determined according to TRS 483, have been published recently. Unless stated otherwise, all measurements were performed at a Siemens Primus linear accelerator in an MP3-M water phantom. The air cavity of the PTW 31023 ionization chamber has a length of 5 mm and a diameter of 2 mm. The diameter of the aluminium central electrode has been increased from 0.3 mm to 0.6 mm. The resulting sensitive volume of the chamber is 0.015 cm³. The chamber wall consists of a 0.09 mm graphite shell and a 0.57 mm PMMA outer wall. The outer dimensions of the PTW 31023 ionization chamber remain unchanged with respect to the PTW 31014 chamber. A schematic cross-section is shown in Fig. 1. The EPOM of the PTW 31023 was determined experimentally. Measurements were performed using 6 and 10 MV photon beams. The detector was positioned radially and axially. In the first step, reference percentage depth dose (PDD) curves were obtained using a Roos chamber, for which the EPOM has been reported to be located at Δz = +0.4 mm ± 0.1 mm below its reference point.
"The measurement depth, zM, is therefore given by zB + Δz, where zB is the depth of the detector's reference point and Δz is the shift of the EPOM from the reference point.A positive value of Δz indicates that the EPOM is located downstream of the reference point and vice versa."The reference point of the PTW 31023 chamber is located on the chamber's symmetry axis 3.4 mm below the chamber tip as shown in Fig. 1.Initially, the chamber was positioned with zM equal to zB.The thereby obtained PDD curves were then compared to the reference PDD curve obtained with the Roos chamber.By shifting the PDD curves of the PTW 31023 chamber against the reference curve, the values of Δz were determined by minimizing the square of the relative difference between them.All measurements were performed with a bias voltage of 200 V as recommended by the manufacturer, using a field size of 10 cm × 10 cm and a source-to-surface distance of 100 cm.The measurements were repeated using positive and negative polarity.Three sets of measurements were acquired, where the chamber was removed from the holder and repositioned between the repetitions.The polarity corrected PDD curves were obtained by taking the mean of the values obtained using both polarities."The lateral dose response functions of the PTW 31023 chamber, i.e. the σ values in Eq., were determined for three different chamber orientations: axial; radial lateral; radial longitudinal.Measurements were performed at an Elekta Synergy linac using 6 and 15 MV photon beams at SSD of 100 cm.The signal profiles, M, of a 1 cm × 40 cm field were scanned along its short side in 5 cm water depth.All measurements were performed using a bias voltage of 200 V and repeated using both positive and negative polarity.The dose profiles, D, were obtained using a microDiamond detector in axial chamber orientation, which has been shown to be a good approximation of the dose profile even at regions with steep dose gradient ."The microDiamond's profiles were then convolved with a normalized one-dimensional Gaussian distribution according to Eq.The σ-value was varied to minimize the difference between the convolution product and the measured signal profiles M.The beam quality correction factors kQ were simulated using
This chamber replaces the previous model (PTW 31014); the diameter of the central electrode has been increased from 0.3 to 0.6 mm and the guard ring has been redesigned. The shifts of the effective point of measurement (EPOM) from the chamber's reference point were determined by comparison of the measured PDD with the reference curve obtained with a Roos chamber. The polarity effect correction factors kP were measured for field sizes from 5 cm × 5 cm to 40 cm × 40 cm.
The beam quality correction factors kQ were simulated using the Monte-Carlo package EGSnrc with the user code egs_chamber. The chambers were modelled according to the blueprints provided by the manufacturer, including the guard rings. To systematically study the influence of the central electrode on the correction factor, three models of the PTW 31023 chamber with different inner electrode diameters were investigated. Additionally, the kQ values of the previous PTW 31014 chamber were simulated for comparison. For measurements under non-reference conditions, the correction factor kNR is applied according to DIN 6800-2 to account for the influence of spectral changes from the reference condition due to the energy dependence of the detector response. The determination of the correction factor kNR for the PTW 31023 was performed using the approach described in Delfs et al. The response function r of the PinPoint 31023 as a function of photon energy was simulated using the EGSnrc package and the egs_chamber user code for monoenergetic photon beams with energies of 20 keV to 15 MeV under the conditions of secondary electron equilibrium. Fig. 2 shows the 6 MV PDD curves obtained with the PTW 31023 chamber positioned with its reference point at the measurement depth, together with the reference PDD curve obtained with the Roos chamber positioned with its EPOM at the measurement depth, for the radial and axial orientations. Deviations between the measurements using positive and negative polarity can be observed in the build-up region for the radial orientation, whereas no difference is observed for the axial orientation. The polarity-corrected curves are shown as green lines. To determine the Δz values, the polarity-corrected PDD curves of the PTW 31023 were shifted to the left by minimizing the difference to the reference curve. The resulting Δz values are −0.55 mm and −0.56 mm in the radial orientation and −0.97 mm and −0.91 mm in the axial orientation. All values of Δz have an uncertainty of 0.1 mm. Generally, the EPOM in both the radial and axial orientations is always shifted upstream from the chamber's reference point, i.e. Δz is always negative. The lateral signal profiles of a 15 MV photon beam along the narrow side of a 1 cm × 40 cm field, measured with the PTW 31023 chamber in all three chamber orientations, are shown in Fig. 3. Discrepancies between the measurements using positive and negative polarity can be discerned at the field borders in the axial and the radial lateral chamber orientations. The polarity-corrected profiles are shown as green lines. The signal profiles obtained with the microDiamond detector, which were used as the reference dose profiles, were then convolved with a 1D Gaussian function according to Eq., as shown in the right panels. The optimal σ values for a 6 MV photon beam are found to be 0.80 mm, 0.75 mm and 1.76 mm for the axial, radial lateral and radial longitudinal orientations, respectively. A small energy dependence can be observed, whereby the σ values for the 15 MV photon beam are greater. All σ values have an uncertainty of 0.05 mm. Fig. 4 shows the Jaffe plots at the highest DPP used in this study for the PTW 31014 chamber and the PTW 31023 chamber. The new PTW 31023 chamber is less subject to the polarity effect. It is also noteworthy that the measured values using positive polarity are greater than those using negative polarity for the PTW 31023 chamber, whereas the PTW 31014 shows the reverse behaviour. For both chambers, a linear regression was performed only for voltages between 50 V and 200 V.
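The Jaffe-plot evaluation just described, a linear regression of 1/M on 1/U over the 50–200 V range, can be sketched as follows. The saturation reading is the extrapolation to 1/U → 0, giving kS = Msat/M(Uop); the readings below are hypothetical, and only the procedure, not the numbers, reflects the study.

```python
# Sketch of a Jaffe-plot evaluation: regress 1/M linearly on 1/U over the
# 50-200 V range and extrapolate to 1/U -> 0 for the saturation reading.
# The voltages follow the text; the readings are hypothetical.
import numpy as np

voltages = np.array([50.0, 100.0, 150.0, 200.0])        # bias voltages, V
readings = np.array([0.9952, 0.9976, 0.9984, 0.9988])   # hypothetical signals M

slope, intercept = np.polyfit(1.0 / voltages, 1.0 / readings, 1)
m_sat = 1.0 / intercept                   # extrapolated reading at 1/U -> 0
u_op = 200.0                              # operating voltage from the text
m_op = 1.0 / (intercept + slope / u_op)   # reading predicted at 200 V
ks = m_sat / m_op                         # saturation correction at 200 V
print(f"ks = {ks:.4f}")                   # slightly above 1, as expected
```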
Fig. 5 shows the kS values determined from the polarity-corrected values for both chambers at four DPP values. The γ and δ values obtained using linear regression according to Eq. are also presented for both chambers. Generally, the new PTW 31023 chamber shows improved saturation behaviour, where the ion recombination effect is less prominent than for the PTW 31014 chamber. Fig. 6 shows the polarity effect correction factor kP for both the PTW 31014 and the PTW 31023 chamber at field sizes from 5 cm × 5 cm to 40 cm × 40 cm. The values of kP decrease with increasing field size for the PTW 31014 chamber, while the opposite was observed for the new PTW 31023 chamber. Furthermore, kP for the PTW 31023 chamber also shows a weaker energy dependence. Under the reference conditions as defined in DIN 6800-2, the kP of the PTW 31014 chamber is 1.0094 ± 0.0020 and 1.0116 ± 0.0020 for 6 and 10 MV respectively, while the kP of the PTW 31023 chamber is 1.0005 ± 0.0020 and 1.0013 ± 0.0020 for 6 and 10 MV respectively. Fig. 8 shows the correction factors kNR for the PTW 31023 chamber as a function of mean photon energy for a 6 MV photon beam. The range of mean energies was obtained by varying the field size from 1 cm × 1 cm to 30 cm × 30 cm and the measurement depth from 2 cm to 30 cm, where kNR is unity for the reference condition. Over the whole range of investigated mean energies, the PTW 31023 chamber shows only a very small energy dependence, with corrections amounting to less than 1%. The EPOM of the PTW 31023 chamber, when irradiated radially, is found to lie 0.55 mm ± 0.10 mm and 0.56 mm ± 0.10 mm towards the source from its reference point. Since the radius of the air cavity of the chamber measures 1 mm, the shifts correspond to 0.55r and 0.56r respectively, which lie between the general value of 0.5r recommended in DIN 6800-2 and the 0.6r recommended by IAEA TRS-398.
Results: The shifts of the EPOM from the reference point, Δz, are found to be −0.55 mm (6 MV) and −0.56 mm (10 MV) in the radial orientation and −0.97 mm (6 MV) and −0.91 mm (10 MV) in the axial orientation. All values of Δz have an uncertainty of 0.1 mm. The σ values are 0.80 mm (axial), 0.75 mm (radial lateral) and 1.76 mm (radial longitudinal) for the 6 MV photon beam, and 0.85 mm (axial), 0.75 mm (radial lateral) and 1.82 mm (radial longitudinal) for the 15 MV photon beam. All σ values have an uncertainty of 0.05 mm. Under reference conditions, the polarity effect correction factor kP of the PTW 31014 chamber is 1.0094 and 1.0116 for 6 and 10 MV respectively, while the kP of the PTW 31023 chamber is 1.0005 and 1.0013 for 6 and 10 MV respectively; all values have an uncertainty of 0.002. The correction factor kS of the new chamber is 0.1% smaller than that of the PTW 31014 at the highest DPP investigated.
No recommendation is given in DIN 6800-2 for measurements in the axial orientation. However, due to the construction of this chamber, i.e. the length of its air cavity being 2.5 times greater than its diameter, it should be positioned axially for relative profile measurements, as recommended by the recent TRS 483, for which the EPOM has been determined in this work. Furthermore, as shown by the PDD curves in Fig. 2, the chamber is subject to a polarity effect in the build-up region due to the absence of secondary electron equilibrium. Similar behaviour has been reported by other authors. The required polarity effect correction in the build-up region amounts to up to 5% when the chamber is positioned radially, where the measured signal at positive polarity, i.e. when positive charge is collected, is higher than at negative polarity. The effect decreases, however, with increasing depth as secondary electron equilibrium is established. Interestingly, the polarity effect is almost negligible when the chamber is positioned axially. Compared to the radial orientation, where the cable is located at almost the same depth as the chamber itself, the cable in the axial orientation extends along the beam's axis, i.e. towards greater depths where secondary electron equilibrium has been established. Since cable irradiation was identified as a main source contributing to the polarity effect, the PDD measurement will be subject to a smaller polarity effect when the chamber is positioned axially. The lateral dose response functions K of the PTW 31023 chamber can be approximated by a 1D Gaussian function, for which the σ values have been determined for 6 and 15 MV in three chamber orientations. A small energy dependence of the σ values was observed, where the value increases slightly with beam energy. Compared to the Semiflex 3D ionization chamber, with σ31021 = 2.1 mm ± 0.05 mm for all chamber orientations, the values of the PTW 31023 are smaller, representing a smaller volume effect. According to the convolution model, the undistorted dose profile D can be computed from M by deconvolution using the knowledge of K. It is also noteworthy that the chamber is subject to a stronger polarity effect at the field borders of small fields, such as the 1 cm wide narrow beam shown in Fig. 3. Analogous to the lack of longitudinal secondary electron equilibrium in the build-up region during PDD measurements, it is the lack of lateral secondary electron equilibrium at the field borders that contributes to the observed polarity effect. Therefore, the polarity effect should be accounted for when the chamber is used for profile measurements at small field sizes. The Jaffe plots in Fig. 4 demonstrate that the detector readings deviate from the linear behaviour between 1/M and 1/U at the higher voltages used in this study. Therefore, a linear regression was performed only for detector readings between 50 V and 200 V to obtain the correction factors kS in this study. This range includes the operating voltage of 200 V recommended for the PTW 31023 chamber by the manufacturer.
The chamber-specific parameters γ and δ according to Eq. have been determined for both the PTW 31014 and 31023 chambers, where the new PTW 31023 chamber exhibits improved saturation behaviour compared to the PTW 31014 chamber. Furthermore, the observed stronger voltage-dependent polarity effect of the PTW 31014 chamber can be attributed to the different work functions of the materials of the guard ring and the inner collecting electrode, which cause a small potential difference between them. Under reference conditions, the polarity effect corrections according to DIN 6800-2 for the PTW 31023 chamber amount to 0.05% and 0.13% for 6 and 10 MV photon beams respectively, whereas the corrections for the PTW 31014 chamber are around 1% higher under the same conditions. The corresponding polarity effect correction factors calculated according to TG-51, Ppol = (|M+| + |M−|)/(2|M+|), are 0.9989 ± 0.0020 and 0.9996 ± 0.0020 for the PTW 31023 chamber. Therefore, the new chamber fulfils the requirements on the polarity effect set by the TG-51 addendum for reference-class dosimeters, which agrees with the findings of a recent publication. For a 6 MV photon beam, the PTW 31023 chamber shows a small energy dependence for measurements under non-reference conditions along the central axis, where the correction factors kNR vary by less than 1% when the field size is varied from 1 cm × 1 cm to 30 cm × 30 cm and the measurement depth from 2 cm to 30 cm.
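As a quick numerical check of the TG-51 formula quoted above, the readings below are hypothetical values chosen to reproduce the reported 6 MV result of 0.9989; they are not measured data.

```python
# Ppol = (|M+| + |M-|) / (2 |M+|), TG-51; hypothetical readings for illustration.
m_plus, m_minus = 1.0000, -0.9978   # collected charges at the two polarities
p_pol = (abs(m_plus) + abs(m_minus)) / (2 * abs(m_plus))
print(p_pol)  # 0.9989, matching the reported 6 MV value
```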
Introduction: The aim of the present work is to perform a dosimetric characterization of a novel vented PinPoint ionization chamber (PTW 31023, PTW-Freiburg, Germany). Correction factors for reference and non-reference measurement conditions were examined. Materials and methods: Measurements and calculations of the correction factors were performed according to DIN 6800-2. Its lateral dose response functions, which act according to a mathematical convolution model as the convolution kernel transforming the dose profile D(x) into the measured signal M(x), have been approximated by Gaussian functions with standard deviation σ. Additionally, the saturation correction factors kS have been determined using different dose-per-pulse (DPP) values. The influence of the diameter of the central electrode and the new guard ring on the beam quality correction factors kQ was studied by Monte-Carlo simulations. The non-reference condition correction factors kNR have been computed for a 6 MV photon beam by varying the field size and measurement depth. Comparisons on these aspects have been made to the previous model. Results: The correction factor kS was found to be 1.0034 ± 0.0009 for the PTW 31014 chamber and 1.0024 ± 0.0007 for the PTW 31023 chamber at the highest DPP (0.827 mGy) investigated in this study. The kP of the new chamber also exhibits a weaker field size dependence. The kQ values of the PTW 31023 chamber are closer to unity than those of the PTW 31014 chamber due to the thicker central electrode and the new guard ring design. The kNR values of the PTW 31023 chamber for a 6 MV photon beam deviate by no more than 1% from unity for the conditions investigated. Discussion: Correction factors associated with the new chamber that are required to perform reference and relative dose measurements have been determined according to the DIN protocol. Under reference conditions, the correction factor kP of the PTW 31023 chamber is approximately 1% smaller than that of the PTW 31014 chamber for both energies used. The dosimetric characteristics of the new chamber investigated in this work have been demonstrated to fulfil the requirements of the TG-51 addendum for reference-class dosimeters at reference conditions.
change could then introduce tensile or compressive stress in the migrated region, which would fracture the intergranular oxide and decrease its protectiveness. The second is that Cr was depleted in the migrated grain boundary and a protective oxide was difficult to form, resulting in faster intergranular oxidation. The formation of a Ni-rich zone was also observed in the surface oxide and the crack flank oxide. Since both the grain boundary migration ahead of the crack tip and the Ni-rich zone in the surface oxide were caused by selective oxidation, once the Ni-rich zone is observed in the surface oxide, the occurrence of porous intergranular oxide and grain boundary migration ahead of the crack tip can be expected. According to the discussion above, both porous intergranular oxide and grain boundary migration ahead of the crack tip promote SCC. The surface, crack flank and crack tip oxidation have been characterized by high-resolution ATEM. The results are compared and the related oxidation mechanisms are proposed. The implications for SCC are discussed. The main findings are summarized as follows: After 2000 h of exposure, the oxide film formed on 316L SS has a triplex structure: a Cr-rich penetrative oxidation layer, a Cr-rich inner oxide layer, and a Fe-rich outer oxide layer. A further, incomplete outer layer exists, made of Fe-rich discrete oxide particles. The penetrative oxidation layer is formed by the selective oxidation of Cr along fast-diffusion channels. The formation of the inner oxide layer is dominated by the solid-state growth mechanism. The inner-layer oxide is a Cr-rich spinel epitaxial to the matrix, while the outer-layer oxide is amorphous and eventually dissolves into the solution. The formation of the outer surface oxide particles is the result of precipitation of corrosion products, and they have no crystallographic orientation relationship with the matrix. The electrochemical potential in the crack is believed to be similar to that on the sample surface because the microstructure and chemistry of the crack flank oxides are similar to those on the surface. A similar oxidation mechanism is suggested for both cases, although the water chemistry is different, with a higher concentration of dissolved cations in the open environment causing the precipitation of a Fe-rich spinel containing Cr and Ni instead of magnetite. In addition, crack flank oxidation is faster than on the free surface because of the applied stress. Intergranular selective oxidation develops ahead of the crack tip at a rate over 3 orders of magnitude faster than at the surface. The enhanced intergranular oxidation rate is thought to be caused by the higher dislocation density and the applied stress, with further contributions from the formation of porous intergranular oxide and grain boundary migration.
Oxidation and stress corrosion cracking (SCC) of 316L stainless steel were studied in simulated pressurized water reactor primary water. Surface, crack flank, and crack tip oxides were analyzed and compared by high-resolution characterization, including oxidation state mapping. All oxides were found to have a triplex structure, although of different dimensions and composition, revealing the effects of local water chemistry and applied stress. The higher oxidation rate at the crack tip could be explained by the existence of a higher dislocation density, a higher level of stress, and cation unavailability from the environment. The implications for SCC mechanisms are discussed.
on the relationship between transport location and real estate prices. Building on previous research, they demonstrate that the willingness-to-pay for proximity, as capitalized in purchase prices, decreased by 42.5% for dwellings in Athens during the ongoing financial crisis between 2011 and 2013. Clearly, the time of analysis accounts for the impact variation of transport infrastructure, with the sensitivity of housing prices to macroeconomic conditions a strong determinant of the willingness-to-pay for location premiums. Given that the time-line of the present study coincided with a deterioration of the UK's housing market conditions, it is possible that usually strong determinants of housing prices, such as transport interventions, lose their impact, which may account for the lower bounds of our estimates. For this to be the case, however, it would have to be an effect of the crisis that only affected properties closer to upgraded stations, as otherwise the DiD method would abstract from this unobserved heterogeneity. Yet, for Ealing, it is possible the models we employ underestimate the final total effect of the Crossrail intervention. As Ealing is dominantly an owner-occupied property market, anticipation effects linked to Crossrail's announcement may increase over longer time horizons. This is because owner-occupiers have been found to have shorter-run views of the anticipatory effects of policy interventions. Under this intuition, if home-buyers who plan to become owner-occupiers are less receptive to the anticipated effects of future rail interventions, there may be less of an incentive to purchase property in investment areas because "they must commute from day one" – i.e. prior to the opening of the transport innovation anyway. Therefore, the premiums linked to the anticipation of policy interventions may rise in housing markets with a higher ratio of landlords. For Ealing, 28% of the total 124,082 properties are privately rented, with 53% owner-occupied. With this market share reflecting a dominance of owner-occupier home-buyers, it is expected that price adjustment is more likely to occur nearer to the time of Crossrail's opening. In this way, as this study differences property sales between 2002 and 2014, it is probable that our models' estimated premiums understate the transport benefits relative to estimates from property sales pooled for years closer to Crossrail's completion in 2019.
"Therefore, whilst the anticipated benefits of Crossrail was found to have been speculatively internalised into the home-buyer's WTP, the magnitude of these premiums may have been relaxed by sluggish price adjustment to the anticipation of new rail services.Rail transit is a key determinant of land use evolution.Property markets are conduits for the economic impact of transport interventions and so provide a compelling backdrop reflecting these changes.In this paper, we estimate how home-buyers anticipate the benefits of a rail upgrade intervention by considering the area of Ealing in London and the announcement of Crossrail in July 2008.As the Crossrail innovation remains under construction, the intervention we consider is the announcement of the project, rather than its completion.To obtain the most possible accurate estimate of its causal effect on house prices, we use a combination of DiD estimation and spatial econometrics whilst introducing further robustness checks.This approach allows us to isolate the effect of exogenous changes in transport accessibility whilst controlling for spatial effects of property sales and the temporal dimension of the data.In doing so, we explore the anticipatory effect attributed to the implied journey-time savings by estimating the value of service-level improvements to home-buyers who live, or intend to live, in Ealing.Controlling for unobserved spatial effects, our DiD models find for every kilometre a house is closer to a station scheduled for Crossrail upgrades, home-buyers are willing to pay between 2.4% and 2.5% extra, down from the 4% premium estimated by the naive OLS estimator.In support of Gibbons and Machin, we take this as evidence that cross-sectional models overstate the premiums for transport access even when saturated with control variables.Relative to past research, the low magnitude of the coefficient may be linked to two considerations: sluggish price adjustment to the anticipation of the new lines opening; and the intervention was constructed in an area of high transport substitutability – i.e. multiple alternative transportation modes."Irrespective of this, we find the announcement of Crossrail was positively capitalised into Ealing's housing market, with a higher WTP for properties nearer to stations expecting the Crossrail treatment. "This would appear to align with Crossrail's objective to increase residential capital values and impact property investment decisions in London's housing market.Future research might seek to confirm our findings by applying the same quasi-experimental methodology, but by pooling property sales data some time after Crossrail has been completed.
This paper estimates the willingness-to-pay (WTP) for anticipated journey-time savings introduced by the Crossrail intervention in the London Borough of Ealing. Given that Crossrail remains under construction, we estimate how the anticipated benefit of Crossrail's announcement enters the house price determination process. Anticipated journey-time savings should enter the home-buyer's pricing equation because these benefits are speculatively internalised even before the service becomes operational. Using a quasi-experimental method that accounts for the possibility of a spatial autoregressive process in housing values, we test the hypotheses that the announcement of a new commuter rail service generated a location premium, and that house price appreciation reflected proximity to Crossrail terminals.
of listening to speech while in noise themselves. In the former case, participants experienced Lombard speech. In the latter, they did not, and we propose that a Lombard listening mode might only be triggered if the characteristics of the speech signal resemble speech plausibly produced in noise. Our perception experiment had two flaws: we attempted a within-listener manipulation, which failed to produce interpretable results due to the repetition of identical stimuli across conditions; and we attempted to use stimuli that were not plausibly produced in noise. Despite these failures, we found a significant effect of location, suggesting that this line of research is worth pursuing further. Hay and Drager hypothesized that language and location should be strongly associated, such that "changes in environment should cause a shift in which phonetic variants are produced and perceived". We have shown good evidence that this is true for production, and tentative evidence that it is also true for perception. This paper thus adds to the burgeoning literature showing how contextually rich our speech memories are, and how individuals dynamically exploit these contextual memories in a way that significantly impacts their speech production and perception behaviours.
Some locations are probabilistically associated with certain types of speech. Most speech that is encountered in a car, for example, will have Lombard-like characteristics as a result of having been produced in the context of car noise. We examine the hypothesis that the association between cars and Lombard speech will trigger Lombard-like speaking and listening behaviour when a person is physically present in a car, even in the absence of noise. Production and perception tasks were conducted, in noise and in quiet, in both a lab and a parked car. The results show that speech produced in a quiet car resembles speech produced in the context of car noise. Additionally, we find tentative evidence indicating that listeners in a quiet car adjust their vowel boundaries in a manner that suggests that they interpreted the speech as though it were Lombard speech.
that the mixing activity is very low. At a distance 7D downstream of the nozzle exit, all cases show a double peak in the PDFs. The M0 case has a very high probability at a value close to 0.5, followed by the TR case with a high probability at a value close to 0.65. All the other cases have their first peak at a value close to 0.8, with the highest probabilities when the intensity of the forcing by the plasma actuators is large. For all cases, the highest probability is at the second peak, at a value of 1, except for the M0 case, for which the probability at the value 1 is only 0.4. The PDF data generated at the lipline are also very interesting to discuss in relation to the mixing properties of the flow. The most important results are related to the width of the Gaussian-like shape of the PDFs and its peak value. In our set-up, good mixing can be characterized by a low peak value combined with a narrow shape of the probability function, and the same value for the highest probability at the centreline and at the lipline. As hinted by the previous results, the sharpest Gaussian-like shape and the lowest peak are obtained for the M0 case. Furthermore, the M0 case is the only case for which the highest probability value is the same on the centreline and at the lipline at 4.5D and 7D from the nozzle exit, suggesting a very good homogeneity of the scalar field. We can therefore conclude that the pulsating control case shows great potential for mixing enhancement in comparison to the other cases. The effect of four different control solutions based on eight Dielectric Barrier Discharge plasma actuators located just before the nozzle exit of a turbulent jet was investigated with the aim of enhancing the mixing of the jet. One of the controlled cases is based on a pulsating forcing at a frequency corresponding to the jet preferred frequency. Data were compared with two non-controlled cases, one with no perturbation inside the nozzle and one with random tripping to trigger instabilities in the nozzle boundary layer. The first important result is that the plasma actuators are able to strongly modify the flow field downstream of the nozzle at more or less the same flow rate as the non-controlled cases. The effect of the plasma actuators can easily be seen in the ejections of pairs of elongated streamwise vortical structures generated between two plasma actuators. The breakdown to turbulence therefore happens closer to the nozzle exit in comparison to the baseline case, due to the promotion of strong thin elongated structures around the large ring generated at the nozzle of the jet. As a consequence, a reduced length of the potential core and an increase in the size of the jet were observed for the controlled cases in comparison to the baseline case. The reduction in length of the potential core is not present when comparisons are made with the TR case. However, the shape of the potential core is affected by the plasma actuators, with a thinner potential core. This seems to suggest that the shape of the potential core can influence the mixing properties of the jet. It should also be noted that each controlled case produces a different pattern for the radial shape of the jet, which seems to suggest that the intensity of the forcing is an important parameter for control purposes. The passive scalar study revealed that the pulsating controlled case can enhance mixing in comparison to the non-controlled cases, with a more homogeneous scalar field and low levels of scalar fluctuations.
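The PDF-based mixing criterion used above (a low, narrow peak, with matching most-probable values at the centreline and the lipline) can be illustrated with a short sketch. The scalar samples below are synthetic stand-ins, not simulation data.

```python
# Illustration of the scalar-PDF mixing criterion: compute normalized PDFs of
# passive-scalar samples at two probe locations and compare peak heights and
# most-probable values. The sample data here are hypothetical.
import numpy as np

def scalar_pdf(samples, bins=50):
    """Normalized PDF of passive-scalar samples on [0, 1]."""
    pdf, edges = np.histogram(samples, bins=bins, range=(0.0, 1.0), density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, pdf

rng = np.random.default_rng(0)
centreline = rng.normal(0.5, 0.05, 10_000).clip(0, 1)  # hypothetical probe data
lipline = rng.normal(0.5, 0.06, 10_000).clip(0, 1)

for name, data in [("centreline", centreline), ("lipline", lipline)]:
    c, p = scalar_pdf(data)
    print(name, "peak value:", p.max(), "most probable scalar:", c[p.argmax()])
# Well-mixed flow: similar most-probable values and low, narrow peaks at both.
```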
These conclusions were confirmed by PDFs of the scalar field downstream of the lipline, with a sharp Gaussian-like shape and the same low peak value, in comparison to the other cases, on the centreline and at the lipline. Future studies will investigate the influence of the number of plasma actuators inside the nozzle as well as their location with respect to the nozzle exit. Since the pulsating motion was quite successful, various duty cycles will be investigated at various Reynolds numbers. Different excitation modes will be tested, like the first and second helical modes, following the work of Samimy et al., Kim et al. and Samimy et al. for supersonic turbulent jets, as well as further investigation of the jet column mode, in order to find the most effective way to improve the mixing properties of the jet. Finally, the potential of forcing azimuthal modes will be explored in great detail, as they can contribute significantly to triggering instabilities with the potential to affect the mixing and acoustic properties of the jet.
Plasma-controlled turbulent jets are investigated by means of Implicit Large-Eddy Simulations at a Reynolds number equal to 460,000 (based on the diameter of the jet and the centreline velocity at the nozzle exit). Eight Dielectric Barrier Discharge (DBD) plasma actuators located just before the nozzle exit are used as an active control device with the aim of enhancing the mixing of the jet. Visualisations of the different cases and time-averaged statistics for the different controlled cases show strong modifications of the vortex structures downstream of the nozzle exit, with a substantial reduction of the potential core, an increase of the jet radial expansion and an improvement of the mixing properties of the flow.
rats become as efficient as controls at terminating an acute stress response. This competency agrees well with previous results in which BPA-treated males exhibited behavioral coping and corticosterone responses to stress similar to those of untreated animals. The finding that a short-term stress can modify FKBP51 levels despite the BPA-induced methylation changes suggests an enhanced expression plasticity of this regulator, which was also observed in the HT22 cell line deriving from male mouse hippocampus. Our findings in male rats are in good agreement with the results obtained in HT22 cells and demonstrate that this cell line is a valid model to study the mechanisms underlying the effects of BPA exposure on Fkbp5 regulation. In the cell line, BPA exposure during differentiation led to decreased Fkbp5 expression and increased methylation at the corresponding CpGs in intron 5. Of note, these changes were not due to acute effects of BPA, as they were observed 3 days after BPA washout. Similarly to the rat model, BPA's effect on Fkbp5/FKBP51 levels was larger than on Fkbp5 methylation. This is at least partly due to the fact that the detection methods do not have the same sensitivity, which makes comparison of the results difficult. Further, other factors and/or genomic regions not investigated here, but involved in Fkbp5 regulation, might be affected by BPA. Interestingly, dexamethasone treatment, simulating the stressful situation in the rat model, increased Fkbp5 expression to a greater extent in BPA-treated than in untreated cells. The reason for the decreased basal expression of Fkbp5 but increased inducibility of the gene by glucocorticoids, despite an increase in DNA methylation, remains unresolved. One possibility is that methylation changes at the responsive element under study affect mineralocorticoid receptor binding, responsible for mediating the effects of basal glucocorticoid levels, but not GR binding, which follows the increase of hormone levels. Although sharing the DNA recognition sequence, the two receptors are modulated in their activity by diverse co-activators and co-repressors that might be affected differently by the methylation changes. Further studies are necessary to understand the detailed implications of the DNA methylation changes induced by BPA. The ER inhibitor ICI affected Fkbp5 expression and methylation during HT22 differentiation similarly to BPA. Furthermore, ERβ, but not ERα, bound to the differentially methylated region in intron 5, and this binding was disrupted by BPA and ICI. These results imply a function of ERβ in mediating BPA's effects on Fkbp5, which was further supported by the fact that ERβ knock-down abolished the effects of BPA on Fkbp5 expression and DNA methylation.
"Notably, however, the effects found here cannot be ascribed to BPA's estrogenic properties as i) they were observed three days after BPA had been washed out of the culture and ii) BPA did not display agonistic properties in the HT22 cells on an ERE-driven transcription, in contrast to E2.Further, there is an agreement in the literature that BPA does not induce the same conformational changes as the natural ligands when binding to ERs, hence presumably attracting a different set of co-regulatory proteins than in the presence of natural ligand.Accordingly, we propose that BPA affects Fkbp5 transcriptional regulation by interfering with ERβ binding to the regulatory region of intron 5, where ERβ controls DNA methylation, a function of ERβ that we have described previously.A tentative model is depicted in Supplemental material, Fig. S3.We could not see any effect of BPA on ER or GR protein expression in our cell model.Further, ERβ levels were not changed in the hippocampi of the treated rats.However, in mice it was shown that BPA exposure leads to decreased ERβ levels in brain areas other than the hippocampus.Thus in other regions BPA might not affect DNA binding of ERβ but rather its protein levels.Ultimately, however, this will lead to the same result, a lack of ERβ binding to intron 5 of Fkbp5 and thus an increase in DNA methylation.Interestingly, BPA seems to affect ERβ expression in the rodent brain in a sexual dimorphic manner and data not shown).This might explain why we could not detect any methylation changes in female rats at the investigated regions of Nr3c1 and Fkbp5.Sexually dimorphic effects of BPA in brain function have been reported in several previous in vivo studies.Furthermore, the few epidemiological studies linking BPA exposure to neuropsychiatric outcomes in children also show differences between girls and boys.This demonstrates the intricate interaction between BPA and the endogenous sex hormones and consequently the importance to investigate its effects on both sexes.We demonstrate here that perinatal exposure of rats to a low BPA dose alters Fkbp5 expression, methylation pattern and inducibility by stress in the hippocampus of male offspring.The observed alterations in Fkbp5 were also detected in differentiating hippocampal neurons of male origin.In the cell model, the mechanism implicates ERβ in the regulation of the epigenetic impact, a finding that requires further studies in the in vivo setting.The BPA-induced changes in hippocampal Fkbp5 confer a link between environmental chemicals and stress-related disorders.The authors have no competing financial interests.
In rats, perinatal BPA exposure modifies the stress response in pubertal offspring via unknown mechanisms. Similar effects were obtained in a male hippocampal cell line exposed to BPA during differentiation. The estrogen receptor (ER) antagonist ICI 182,780 or ERβ knock-down affected Fkbp5 expression and methylation similarly to BPA. Further, BPA's effect on Fkbp5 was abolished upon knock-down of ERβ, suggesting a role for this receptor in mediating BPA's effects on Fkbp5. These data demonstrate that developmental BPA exposure modifies Fkbp5 methylation and expression in male rats, which may be related to its impact on stress responsiveness.
Pulmonary fibrosis is a condition in which the lung is marked by lesions and scars; the scarred tissue cannot work properly, and breathing becomes difficult because the lungs cannot deliver oxygen to the bloodstream. The most common symptoms are shortness of breath and dry cough. These symptoms may be mild or even absent in the early stages and worsen as scarring develops. In pulmonary fibrosis, normal lung tissue architecture is replaced by scar tissue, which is generally characterized by collagen deposition and fibroblast proliferation. Pulmonary fibrosis is a chronic inflammatory lung disease with a potentially lethal prognosis and a negligible response to available medical therapies. Idiopathic pulmonary fibrosis is a devastating disease of unknown cause. Drugs including BLM, methotrexate, amiodarone and nitrofurantoin, airborne heavy-metal dusts, mineral agents such as silica and malachite, and exposure to dust and radiation can induce pulmonary fibrosis. Pulmonary fibrosis may be due to acute and chronic pulmonary disorders. Pulmonary fibrosis arises from excessive deposition and abnormal accumulation of collagen generated by fibroblasts and myofibroblasts. These events damage alveolar cells and reduce their elasticity and flexibility. BLM is an important chemotherapeutic glycopeptide antibiotic, which is used for many malignancies. It is produced by the bacterium Streptomyces verticillus and was discovered by Umezawa and colleagues in 1962. BLM plays an important role in the treatment of various cancers such as lymphoma, carcinoma of the head and neck, germ cell tumors, testicular carcinoma and ovarian cancer. BLM has no serious immuno- and myelosuppressive effects. The most important toxicities of BLM in humans are pulmonary injury and skin complications. Pulmonary fibrosis is the most severe adverse effect of BLM in cancer patients. Consequently, administration of a single intratracheal dose of BLM was introduced as the most common animal model of pulmonary injury and fibrosis in mice, rats and hamsters. Intratracheal administration of BLM causes dose-dependent damage to the lungs. BLM is known to generate reactive oxygen species (ROS) such as superoxide and hydroxyl radicals. Generation of ROS in the lung tissue results in DNA injury, lipid peroxidation, damage to epithelial cells, and an excessive deposition of extracellular matrix and lung collagen synthesis. Administration of BLM leads to inflammatory and fibrotic reactions whereby collagen production is stimulated in fibroblasts. BLM pulmonary toxicity appears as pneumonia, which begins with vascular endothelial damage due to free radicals and cytokines and may eventually progress to fibrosis. In many studies, the pulmonary adverse effects in patients depend on BLM dose, age, the presence of pre-existing lung diseases, smoking and exposure to polluted air in industrial cities. The chemical structure of flavonoids has a general backbone consisting of two phenyl rings and a heterocyclic C ring. Hydroxyl groups on the B ring mediate most of the antioxidant activity of flavonoids. There are plenty of polyphenol compounds in many fruits and vegetables, which show antioxidant properties in addition to their nutritional role. Epicatechin is a polyphenol flavonoid belonging to the flavan-3-ol group. This compound is a polyphenol owing to the numerous phenol groups in its structure and is generally found in green tea and cacao.
In addition, the antioxidant properties of flavonoids are related to their chelating activity with metal ions and their scavenging of free radicals. Previous evidence has shown protective effects of Epi against oxidative stress and fibrosis. Green tea extract, which contains Epi, has shown protective and antifibrotic effects in the paraquat model of pulmonary toxicity and fibrosis by controlling oxidative stress and endothelin-1 expression. Furthermore, it is known that tea catechins including Epi, epicatechin gallate, epigallocatechin and epigallocatechin gallate can prevent the oxidative damage due to tert-butyl hydroperoxide by reducing oxidative stress markers in diabetic rats. In addition, it has been reported that cisplatin nephropathy can be prevented by Epi via a decrease in mitochondrial oxidative stress and ERK activity. In order to prevent pulmonary injury caused by BLM, this study was designed to evaluate the preventive effects of Epi on oxidative stress, inflammation and pulmonary fibrosis induced by BLM in mice. Bleomycin was obtained from the Chemex Company. (−)-Epicatechin, bovine serum albumin, chloramine T, Ellman's reagent, thiobarbituric acid and Bradford reagent were purchased from Sigma–Aldrich. Ammonium molybdate, butylated hydroxytoluene, trichloroacetic acid, buffered formalin, HCl and perchloric acid were purchased from the Merck Company. Commercial glutathione peroxidase and superoxide dismutase kits were purchased from RANSEL, Randox Com, UK. A transforming growth factor beta commercial enzyme-linked immunosorbent assay kit was provided by Hangzhou Eastbiopharm. Male NMRI mice were obtained from the Ahvaz Jundishapur University of Medical Sciences animal house. Upon arrival, the animals were allowed to acclimatize for 1 week. The mice were kept in cages and given standard mouse chow and drinking water ad libitum. Mice were maintained under controlled temperature conditions with a 12 h light and 12 h dark cycle. This research was performed in accordance with the Animal Ethics Committee Guidelines of AJUMS for the care and use of experimental animals. This study was conducted on 82 mice weighing 20–25 g at two time points to differentiate oxidative stress, inflammation and fibrosis. Mice were randomly divided into six groups of 6 to 8 mice for each time point of 7 and 14 days. The experimental groups were: I, control; II, BLM 4 U/kg/2 ml; III–V, BLM pretreated with Epi 25, 50 and 100 mg/kg/10 ml, respectively, from three days before until 7 or 14 days after BLM; and VI, Epi 100 mg/kg/10 ml alone. Mice were anesthetized with an intraperitoneal injection of 50 mg/kg ketamine and 5 mg/kg xylazine, and then received a single intratracheal dose of either saline or BLM.
Lung fibrosis is a chronic interstitial pulmonary disease, caused by damage to the lung parenchyma due to inflammation and fibrosis. Each group was divided into six subgroups: control, Epi 100 mg/kg, BLM, and BLM pretreated with 25, 50 and 100 mg/kg Epi, respectively, from three days before until 7 or 14 days after BLM. Lung tissue oxidative stress markers, including the activity of superoxide dismutase, glutathione peroxidase and catalase and the levels of malondialdehyde and glutathione, were determined.
GPX activity in a dose-dependent manner. Compared with the BLM group, these effects represented a significant recovery. BLM reduced lung tissue GSH levels in comparison with the control group. Epi at doses of 50 and 100 mg/kg increased GSH levels, whereas treatment with Epi 25 mg/kg had no effect. BLM increased MDA levels, and Epi at doses of 50 and 100 mg/kg decreased lung tissue MDA levels. Hydroxyproline (HP) content is an important index of collagen deposition in the lung tissue. BLM markedly increased HP content compared with the control group. Epi at doses of 50 and 100 mg/kg reduced lung HP, but only the group treated with Epi 100 mg/kg showed significant recovery. Treatment with Epi at a dose of 100 mg/kg decreased TGF-β compared with the BLM group. TGF-β is a pro-fibrotic cytokine that is activated by ROS and triggers a cascade mechanism that causes fibrosis in the lung tissue. The TGF-β level on day 14 was higher than on day 7. Histological examination showed that the BLM-treated group was damaged, with obvious lesions, cell infiltration and disrupted tissue. In MT staining, collagen can be seen in the form of blue strands, providing further confirmation of fibrosis. These pathological manifestations are more obvious on day 14. The group treated with Epi 25 mg/kg at both time points is similar to the BLM group, with no significant changes. At both the early and late phases of BLM injury, the groups treated with Epi 50 and 100 mg/kg showed good recovery. The group pretreated with 100 mg/kg shows manifestations similar to the control group. Grading of alveolitis and inflammation by the Szapiel method on day 7, and scoring of fibrosis by the Ashcroft method on day 14, in the mouse model of pulmonary injury induced by BLM, are shown in Fig. 9. Epi at doses of 50 and 100 mg/kg was able to attenuate inflammatory lesions on day 7 and inhibit the progression of fibrosis on day 14. In this study, we investigated the possible protective effects of different doses of Epi against the harmful effects of BLM. BLM is a chemotherapeutic agent used for many malignancies. One of its most important adverse effects is pulmonary fibrosis; therefore, BLM is used as a model of pulmonary fibrosis. The BLM model of lung injury and fibrosis has some similarities to human lung fibrosis. Pulmonary fibrosis is a lethal lung disease that occurs when the lung tissue becomes severely damaged by thickening of the alveolar cell walls with collagen. Alveolar wall thickening is associated with coughing, shortness of breath and dyspnea. In this model of lung toxicity, the damage can be observed as an early phase with oxidative and inflammatory events, which usually continues until day 7, and a late phase with fibrotic outcomes, which continues until 14 to 21 days after bleomycin. Thus, days 7 and 14 were selected as endpoint days. BLM causes apoptotic changes in the alveolar and bronchiolar epithelial cells. Therapeutic agents can prevent the apoptotic effects induced by BLM. BLM causes toxicity by cleaving DNA in a process dependent on the presence of molecular oxygen and iron ions as cofactors in DNA double-strand breakage. BLM binds DNA, Fe and molecular oxygen, producing a complex that can attack the DNA, after which the ROS mediators cause lipid peroxidation. The role of iron in the oxidative stress and damage caused by bleomycin has been established. Therefore, the chelating ability of flavonoids may be responsible for the attenuation of lung injury due to BLM in mice. The antioxidant activity of Epi is related to free radical scavenging and metal ion chelating ability.
The antioxidant activity of Epi is due to the ortho 3,4'-dihydroxy moiety in the B ring, and its chelating ability is due to the o-phenolic groups in the 3,4'-dihydroxy positions in the B ring. The endogenous antioxidant defense system, including enzymatic and non-enzymatic components, was examined in this study. The oxidative stress that activates inflammatory signaling pathways causes oxidative damage. In the BLM model of lung injury, oxidative damage results from a lack of balance between ROS production and the antioxidant defense system. It has been reported that oxidative stress and inflammation are early events and fibrosis is a late event in the BLM model of lung damage. However, in the present research it seems that oxidative stress and inflammation persist at the time of fibrosis and play a role in the development and maintenance of fibrosis. In addition, the fibrotic lesions are noticeably visible on day 7. In confirmation, the data indicate that the level of TGF-β increases after BLM administration, and its levels are clearly visible on days 7 and 14. TGF-β increases ROS overproduction, and ROS activates TGF-β cytokine production. This may explain why oxidative stress remains high in the late phase of pulmonary injury on day 14. TGF-β is a pro-fibrotic agent that causes fibrosis through the proliferation of fibroblasts and the accumulation of excessive ECM. Furthermore, TGF-β can decrease GSH levels in the lung tissue. BLM caused 25% mortality in the one-week study and 50% in the two-week study. Epi could reverse the mortality of BLM-treated animals. In a similar study, in which pulmonary toxicity was induced by BLM (0.1 U/100 μl/mouse, i.t.), BLM induced 60% mortality after 15 days. The increase in body weight during administration of Epi 100 mg/kg is similar to that of the control group. In BLM-treated mice, body weight decreases, reflecting the condition of the disease. In the present study, body weight returned to almost normal in mice pretreated with Epi 100 mg/kg in the BLM model of lung injury.
Accordingly, animals were randomly assigned into two groups, of 7 and 14 days, to evaluate the role of Epi in the early oxidative and late fibrotic phases of BLM-induced pulmonary injury, respectively. Epi exerted protective effects against BLM-induced pulmonary injury in a dose-dependent manner in both the early and late phases of lung injury. Oxidative stress markers persisted into the late fibrotic phase, just as pro-fibrotic events were already present in the early oxidative phase of BLM-induced injury.
This suggests that Epi leads to recovery and weight gain by ameliorating the damage induced by BLM. A significant increase in the lung index on day 7 may be due to the role of both inflammation and pro-fibrotic lesions. Pretreatment with Epi, especially at a dose of 100 mg/kg in the BLM lung injury model, showed the best reversing effect, increasing the activity of GPX, SOD and CAT and the GSH level. Epi reduced the tissue levels of MDA, HP and TGF-β and the lung index. The improvement of the pathological manifestations in the lung tissue can be attributed to the antioxidant activity of Epi in the lung. Pretreatment with Epi at a dose of 50 mg/kg in the BLM model of lung fibrosis also provides recovery, but damage can still be seen and recovery is not complete. There is no difference between the Epi 25 mg/kg group and the BLM group in BLM lung fibrosis, and Epi at this dose cannot reverse the inflammatory and fibrotic lung damage. As previously reported, Epi at 1 mg/kg by gavage for 14 days increases the expression of the GPX, SOD and CAT enzymes in senile mice. It can be concluded that the enhanced activity of SOD, CAT and GPX in Epi-pretreated mice with lung injury may be due to the induction of the expression of these enzymes by Epi in this model of lung injury. However, the Epi dose in the mentioned study is low in comparison with the Epi doses in our study. It has been reported that pretreatment with Epi 50 mg/kg decreased oxidative damage and showed hepatoprotective effects in a rat model of hepatitis. In another study, in a doxorubicin model of brain toxicity in rats, the Epi dose used was 10 mg/kg per day for 4 weeks. It has been shown that mouse doses should be divided by 12.3 to convert animal doses to human equivalent doses. Accordingly, the effective Epi doses of this study can be translated into human equivalent doses. As a result, daily human Epi doses in the range of 4–8 mg/kg, equivalent to 280–560 mg for a person of 70 kg, can be estimated for clinical application. However, careful pharmacokinetic and pharmacodynamic studies, along with consideration of species differences and human variability, are needed to estimate the right dose in humans. In this regard, a review article on the effects of Epi in the range of 25–447 mg on human cognition reported that there is not enough evidence for an optimal Epi dose for positive cognitive effects. Polyphenol compounds have poor bioavailability and short half-lives; hence, their prophylactic and therapeutic uses are limited. Thus, there are ongoing efforts to enhance their bioavailability, and consequently their biological activity, by using nanotechnology. There is evidence that Epi absorption and excretion are dose-dependent in rats. Thus, the poor bioavailability of Epi can be compensated for with high or repeated doses of Epi. Overall, repeated administration of Epi, nanotechnology techniques, inhaled Epi and intravenous application of Epi are highlighted as ways to overcome the poor bioavailability of Epi. Epi and Epi-containing foods have beneficial effects. However, as a question arising in this study: can the inhibitory effects of Epi on BLM-induced pulmonary damage extend to a reversal of the anticancer activity of BLM? Can Epi be co-administered with BLM in clinical application for cancer patients? It has been reported that green tea polyphenols induce selective toxicity in cancer cells and could be a valuable adjuvant in the treatment of cancer. Furthermore, the beneficial effects of tea polyphenols in combination with anticancer agents have been widely accepted by cancer researchers.
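As a quick check of the mouse-to-human dose conversion cited above (divide the mouse dose in mg/kg by 12.3, then scale by body weight), the numbers reproduce the quoted range. The 70 kg body weight is the example used in the text.

```python
# Mouse-to-human equivalent dose conversion, using the factor 12.3 and the
# effective mouse doses (50 and 100 mg/kg) from this study.
effective_mouse_doses = [50, 100]   # mg/kg, effective doses in this study
conversion_factor = 12.3            # mouse -> human equivalent dose divisor
body_weight = 70                    # kg, example body weight from the text

for dose in effective_mouse_doses:
    hed = dose / conversion_factor  # human equivalent dose, mg/kg
    total = hed * body_weight       # total daily dose, mg
    print(f"{dose} mg/kg (mouse) -> {hed:.1f} mg/kg -> {total:.0f} mg per 70 kg")
# 50 -> 4.1 mg/kg -> 285 mg; 100 -> 8.1 mg/kg -> 569 mg, consistent with the
# approximate 4-8 mg/kg (280-560 mg) range quoted above.
```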
a well-known natural tea polyphenol, can enhance the anti-proliferative effect of BLM on cancer cells without any toxicity to normal cells. Accordingly, Epi is believed to be able to alleviate the negative side effects of BLM while enhancing its anticancer efficacy. Data from the Epi 100 mg/kg group without BLM treatment indicated that Epi has no harmful effect on the lungs and that its effects are similar to those of the control group. Thus, Epi may be considered for inhalation or systemic use before BLM administration or at the time of injury in cancer patients. In conclusion, our project showed that Epi could reverse the toxic effects of BLM through the attenuation of oxidative stress, inflammation and fibrosis in mice. Overall, based on the data of this study, the effects of Epi on BLM-induced pulmonary lesions are shown schematically in Fig. 10. Epi as a restorative agent can improve and control lung damage induced by BLM and may increase the quality of life of cancer patients. However, further studies are needed to confirm the safety of Epi, and co-treatment of systemic or inhaled Epi with BLM requires more safety and efficacy studies before clinical application in cancer patients. The authors did not have any conflict of interest.
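The mouse-to-human dose conversion quoted above is simple to verify. The following minimal Python sketch is our own illustration, not from the source; only the divisor 12.3 and the 70 kg reference weight are taken from the text, and all names are ours:

```python
# Hedged illustration of the mouse-to-human dose conversion discussed above.
# The divisor 12.3 and the 70 kg reference weight come from the text.

KM_DIVISOR = 12.3        # mouse dose (mg/kg) / 12.3 = human equivalent dose
BODY_WEIGHT_KG = 70.0    # reference adult body weight

for mouse_dose_mg_per_kg in (50.0, 100.0):   # effective Epi doses in this study
    hed = mouse_dose_mg_per_kg / KM_DIVISOR  # human equivalent dose (mg/kg)
    daily_mg = hed * BODY_WEIGHT_KG          # total daily dose for a 70 kg adult
    print(f"{mouse_dose_mg_per_kg:5.1f} mg/kg in mice -> "
          f"{hed:.1f} mg/kg in humans (~{daily_mg:.0f} mg/day)")

# Prints ~4.1 mg/kg (~285 mg/day) and ~8.1 mg/kg (~569 mg/day),
# matching the 4-8 mg/kg (280-560 mg) range estimated above.
```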
Epicatechin (Epi), a flavonoid, has antioxidant and anti-inflammatory properties. This study was conducted to evaluate the effect of Epi on oxidative stress, inflammation and pulmonary fibrosis induced by bleomycin (BLM) in mice. Finally, it is concluded that Epi can protect the lung against BLM-induced pulmonary oxidative stress, inflammation and fibrosis.
voice research more generally, by optionally exporting a wealth of intermediate signals and analysis results in convenient data formats. With extra multichannel hardware, FonaDyn supports the acquisition of additional signals in parallel and in synchrony, such as pressure, respiration, larynx height tracking, etc., for subsequent co-analysis with the EGG data, using MATLAB or similar tools. Such hardware, with frequency response down to DC, is available from the music analog synthesizer industry. It is connected via ADAT inputs, which can be found on some high-end audio interfaces. FonaDyn has not yet been deployed to other users, pending the publication of the present article as the primary reference. However, it has been a tool in several student theses and pilot studies, and the principle of EGG waveform clustering has been reported in several journal papers by these authors. At seminars and conference demos, potential users have expressed great interest. FonaDyn 1.5, while perfectly usable, is still a research prototype. With our prior experience of commercializing other software, we realize that much further work is needed to implement its functions in a form that is robust in the clinic or in the voice studio. In placing FonaDyn and its source code in the public domain, we invite those interested to develop such a system, with due acknowledgment of the present work. The authors have no competing interests to declare. Once the cluster configuration has been tailored specifically to the user's research question, the FonaDyn system is able to classify and/or stratify various phonatory regimes of interest, in real time, with visual feedback. Its novel contributions are: phonatory regimes and voice instabilities are mapped over voice sound level and phonation frequency; the statistical clustering obviates the need for pre-defining thresholds or categories; and the sample entropy shows promise as a metric of perceptually relevant voice instabilities. The program can also serve as a workbench for general voice-related data acquisition and analysis. FonaDyn is hereby provided to the voice research community, as freeware under public license.
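For readers unfamiliar with the sample entropy metric mentioned above, the sketch below shows the standard SampEn(m, r) calculation in Python. This is our own minimal illustration, not FonaDyn's implementation, and the default parameter values are common choices rather than FonaDyn's:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal.

    B counts template pairs of length m whose Chebyshev distance is below
    r * std(x); A counts the same for length m + 1. SampEn = -ln(A / B),
    so more self-similar (stabler) signals yield lower values.
    """
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist < tol))  # self-matches excluded by construction
        return count

    a, b = count_matches(m + 1), count_matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(1)
print(sample_entropy(rng.standard_normal(400)))                 # noise: high entropy
print(sample_entropy(np.sin(np.linspace(0, 40 * np.pi, 400))))  # regular: low entropy
```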
From soft to loud and low to high, the mechanisms of human voice have many degrees of freedom, making it difficult to assess phonation from the acoustic signal alone. FonaDyn is a research tool that combines acoustics with electroglottography (EGG). It characterizes and visualizes in real time the dynamics of EGG waveforms, using statistical clustering of the cycle-synchronous EGG Fourier components, and their sample entropy. The prevalence and stability of different EGG waveshapes are mapped as colored regions into a so-called voice range profile, without needing pre-defined thresholds or categories. With appropriately 'trained' clusters, FonaDyn can classify and map voice regimes. This is of potential scientific, clinical and pedagogical interest.
Complex vasculobiliary injuries (CVBI) constitute 12–61% of biliary injuries, most commonly following laparoscopic cholecystectomy. They present a formidable challenge to the multidisciplinary team treating these patients. During management, complete right and left hepatic arterial occlusion due to accidental coil migration during embolization of a cystic artery stump pseudoaneurysm is an extremely rare complication. We present a case depicting our strategy to tackle this obstacle in the management of CVBI. This work has been reported in line with the SCARE criteria. A 35-year-old healthy lady presented to our department on the sixth postoperative day (POD) with an external biliary fistula and intra-abdominal sepsis. She had undergone a laparoscopic cholecystectomy converted to open cholecystectomy for acute calculous cholecystitis. She had a biliary injury that was identified intra-operatively and managed by Roux-en-Y hepaticojejunostomy (RYHJ). The anastomosis leaked. Internal-external percutaneous transhepatic biliary drainage (PTBD) extending across the leak was performed at our hospital on POD 7 for both the right and left hepatic ducts. On POD 9, she had an upper gastrointestinal bleed. Esophagogastroduodenoscopy and contrast-enhanced computed tomography (CECT) of the abdomen did not reveal the source of bleeding. On conventional hepatic arteriography, a leaking cystic artery pseudoaneurysm was identified. During angioembolisation, due to a short cystic artery stump, coils were placed in the right hepatic artery (RHA). However, one of the coils accidentally migrated into the left hepatic artery (LHA) and could not be retrieved. LHA stenting was performed, with good flow of contrast across the stent. However, the LHA developed spasm in its distal part, resulting in complete block of the LHA and RHA. On the first day after coiling, there was significant elevation of liver enzymes with features of ischemic hepatitis. CECT abdomen with arteriography revealed poor enhancement of the hepatic arterial tree in the segmental branches, with partial revascularization from the inferior phrenic and retroperitoneal arteries. The patient's relatives were counselled about the possibility of a need for an emergency liver transplant. The patient improved over the next 48 h, was transferred out of the intensive care unit, and oral feeds were started. The abdominal drain was removed after it stopped draining bile. She was discharged on POD 28 with PTBD catheters in situ. On POD 33, her liver function tests were within normal limits, percutaneous transhepatic cholangiography showed a trickle of contrast across the RYHJ, and the PTBD catheters were clamped. The PTBD catheters were kept in place for longer than 6 months, and periodic interval cholangiograms revealed anastomotic narrowing. Balloon dilatation of the stenosed anastomosis was performed on multiple occasions. Liver function was normal all along. Fourteen months after surgery, cholangiography revealed a worsening of the RYHJ stricture. A CECT of the abdomen and conventional angiography performed at 18 months showed a blocked RHA and LHA. Multiple collaterals were seen arising from the right inferior phrenic artery, the retroperitoneum and along the hepatoduodenal ligament. In view of the collateral formation and a persistent tight stricture with multiple failed dilatations, surgical revision of the anastomosis was planned. Prior to surgery, the right PTBD catheter was maneuvered from the right to the left duct across the hilum, draining externally. During surgery, utmost care was taken not to release any adhesions in the hepatoduodenal ligament and not to mobilize the liver from its diaphragmatic attachments, in order to preserve all
collaterals. Hilar dissection was kept to a minimum, and the PTBD catheters guided the identification of the biliary hilar confluence. A minor segment IV hepatotomy was done to expose the left hepatic duct and the roof of the biliary confluence. The previous anastomosis was resected, and a Roux loop was prepared. A 2 cm single-layer, interrupted 5-0 polydioxanone RYHJ was fashioned to the anterior wall of the biliary confluence with an extension onto the left duct. Her postoperative recovery was uneventful. Cholangiography on POD 10 showed good flow across the anastomosis with no sign of a leak, and she was discharged with a clamped PTBD catheter. The PTBD catheter was removed 6 weeks after the surgery. The patient is doing well at 1 year of follow up. Liver function is normal. She is being followed up to evaluate secondary patency. CVBI, most commonly seen after laparoscopic cholecystectomy, is defined as any biliary injury that involves the confluence or extends beyond it, any biliary injury with major vascular injury, or any biliary injury in association with portal hypertension or secondary biliary cirrhosis. Major vascular injury is injury involving one or more of the aorta, vena cava, iliac vessels, right hepatic artery, cystic artery, or portal vein; it is seen in 0.04–0.18% of operated patients, and more often in patients with biliary injury. Vascular injury is suspected when there is significant intraoperative bleeding during laparoscopic cholecystectomy, when there is a sudden rise in alanine aminotransferase during the early postoperative course, or when there are multiple metallic clips on plain film images of the abdomen. The arterial injury may be the result of the primary bile duct injury or of the attempted biliary repair. Vascular injury can present as a pseudoaneurysm, usually within the first 6 weeks of the postoperative period; as abscesses, over 1–3 weeks; or as ischemic liver atrophy, after many years. Involvement of the proper hepatic artery, or of both the right and left hepatic arteries, in CVBI has been reported only very scarcely in the literature. The reported cases in the English literature and their management plans are presented in Table 1. While those cases deal with intraoperative proper hepatic artery injury/occlusion, our case had a post-RYHJ cystic artery pseudoaneurysm to begin with, and the proper hepatic artery occlusion was an unfortunate event that took place during the embolization. There are occasional case reports of interventional coiling of proper hepatic artery pseudoaneurysms leading to complete proper hepatic arterial occlusion with a good outcome. However, a biliary injury complicated by a cystic artery pseudoaneurysm bleed and then a proper hepatic artery occlusion has not been reported
Introduction: Complete proper hepatic arterial [PHA] occlusion due to accidental coil migration during embolization of a cystic artery stump pseudoaneurysm resulting from a complex vasculobiliary injury [CVBI] post laparoscopic cholecystectomy [LC] is an extremely rare complication, with less than 15 cases reported. We present a case depicting our strategy to tackle this obstacle in the management of CVBI and review the relevant literature. Presentation of case: A 35-year-old lady presented on the sixth postoperative day with an external biliary fistula following Roux-en-Y hepaticojejunostomy [RYHJ] for biliary injury during LC. She developed a leaking cystic artery pseudoaneurysm, during angioembolisation of which one coil accidentally migrated into the left hepatic artery, resulting in complete PHA occlusion. Fourteen months later, cholangiography revealed a worsening RYHJ stricture despite repeated percutaneous balloon dilatations. A revision RYHJ was fashioned to the anterior wall of the biliary confluence with an extension into the left duct. Postoperative recovery was uneventful. The patient is doing well at 1 year of follow up.
so far, to the best of our knowledge. Proper hepatic arterial injury induces biliary ischemia and worsens a biliary injury. The hepatic ischemia is usually transient. The effect is more profound when the biliary confluence is disrupted along with a proper hepatic artery injury, since the hilar marginal collateral is also disrupted in these injuries. The effect on blood supply also affects surgical outcomes. On univariate analysis in a study of factors affecting surgical outcomes in biliary injury cases, VBI and sepsis were identified as factors for treatment failure. Also, 75% of these cases had complications and a poor long-term patency rate. In another published series, 5.6–15% of CVBI cases required liver resection. Known indications for hepatectomy in that series included concomitant vascular injury and high biliary injury, liver atrophy, and intrahepatic bile duct strictures. The clearing function of the liver for translocated intestinal bacteria is impaired after ischemia; hence, it is important to maintain high blood antibiotic levels in these patients to avoid septic complications in the ischemic liver parenchyma. Table 1 shows that these cases have a very high mortality rate and that overall outcomes are poor. The management options include hepatic resection, liver transplantation in cases of fulminant liver failure, and revision RYHJ with or without arterial repair. Most of these cases have been managed by an early attempted RYHJ and arterial repair. However, temporary percutaneous biliary intervention to allow the collaterals to form has not been attempted in these cases. From Michels' study of the collateral circulation of the liver to the present angiographic studies, it is now known that there are many possible collateral channels to the liver and bile ducts, as shown in Fig. 5. Collateral flow can be demonstrated 10 h after the occlusion. With time, the collaterals become sufficient to sustain the biliary system. As seen in Fig.
5, collaterals can develop from the common hepatic artery, the gastroduodenal artery, ligamentous arteries from the falciform, coronary and right triangular ligaments, the pancreatoduodenal arteries, the intercostal arteries and the phrenic arteries. In our case, the predominant collaterals were in the hepatoduodenal ligament and from the superior and inferior phrenic arteries. Whilst waiting for the collateral supply to develop, the biliary anastomotic leak can be managed percutaneously by balloon dilatation and/or stenting, as was done in our case. Surgical planning involves identification of all collateral channels on the arteriogram. During surgery, liver mobilization has to be kept to a minimum to preserve the ligamentous collaterals, and hepatoduodenal ligament dissection also needs to be minimal. The revision anastomosis is performed according to the standard surgical principles of RYHJ. Vascular assessment should be part of the investigation of all complex biliary injury cases. Delayed definitive repair in cases involving PHA occlusion, to allow collateral circulation to be established within the hilar plate, the hepatoduodenal ligament and the perihepatic/peribiliary collaterals, and thereby provide an adequate arterial blood supply to the biliary confluence and the extrahepatic portion of the bile duct, is a feasible management option. Minimal dissection should be done during surgery to preserve the biliary and hepatic neovasculature. The authors declare that they have no conflict of interest. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Institutional ethics committee approval was obtained for the publication. Informed consent was obtained from the patient for this publication. Gunjan Desai: collecting data, analysis of data, preparing the initial draft of the manuscript, critical revision of the manuscript for intellectual content, technical support, material support, study supervision. Prasad Pande: critical revision of the manuscript for intellectual content, technical support, material support, study supervision. Rajvilas Narkhede: critical revision of the manuscript for intellectual content, technical support, material support, study supervision. Dattaprasanna Kulkarni: study concept, critical revision of the manuscript for intellectual content, administrative, technical and material support, study supervision. This is a case report and does not require such registration. The publication was performed after the approval of research protocols by the Ethics Committee of Lilavati Hospital and Research Centre in accordance with international agreements. Gunjan S Desai is the article guarantor. Not commissioned, externally peer-reviewed.
Multiple collaterals had developed. Minimal hilar dissection ensured preservation of the collateral supply to the biliary enteric anastomosis. Discussion: Definitive biliary enteric repair should be delayed till collateral circulation is established within the hilar plate, hepatoduodenal ligament and perihepatic/peribiliary collaterals, to provide an adequate arterial blood supply to the biliary confluence and the extrahepatic portion of the bile duct. Conclusion: Assessment of the hepatic arteries should be part of the investigation of all complex biliary injuries. Delayed definitive biliary enteric repair ensures a well-perfused anastomosis. Minimal hilar dissection is the key to preserving the biliary and hepatic neovasculature.
Advances in nanomaterial synthesis have led to the development of solar cells that can potentially combine high efficiency with lower production costs than conventional cells. One example is the dye sensitised solar cell, first demonstrated in 1991 by O'Regan and Grätzel, which uses nano-structured TiO2 to enhance the absorption by a thin layer of dye and for which efficiencies have reached 12%. Quantum dot sensitised solar cells (QDSSCs) are a variation of this design in which colloidal quantum dots (QDs) replace the organic dye. A number of the properties of QDs make them well-suited to the role of photoabsorber. In particular, they have a band gap that can often be size-tuned to optimise exploitation of the solar spectrum; they are photo-stable and highly absorbing; and they can in some cases exhibit multiple exciton generation. This process has the potential to enable the Shockley–Queisser limit of solar cell efficiency to be exceeded. QDs can also be used to replace the electrolyte as the hole-transporting medium. QDs composed of a number of different materials have been used as photoabsorbers in QDSSCs, including CdS, CdSe, CdTe, PbS, PbSe, InP, GaAs and HgTe. The efficiencies of QD-based cells have shown rapid improvement in recent years, with the greatest efficiency currently reported being 7%. Recently, QDs with a Type-II or quasi-Type-II structure have begun to be investigated as photoabsorbers. Type-II QDs have a core/shell structure in which the electron and hole localise in different regions. In contrast, both charge carriers are contained within the same volume in Type-I QDs; in a quasi-Type-II structure, one carrier is delocalised over the whole QD whilst the other is confined to a particular region. A Type-II or quasi-Type-II structure reduces the overlap between the electron and hole wave-functions, decreasing the rate of direct recombination and hence potentially improving the efficiency of charge extraction from the QD. In these structures the band edge transition is between the valence band of one component and the conduction band of the other, which red-shifts the absorption edge from what can be achieved in QDs composed of either material alone. This can be an advantage in some cases because it allows the band edge to be shifted closer to the optimum value for exploitation of the solar spectrum, ~1.35 eV. QDSSCs sensitised by ZnSe/CdS, CdS/ZnS and ZnTe/ZnSe Type-II QDs have been investigated previously. However, all of these had absorption edges in the visible part of the spectrum at wavelengths less than 600 nm, and so were not well-suited to the efficient use of the solar spectrum. Very recently, a QDSSC incorporating CdTe/CdSe QDs has been reported, which had an absorption edge in the near-infrared. There are a number of outstanding issues that require study before the design of Type-II QDs can be optimised for the sensitization of QDSSCs. In particular, a consequence of using a Type-II structure is that the probability that one or other of the charge carriers is found in the shell region is significantly increased. This will lead to an increased likelihood that this carrier will interact with surface traps. Surface trapping has been shown to result in very rapid recombination in Type-II QDs, which might more than offset the reduction in direct recombination. It is also not clear whether there is an advantage in localising the hole in the core of the QD, away from surface traps, rather than the electron, or vice versa. The region in which each charge is localised will also affect the rate at
which it can be extracted from the QD. In this study, we compare the performance of QDSSCs sensitised by CdTe/CdSe and CdSe/CdTe QDs, which localise the hole in the core and shell regions, respectively. We also investigate QDSSCs incorporating CdTe/CdSe/CdS and CdSe/CdTe/CdS core/shell/shell QDs. The conduction and valence band structure for each of these QDs is shown schematically in Fig. 1. The values shown are band offsets calculated from bulk ionisation potentials and electron affinities. The CdS outer shell acts as a potential barrier to holes, reducing the overlap of their wave-functions with any surface states. CdTe has been shown to be vulnerable to corrosion by the sulphide electrolyte commonly used in QDSSCs, and the CdS layer also provides protection against this. Initially, the synthesis of each of the QDs is detailed. The structural characterisation of the QDs by scanning transmission electron microscopy and powder X-ray diffraction is also reported, as is their optical characterisation by absorption and photoluminescence (PL) spectroscopy. We also describe an assessment of the effect of surface traps on the recombination rate for each QD using transient PL spectroscopy and PL quantum yield measurements. Finally, the photovoltaic performance of QDSSCs based on each of the QDs is reported and discussed. Cadmium oxide, oleic acid, octadecene, selenium, tri-n-octylphosphine, octadecylamine, tetradecylphosphonic acid, tellurium, and sulphur were used as purchased. All reactions were carried out under nitrogen using Schlenk techniques. Anhydrous solvents were used in all procedures. QDs with a spherical shape are believed to penetrate nanoporous TiO2 electrodes more effectively than other shapes. CdSe nanoparticles can be grown either with a wurtzite or a zinc blende crystal structure, with only the latter being isotropic and hence ensuring uniform growth in all directions and a spherical shape. Thus, to obtain spherical growth, a method was chosen to ensure that a zinc blende crystal structure is formed, as opposed to the hexagonal wurtzite structure, which is more stable at higher temperatures. For CdTe shell growth, 0.04 mol dm−3 cadmium oleate and 0.05 mol dm−3 TOP-Te stock solutions were prepared and stored under nitrogen. In a typical reaction, 1 mL of CdSe cores was injected
CdTe/CdSe and CdSe/CdTe core/shell colloidal quantum dots, both with and without a second CdS shell, have been synthesised and characterised by absorption and photoluminescence spectroscopies, scanning transmission electron microscopy and X-ray diffraction. Each type of quantum dot had a zinc blende crystal structure and had an absorption edge in the near-infrared, potentially enabling the more efficient exploitation of the solar spectrum.
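As a rough illustration of why a Type-II alignment red-shifts the band edge, the snippet below estimates the spatially indirect transition energy of a CdTe/CdSe QD from bulk quantities, in the spirit of the band-offset construction described above. The bulk band gap and electron affinities are typical literature values inserted here purely as assumptions, and quantum confinement shifts are neglected:

```python
# Hedged estimate of the Type-II band edge for CdTe(core)/CdSe(shell) QDs.
# Bulk values below are illustrative literature figures (assumptions), and
# confinement corrections are ignored.

EG_CDTE = 1.49    # eV, bulk band gap of CdTe
CHI_CDTE = 4.28   # eV, electron affinity of CdTe (assumed)
CHI_CDSE = 4.58   # eV, electron affinity of CdSe (assumed)

# Hole localises in the CdTe valence band; electron in the CdSe conduction band.
cb_offset = CHI_CDSE - CHI_CDTE          # conduction-band offset (eV)
e_edge = EG_CDTE - cb_offset             # spatially indirect transition (eV)
wavelength_nm = 1239.84 / e_edge         # band-edge photon wavelength

print(f"effective band edge ~ {e_edge:.2f} eV (~{wavelength_nm:.0f} nm)")
# ~1.2 eV (~1040 nm): below either bulk gap and in the near-infrared,
# consistent with the red-shift described above.
```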
it is unlikely that CTAB would have affected this cell fraction. Only minor fractions of CTAB were recovered as FFA and phospholipid, with 25.61 ± 6.05% of the CTAB remaining on the column. These data suggest that the adsorbed CTAB did not significantly affect the composition of the lipid fraction. Therefore, CTAB must have increased the amount of phospholipids available for extraction through solubilisation of the phospholipid cell membrane; this may also have contributed to the increased total recovered lipid when harvesting by foam flotation. The neutral lipids found within microalgae extracts are mainly composed of triacylglycerols (TAGs). Although both polar and neutral lipids can be converted to biodiesel, neutral lipids are the desired fraction as TAGs are easily transesterified to biodiesel. FFA can also be converted after esterification; however, Van Gerpen reported that phosphorus-containing compounds in the crude lipid oil did not convert into the methyl esters, which may cause problems during conversion and combustion processes. However, whilst not necessarily valuable for biodiesel production, there are established and growing markets for certain phospholipids and their by-products, which may form important outputs as part of an algae biorefinery. CTAB is used as a food-grade chemical for the extraction of pigments from red beet; therefore, procedures to ensure that the product is fit for human/animal consumption are established. Phospholipids can also be recycled as sources of nitrogen and phosphorus for microalgae cultivation, which could significantly reduce operational production costs. The effect of foam flotation on the fatty acid profile was investigated. It was important to characterise the fatty acid profile as this can dramatically affect the quality of the biodiesel product and inform the economics of a biorefinery producing lipid-based high-value products. No significant difference was found between the quantities of fatty acid methyl esters (FAMEs) obtained from cells harvested by centrifugation or by foam flotation, confirming that the adsorbed CTAB did not significantly affect the neutral lipid fraction. The total transesterifiable lipid was 6.4 ± 1.3% of dry weight for cells harvested by centrifugation and 5.6 ± 0.3% for cells harvested by foam flotation. However, discernible changes within the FAME profiles were noted. Significantly higher yields of the monounsaturated fatty acid oleic acid were recovered from cells harvested by foam flotation compared to cells harvested by centrifugation. Significantly greater yields of total monounsaturated fatty acids (MUFA) and saturated fatty acids (SFA) were also recovered from foam-flotation-harvested cells. In terms of biodiesel quality, higher proportions of MUFA and SFA are desirable as they increase the fuel's energy yield and cetane number, and also improve its oxidative stability. Surprisingly, cells harvested by centrifugation had higher yields of total polyunsaturated fatty acids, at 32.5 ± 0.48% DW compared to 25.3 ± 0.54% DW for cells harvested by foam flotation, including higher yields of linoleic and linolenic acids. There was no significant difference in the C18 series between the harvest methods. Knothe stated that palmitic, stearic, oleic, and linolenic acids are the most common fatty acids present in biodiesel. These components equate to 24.7 ± 0.46% for centrifugation and 23.3 ± 0.30% for foam flotation; there was no significant difference between the harvesting methods. Lee et al.
tested the effect of three different flocculating methods (pH adjustment, treatment with aluminium sulphate, and treatment with Pestan) on the lipid content of Botryococcus braunii. It was found that the total lipid content was unaffected by the harvest method; however, no investigation into the fatty acid profile was carried out. Borges et al. also found no significant difference in total microalgae lipid content with respect to harvest method when comparing anionic and cationic polyacrylamide flocculants; however, the fatty acid profile differed significantly between the different flocculants. It would therefore appear that the choice of harvest method can greatly affect lipid product quantity and quality. Harvesting of Chlorella sp. by foam flotation is most effective during phases of active culture growth, suggesting that foam flotation may prove particularly advantageous for species that synthesise desirable biochemicals during active growth, but not necessarily as beneficial for species cultured specifically for biodiesel production. A greater quantity of lipid was recovered when biomass was harvested by foam flotation as opposed to centrifugation. This study is the first to investigate the effect of CTAB-aided foam flotation harvesting on lipid content and fatty acid profiles. The improved lipid recovery occurred due to a combination of an increase in the total extractable lipid, caused by the solubilisation of the phospholipid bilayer by the surfactant CTAB, and a proportion of the CTAB dose becoming adsorbed onto the cells and entering the lipid extraction process. Foam flotation resulted in a predominance of saturated and monounsaturated fatty acids within the fatty acid profile, which provides many favourable features for biodiesel production. Foam flotation is an advantageous microalgae harvesting technique, and a full technoeconomic analysis in relation to microalgae biorefining is greatly needed.
Foam flotation is an effective and energy-efficient method of harvesting microalgae. Surprisingly, the quantities of lipid recovered from microalgae harvested by foam flotation using the surfactant cetyl trimethylammonium bromide (CTAB) were significantly higher than from cells harvested by centrifugation. Further, cells harvested by CTAB-aided foam flotation exhibited a lipid profile more suited to biodiesel conversion, containing increased levels of saturated and monounsaturated fatty acids. However, further evidence also suggested that CTAB promoted in situ cell lysis by solubilising the phospholipid bilayer, thus increasing the amount of extractable lipid. This work demonstrates substantial added value of foam flotation as a microalgae harvesting method beyond energy-efficient biomass recovery.
in training schools. iLEAPS also supports the participation of researchers from developing countries in conferences and training schools, to ensure truly global coverage. The iLEAPS newsletter, with a hard-copy circulation of ca. 3000 and available online to the whole community and beyond, highlights important aspects of iLEAPS work for large audiences. With the emerging Future Earth, iLEAPS will initiate and join integrated activities that aim at providing sustainable solutions via co-design with funders, scientists, and private sector stakeholders relevant to the question at hand. The initiatives that iLEAPS is developing in collaboration with its sister projects include the joint iLEAPS-GEWEX initiative Aerosols, Clouds, Precipitation, Climate, which was restructured in early 2013; the joint iLEAPS-GLP-AIMES initiative Interactions among Managed Ecosystems, Climate, and Societies; the joint IGAC-iLEAPS-WMO Interdisciplinary Biomass Burning Initiative; the joint iLEAPS-GEWEX theme on Bridging the Gaps between Hydrometeorological and Biogeochemical Land-Surface Modeling; the joint iLEAPS-ESA initiative Biosphere-Atmosphere-Society Index; the Extreme Events and Environments initiative, which aims to connect the two separate communities working on temporary climate extremes and permanently extreme environments, respectively, to shed light on the adaptive capacities of the Earth's ecosystems and societies; and the ambitious international programme The Pan-Eurasian Experiment, which includes more than 40 institutes in Europe, Russia, and China. In addition to these active initiatives, more work is in preparation: iLEAPS is planning to engage with the adaptation community on hotspot areas, especially in the Arctic and in Africa, with Latin America and East and South Asia in mind as well. The regional offices of iLEAPS are a crucial element of this work. Over the last decade, the importance of land-atmosphere processes and feedbacks in the Earth system has been shown on many levels and with multiple approaches, and a number of publications have shown the crucial role of terrestrial ecosystems as regulators of climate and atmospheric composition. Modelers have clearly shown the adverse effect of neglecting land cover changes and other feedback processes in current Earth system models and have recommended actions to improve them. Unprecedented insights into the long-term net impacts of aerosols on clouds and precipitation have also been provided. Land-cover change has been emphasized through model intercomparison projects, which showed that realistic land-use representation is essential in land surface modeling. Crucially important tools in this research have been the networks of long-term flux stations and large-scale land-atmosphere observation platforms, which are also beginning to combine remote sensing techniques with ground observations. The first decade of iLEAPS work focussed mainly on natural, pristine environments. The result has been a substantial increase in our understanding of the processes controlling land-atmosphere interactions, but the uncertainties related to their role in the Earth system remain large. In addition, human society is increasingly the main influence modifying ecosystems. Humans are now one of the strongest influences on climate and the environment in the history of the Earth, and can no longer be ignored in studies of the Earth system and land-atmosphere interactions. The second phase of iLEAPS will concentrate on interactions between natural and human systems and on feedbacks among climate, atmospheric
chemistry, land use and land cover changes, socioeconomic development, and human decision-making. iLEAPS will contribute to Future Earth's agenda to provide research and knowledge to support the transformation of societies toward global sustainability.
The integrated land ecosystem-atmosphere processes study (iLEAPS) is an international research project focussing on the fundamental processes that link land-atmosphere exchange, climate, the water cycle, and tropospheric chemistry. The project was established in 2004 within the International Geosphere-Biosphere Programme (IGBP). During its first decade, iLEAPS has proven to be a vital project, well equipped to build a community to address the challenges involved in understanding the complex Earth system: multidisciplinary, integrative approaches for both observations and modeling. The iLEAPS community has made major advances in process understanding, land-surface modeling, and observation techniques and networks. The modes of iLEAPS operation include elucidating specific iLEAPS scientific questions through networks of process studies, field campaigns, modeling, long-term integrated field studies, international interdisciplinary mega-campaigns, synthesis studies, and databases, as well as conferences on specific scientific questions and synthesis meetings. Another essential component of iLEAPS is knowledge transfer, and it also encourages community- and policy-related outreach activities associated with the regional integrative projects. As a result of its first decade of work, iLEAPS is now setting the agenda for its next phase (2014-2024) under the new international initiative, Future Earth. Human influence has always been an important part of land-atmosphere science, but in order to respond to the new challenges of global sustainability, closer ties with social science and economics groups will be necessary to produce realistic estimates of land use and anthropogenic emissions by analysing future population increase, migration patterns, food production allocation, land management practices, energy production, industrial development, and urbanization.
one function parameter in the probability distribution function, and the parameter name must be "aa", "bb" or "cc", corresponding to the first, second or third fitting parameter. Note also that a sample file for the AMIDAS code can be downloaded from the AMIDAS website. For Bayesian fitting, users need to set the lower and upper bounds of the scanning range for each fitting parameter. Moreover, once the probability distribution function of one fitting parameter has been chosen as, e.g., Gaussian-distributed, users will automatically also be required to set the expected value and the standard deviation of this parameter. On the other hand, in the case that a user-defined probability distribution function has been used for one fitting parameter, users also have to give the notation and the unit of this parameter for the output files and plots. Finally, as the last piece of information for our Bayesian analysis procedure, users have the opportunity to choose between different scanning methods. So far, in the AMIDAS-II package we have programmed three different scanning methods. For a finer scan, the third option, "scan the whole parameter space roughly and then the neighborhood of the valid points more precisely", lets AMIDAS-II immediately and randomly scan the neighborhood of a point in order to find a better one, once this point is valid or almost valid. Users can set the number of scanning points for each fitting parameter. Recall that the default AMIDAS simulation setup shown in Sections 2.2–2.4, Sections 3.2–3.7 and Sections 5.2–5.4 has generally been used; the total number of WIMP-signal events is now set as 500 on average. Meanwhile, for numerical simulations, two plots for users' comparisons will also be offered. One is the comparison of the reconstructed rough velocity distribution with the input functional form. The distribution of each Bayesian-reconstructed fitting parameter, e.g.
the characteristic Solar and Earth's Galactic velocities, in all simulated experiments or analyzed data sets will also be provided by the AMIDAS-II package. Moreover, for reconstructions with more than one fitting parameter, the projection of the result points onto each 2-D plane of the parameter space will also be given. In this paper, we give a detailed user's guide to the AMIDAS package and website, which has been developed for online simulations and data analyses for direct Dark Matter detection experiments and phenomenology. AMIDAS has the ability to perform full Monte Carlo simulations as well as to analyze real/pseudo data sets either generated by other event-generating programs or recorded in direct DM detection experiments. Recently, the whole AMIDAS package and website system has been upgraded to its second phase, AMIDAS-II, to include the newly developed Bayesian analysis technique. Users can run all functions and adopt the default input setup used in our earlier works for their simulations, as well as analyze their own real/pseudo data sets. The use of the AMIDAS website for users' simulations and data analyses has been explained step by step with plots in this paper. The preparation of function/data files to upload for simulations and data analyses has also been described. Moreover, for more flexible and user-oriented use, users have the option to set their own target nuclei as well as their favorite/needed one-dimensional WIMP velocity distribution function, elastic nuclear form factors for the spin-independent (SI) and spin-dependent (SD) WIMP–nucleus cross sections, and the different probability distribution functions needed in the Bayesian reconstruction procedure. As examples, the AMIDAS-II codes for all user-uploadable functions are given in Sections 3 and 5 as well as Appendices B and C. In summary, up to now all basic functions of the AMIDAS package and website have been well established. Hopefully this new tool can help our theoretical as well as experimental colleagues to understand the properties of halo WIMPs, offer useful information to indirect DM detection as well as collider experiments, and finally discover Galactic DM particles.
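To make the scanning options above concrete, the following Python sketch mimics the third AMIDAS-II option ("scan roughly, then refine the neighborhood of valid points") for a single fitting parameter with a Gaussian probability distribution function. It is our own illustration of the logic, not AMIDAS-II code; the bounds, prior values and the placeholder posterior are invented for the example, and only the parameter name "aa" follows the naming convention quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)

AA_LO, AA_HI = 150.0, 350.0       # user-set lower/upper bounds of the scanning range
AA_MEAN, AA_SIGMA = 230.0, 30.0   # expected value and standard deviation (Gaussian)

def log_posterior(aa):
    # Placeholder: flat likelihood times the Gaussian prior. A real analysis
    # would compare the predicted recoil spectrum with the data set here.
    return -0.5 * ((aa - AA_MEAN) / AA_SIGMA) ** 2

N_ROUGH, N_REFINE = 200, 50

# Step 1: rough random scan of the whole allowed range.
rough = rng.uniform(AA_LO, AA_HI, N_ROUGH)
best = rough[np.argmax([log_posterior(a) for a in rough])]

# Step 2: immediately and randomly scan the neighborhood of the best valid point.
width = 0.1 * (AA_HI - AA_LO)
for _ in range(N_REFINE):
    cand = float(np.clip(best + rng.normal(0.0, width), AA_LO, AA_HI))
    if log_posterior(cand) > log_posterior(best):
        best = cand

print(f"reconstructed aa ~ {best:.1f}")
```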
In this paper, we give a detailed user's guide to the AMIDAS (A Model-Independent Data Analysis System) package and website, which has been developed for online simulations and data analyses for direct Dark Matter detection experiments and phenomenology. Recently, the whole AMIDAS package and website system has been upgraded to its second phase, AMIDAS-II, to include the newly developed Bayesian analysis technique. AMIDAS has the ability to perform full Monte Carlo simulations as well as to analyze real/pseudo data sets either generated by other event-generating programs or recorded in direct DM detection experiments. Moreover, the AMIDAS-II package can include several "user-defined" functions in the main code: the (fitting) one-dimensional WIMP velocity distribution function, the nuclear form factors for spin-independent and spin-dependent cross sections, an artificial/experimental background spectrum for both the simulation and data analysis procedures, as well as the different distribution functions needed in Bayesian analyses.
robots. With the ability to incorporate enhanced hardware and AI/machine learning to carry out many everyday jobs, smart automation now enables the discovery of new molecules and improvements to existing chemical syntheses. In addition, AI/machine learning coupled with 'big data'-generating systems will, and in some cases already can, directly provide outputs for many purposes in the various fields of chemistry. This is because the fundamental nature of AI/machine learning permits the model to be updated and continuously refreshed as new data are produced, leading to more discoveries that cover a larger area of chemical space and eliminating negative confounding factors. In our view, the enthusiasm of the field should now focus on the potential of these developments for chemical discovery, with emphasis on automation coupled with machine learning, harnessing the powerful capabilities of these approaches shown throughout this opinion article. Here, we have shown how automation and machine learning can improve efficiency and accuracy and are therefore a universal combination for synthesis, optimization, and discovery in the chemistry laboratory. Outstanding Questions: How can we enable synthetic chemists to operate chemputers without having to know how to code? How much of current chemistry can be done with the Q1 2019 chemputer? Can we drive adoption of the chemputer via the development of a new way to write synthesis protocols?
There is a growing drive in the chemistry community to exploit rapidly growing robotic technologies along with artificial intelligence-based approaches. Applying this to chemistry requires a holistic approach to chemical synthesis design and execution. Here, we outline a universal approach to this problem beginning with an abstract representation of the practice of chemical synthesis that then informs the programming and automation required for its practical realization. Using this foundation to construct closed-loop robotic chemical search engines, we can generate new discoveries that may be verified, optimized, and repeated entirely automatically. These robots can perform chemical reactions and analyses much faster than can be done manually. As such, this leads to a road map whereby molecules can be discovered, optimized, and made on demand from a digital code.
stress. However, it is important to note that an oxide would generate a compressive lateral stress, which must be balanced by a tensile stress present deeper in the matrix. Therefore, if the oxide is intergranular, a lateral compressive stress is generated which is balanced by a tensile stress ahead of the oxide along the grain boundary. This tensile stress, which cannot be measured with the FIB micro-hole drilling technique, can enhance the oxidation kinetics. Once a considerable amount of intergranular oxide is formed, it would help to reduce the Ni transport to the surface along the grain boundary, leading to a higher compressive stress near the GB and a resultant higher biaxial tensile stress ahead of the oxidation front. This high compressive stress would then be partially relieved by pipe diffusion of Ni atoms to the surface, with Ni nodule formation also in the regions adjacent to the GB itself. The extent of the intergranular oxide penetration measured via FIB cross-sectioning analysis was found to vary according to the sign and magnitude of the residual stress. The intergranular oxide penetration depth profile reported in Fig. 8 clearly highlighted this behaviour. In particular, regions of the specimen subjected to a compressive residual stress always showed intergranular oxide penetrations of less than 280 nm, whereas regions characterized by tensile residual stress exhibited GB oxide penetration greater than 400 nm. The increased depth of preferential intergranular oxide penetration in regions with tensile residual stress could be associated with the stress-assisted diffusion of Cr to the GB and the vacancy migration away from the GB. In fact, Cr is expected to diffuse faster to GBs under a tensile stress, thus increasing the intergranular oxidation rate. Moreover, in the presence of a tensile stress the O solubility in the material is markedly increased, and its diffusivity is affected by the hydrostatic stress gradient present in the host metal in a similar manner to other interstitial elements such as hydrogen. Oxygen locally diffuses from compressive or low-tension zones towards those in high tension. In the current case, when a residual tensile stress is present, a stress gradient is generated between the surface and the bulk, and a hydrostatic stress can be present only in the bulk material. This stress gradient is ultimately believed to be responsible for the enhanced oxygen diffusion along the grain boundary, and for the phenomenon of stress-assisted preferential intergranular oxidation. The beneficial effect of compressive stresses on the intergranular oxidation susceptibility is markedly visible in region "A" of Fig. 1. At the extrados, where the residual compressive stress was high, the intergranular oxide penetration was less than 200 nm, but as the residual compressive stress decreased, the intergranular oxide penetration markedly increased, reaching a depth of 300 nm in the vicinity of the region "A" to "B" transition, where the residual compressive stress was much lower. It is also important to note that the specimen was plastically bent, and therefore a considerable amount of plastic strain was present in the sample, especially in regions "A" and "E" of Fig.
1. Thus, the effect of plastic strain on the intergranular oxidation susceptibility must also be considered and evaluated. In fact, it is well known that the presence of plastic strain promotes strain incompatibilities and strain gradients across the grain boundary, which can affect cracking initiation and propagation as well as increase the intergranular oxidation susceptibility and cracking of Alloy 600 and austenitic alloys. Therefore, it might have an additive effect to that of stress, promoting intergranular oxidation even in regions where a compressive residual stress would hinder it. The results of this study have shown that the combination of advanced micromechanical and analytical techniques is providing new insight into processes occurring in the surface/near-surface regions of Alloy 600 during exposure in H2-steam at 480 °C, and is thus relevant to the early stages of SCC initiation phenomena. Advanced AEM characterisation of FIB-produced cross-section specimens demonstrated that Alloy 600 underwent both internal and preferential intergranular oxidation in H2-steam at 480 °C. The XRD stress profiles revealed a marked variation in the stresses at the GB, suggesting a correlation between the oxide surface morphology, the intergranular oxidation susceptibility and the residual stress. The FIB-based micro-hole drilling technique was successfully employed to obtain residual stress profiles across grain boundaries for the first time on a polycrystalline material. These residual stress results correlate with the susceptibility of the material to preferential intergranular oxidation. The presence of residual tensile stress enhances the intergranular oxidation susceptibility of Alloy 600, whereas plastic strain has a secondary influence. Local and random GB oxide morphology variations were often seen at the intersection of twin and high-angle grain boundaries, suggesting that the GB character can play an important role in the oxidation susceptibility. The presence of Al and Ti oxides at HAGBs might play a fundamental role in the first stage of Alloy 600 preferential intergranular oxidation, acting as a precursor event for the subsequent Cr-rich oxide formation. The novel FIB micro-hole drilling technique could be employed to measure stresses in a variety of polycrystalline materials in order to understand the effect of stress localization on environmentally assisted cracking susceptibility.
Analytical electron microscopy was employed to characterize the early stages of oxidation to aid in developing an understanding of the stress corrosion cracking behaviour of this alloy. The measurements of residual stresses at the microscopic level using a recently-developed FIB micro-hole drilling technique revealed a correlation between local stress variations at the grain boundaries and the oxide morphology.
According to the ambitious targets for climate change mitigation made in the Paris Agreement, there is a need for rapid and effective reductions in global greenhouse gas (GHG) emissions. The Paris Agreement aims at holding the increase in the global average temperature to well below 2 °C, compared to pre-industrial levels, and pursues an even smaller increase of 1.5 °C. Boreal forests and forestry may contribute substantially to the global carbon cycle and the mitigation of climate change. This is because boreal forests sequester large amounts of carbon dioxide from the atmosphere and provide forest biomass for the growing needs of the bioeconomy, which will reduce the use of fossil fuels. One way to reduce the GHG emissions from the production and use of fossil-based products and fuels is to replace them with wood-based products and fuels. Increased use of wood-based products and fuels can limit GHG emissions through the substitution effect and enhance the removal of CO2 from the atmosphere by increasing the carbon stocks in wood-based products. The climate benefits of wood utilization are typically considered self-evident if sustainable forestry holds, i.e. when the harvested forest area remains forest and new trees replace the harvested trees in the area. Wood utilization is, however, more complicated from the viewpoint of climate change mitigation if time aspects are taken into account. Firstly, wood harvesting reduces the carbon stocks of forests compared to unharvested forests. Secondly, most of the carbon in new wood-based products and fuels will also be released back to the atmosphere rapidly, especially from biofuels and paper products. This will lead to a situation in which GHG emissions, measured as carbon dioxide equivalents, are increased in the atmosphere over a certain time interval, if harvested wood with its substitution effects cannot compensate for the carbon debt in forests before new forest growth does. From a climate change mitigation point of view, increased biogenic CO2 emissions are analogous to an increase in fossil-based carbon emissions, especially when studied over short time periods. In this sense, to gain climate benefits over time, harvested wood should be used for products and fuels that release less GHG emissions to the atmosphere than the substituted fossil-based products and fuels. Additionally, we should simultaneously increase carbon sequestration in forests. In recent years, the climate impacts of the forest-based bioeconomy have been assessed in many simulation-based studies, considering changes in the carbon stocks of forests and of wood-based products and fuels. In many previous studies, the climate benefit of substituting non-wood products and fuels with wood-based ones has been quantified through a displacement factor (DF), which expresses the amount of reduced GHG emissions per mass unit of wood use when producing a functionally equivalent product or fuel. In its calculation, the GHG emissions of all stages of the life cycles of products and fuels are taken into account, but DFs do not cover the impacts of wood harvesting on the carbon stocks of forests and wood-based products and fuels. When the interpretation of the climate impacts of wood-based products and fuels is based only on the values of DFs, changes in the carbon stocks of forests and wood-based products are not considered. However, they should be considered in the evaluation of the net climate impacts of forest biomass use over time. The results predicted by simulation models for carbon stock development in forests depend especially on the
quality of the input data and on the models' capability to describe the relevant carbon flow processes in forests. For assessing DFs, the life cycle assessments (LCA) of both wood- and non-wood-based products and fuels also include uncertainties, although this methodology has been standardized and guidelines exist for the calculation rules of LCA. For example, the forest industry produces a wide range of wood product types and materials, the DFs of which are difficult to assess on regional and market levels because of data gaps in real substitution situations and the challenges related to GHG assessments in product comparisons. In practice, the assessments employ different choices and assumptions in the methodology and input data, which may be site- and region-specific. The reported DFs have in most cases been positive for wood-based products. This means that they cause lower GHG emissions compared to the fossil-based alternatives. In general, the use of wood-based products and fuels may be assumed to have positive net climate impacts over time if their emission reductions due to DFs are greater than the reduction in the carbon stocks of forests and wood-based products and fuels over a selected time period. In this study, the aim was to develop a methodology to assess a required displacement factor (RDF) for all wood products and bioenergy manufactured and harvested in a certain country, in order to achieve zero CO2 equivalent emissions from increased forest utilization over time in comparison with a selected baseline harvesting scenario. We applied the methodology to the real case of Finland to assess the RDF at the country level. In order to interpret the RDF results, the magnitude of the average DF for all domestic wood-based products and fuels produced in the Finnish forest industry was assessed. In this study, WU in Eq. includes all wood material that is harvested from forest sites. Furthermore, the calculation of GHG emissions is based on the use of GWP (global warming potential) factors for the different GHG emissions, in order to express the results as CO2 equivalents. GHG emissions represent fossil-based emissions along the life cycles of products and fuels in the technosphere. If ΔCF is positive in Eq., forests act as carbon sinks. If Net C is negative, forest utilization causes more CO2 equivalent emissions than it reduces. If the result of Eq. is negative,
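The equations referenced above ("Eq.") did not survive extraction. Under our reading of the surrounding text, the zero-net-emission condition that defines the RDF can plausibly be reconstructed as follows; the symbols mirror the quantities named above, but the exact form in the source may differ:

$$\mathrm{Net\,C} = \Delta C_F + \Delta C_P + \mathrm{DF}\cdot WU_{\mathrm{add}}, \qquad \mathrm{RDF} = -\,\frac{\Delta C_F + \Delta C_P}{WU_{\mathrm{add}}},$$

where $\Delta C_F$ and $\Delta C_P$ are the scenario-minus-baseline changes in the carbon stocks of forests and of wood-based products and fuels over the chosen period (negative when harvesting is increased), $WU_{\mathrm{add}}$ is the additional wood use, all expressed in tonnes of carbon or CO2 equivalents, and the RDF is the DF value at which Net C is exactly zero.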
However, the DFs of individual products and their production volumes cannot be used alone to evaluate the climate impacts of forest utilization. For this reason, in this study we have developed a methodology to assess a required displacement factor (RDF) for all wood products and bioenergy manufactured and harvested in a certain country, in order to achieve zero CO2 equivalent emissions from increased forest utilization over time in comparison with a selected baseline harvesting scenario.
of INT1, the corresponding emissions will be clearly smaller, i.e. 367 Mt CO2 equivalents for the first 50 years and 696 Mt CO2 equivalents for the whole period. Finnish GHG emissions in 2015 were 55.6 Mt CO2 equivalents. Assumptions and uncertainties in the models and their input data contribute to the results of the RDF. In addition, the uncertainty aspects related to the estimation of an average DF for all domestic wood-based products and fuels produced in a country play an important role in the interpretation of the RDF results. In our study, the average RDFs over the 100-year period, obtained from the difference between the basic and INT scenarios in 2017–2116, are clearly greater than our average displacement factor for domestic wood-based products and fuels produced from Finnish forests (DFF). Our estimation is quite similar to the average DF of 1.2 tC tC−1 obtained in the meta-analysis by Leskinen et al., in which DFs were derived from 51 case studies on products mostly covering wood used in construction materials. However, it is important to note that the substitution impacts of forest utilization at country levels have been estimated to be lower than estimations calculated for individual products. At the country level, two recent studies report average DFs of 0.5 tC tC−1 in Switzerland and Canada. The results indicate that our rough estimation of DFF may be an overestimate, and it can be considered "a maximum value". For this reason, our estimates of DFF will probably lead to overly positive interpretations of the climate impacts of wood utilization. The Monsu model has been developed by utilizing large sets of empirical observations on forest growth and soil respiration, considering also changes in tree growth due to climate change. Sensitivity analyses have been conducted with the Monsu model on changes in the carbon pools of living forest biomass, dead organic matter and wood products, as well as on carbon releases from harvesting with regard to management and wood use intensity. It can be assumed that Monsu describes well the current carbon balance of forest utilization in Finland, but possible changes in environmental circumstances will be challenging for the future predictions of all models. For example, simulation models seldom consider the effects of forest management and harvesting intensity, and of climate change, on different abiotic and biotic disturbances. The aging of forests and the increasing volume of the growing stock, especially in Norway spruce, may increase different kinds of abiotic and biotic damage to forests by windstorms, drought, insects, pathogens, and forest fires. As a result of large-scale disturbances, forest carbon stocks may decrease and large amounts of carbon may be released into the atmosphere. One way to check the reliability of our calculations is to compare them to corresponding results produced by different forest simulation models. As shown in Section 4.1, the MELA model produces quite similar results, but the timeframe of the comparison was only 30 years. However, comprehensive comparisons have not been available. For these reasons, it is important to carry out comparative studies in order to understand the behavior of forest simulation models and their possible limitations, so as to improve conclusions about the reliability of the calculated RDFs. In the method developed in this study, the determination of the required displacement factor for additional domestic wood harvesting was based on the difference in the carbon stocks of forests and wood-based products and fuels between two wood harvesting
scenarios during a certain time period. Here, an RDF expresses the minimum efficiency of using forest biomass required to reduce net GHG emissions. The 100-year simulation of the use of domestic round wood by the Finnish forest industry revealed that permanently increasing wood harvesting by 19 Mm3 yr−1 from the basic level would lead to a required displacement factor of 2.4 tC tC−1 for the wood-based products and fuels obtained from the increased harvest in 2017–2116. This would compensate for the decreased carbon sinks in forests and the changes in the carbon stocks of wood-based products. However, reported displacement factors for wood-based products and fuels and the share of wood-based products and fuels manufactured in Finland indicate that the average displacement factor of wood-based products and fuels produced in the Finnish forest industry is probably under 1.1 tC tC−1. A DFF lower than the assessed RDF implies more net GHG emissions to the atmosphere. An increase of 9.6 Mm3 yr−1 in wood harvesting in Finland will cause only slightly smaller RDFs during the next 100 years compared to the increase of 19 Mm3 yr−1. The results indicate that increasing harvesting intensity from the current situation represents a challenge for the Finnish forest-based bioeconomy from the viewpoint of climate change mitigation. Our method is also applicable in other countries and is straightforward to apply at a country level to calculate the RDFs for additional harvesting and utilization of domestic round wood for different wood-based products and fuels, if forest simulation models and the required input datasets are available. However, to reduce the uncertainty of RDF calculations and to improve the interpretation of results, there is a need to produce corresponding results using other simulation models and under different circumstances as well. Better estimates of the average DF of wood-based products and fuels manufactured from domestic wood, for the current situation and in the future, are also needed.
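The RDF described above can be expressed compactly. As a minimal sketch, assuming the RDF is the scenario difference in carbon stocks (forests plus wood-based products and fuels) divided by the additional harvested carbon over the same period; the function name, variable names and numbers are ours for illustration, not results from the study:

```python
# Minimal sketch of a required displacement factor (RDF) calculation,
# assuming RDF = carbon-stock difference between scenarios / extra harvest.

def required_displacement_factor(c_stock_basic, c_stock_int, c_harvest_extra):
    """All inputs in tC, cumulated over the same evaluation period.

    c_stock_basic   -- carbon stock (forests + products) in the basic scenario
    c_stock_int     -- carbon stock in the intensified-harvesting scenario
    c_harvest_extra -- additional carbon harvested in the INT scenario
    """
    # Stock loss per unit of extra harvested carbon: fossil emissions must be
    # displaced by at least this much for the extra harvest to break even.
    return (c_stock_basic - c_stock_int) / c_harvest_extra

# Illustrative numbers only:
rdf = required_displacement_factor(1500.0, 1380.0, 50.0)
print(f"RDF = {rdf:.1f} tC tC-1")
```

An actual DF below the computed RDF would then indicate net GHG emissions from the additional harvest, which is the comparison made in the text.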
A displacement factor (DF) may be used to describe the efficiency of using wood-based products or fuels instead of fossil-based ones to reduce net greenhouse gas (GHG) emissions. Input data for the calculations were produced with the simulation model Monsu, capable of predicting the carbon stocks of forests and wood-based products. We tested the calculations under Finnish conditions over a 100-year time horizon and estimated the current average DF of manufactured wood-based products and fuels in Finland for the interpretation of the required displacement factor (RDF) results. However, the estimated average DF of wood-based products and fuels currently manufactured in Finland was less than 1.1 tC tC−1. The results strongly indicate that increased harvesting intensity from the current situation would represent a challenge for the Finnish forest-based bioeconomy from the viewpoint of climate change mitigation. For this reason, there is an immediate need to improve the reliability and applicability of the RDF approach by repeating corresponding calculations in different circumstances and by improving estimates of DFs on country levels.
5 V. The measurement was repeated with 1500 ns pulses and compared to the NDMOS with its gate grounded. In this case, the device capability is determined by avalanche breakdown of the vertical NDMOS. The on-time of the clamp was also measured and is approximately 1 μs. Qualification showed that the product passes 3 kV HBM and 750 V CDM. An unexpected latch-up (LU) failure was detected on the 1st revision of the silicon. Measurements were performed on the bench in order to understand the root cause of the failure. All board capacitances on the supply pin were removed in order to analyse the failure without causing destruction. The supply was powered with a 57 V source and a negative current sweep was done on the impacted IO. At −3 mA the supply went abruptly into compliance. When the LU source was removed, the device resumed normal operation. Triggering of the supply clamp was suspected. A closer look at the layout of the failing product revealed the presence of a substrate NPN between "any N-type pocket", the substrate and the IO under test. The suspected victim collector pockets were the cathodes of the zener stack used as the static trigger of the supply clamp. A FIB was performed to disconnect these zeners, which indeed solved the problem. On the Rev2 silicon, the static trigger of the supply clamp was removed and additional NPLUG guard-rings were added. The device now passes 100 mA LU. Up to 1 kV differential and 2 kV common-mode surge robustness is required for indoor PoE applications. The surge generator has a rise time of 8 μs and a decay time of 20 μs when discharged into a short circuit. For data lines, a 40 Ω resistance is placed in series with the 2 Ω surge generator, resulting in a 48 A peak current for a 2 kV surge. For the analysis, a differential surge on an unpowered device has been assumed as the worst case. A powered surge benefits from the CPD capacitance, and a common-mode surge results in a reduced discharge current, depending on the common-mode impedance to earth. For symmetrical communication lines a 5/320 μs common-mode surge is applicable as well. During system-level events the current needs to be handled by the TVS, and the supply clamp of the IC should have a Vt2 above the TVS clamping voltage. A pulsed absmax characterization of the Rev2 silicon and the TVS devices was done from 100 ns to DC. The short pulses were measured with TLP. Intermediate pulses were generated with a custom pulse generator. The results are summarized in Fig.
13. Furthermore, a characterization of the TVS devices was done with 20 μs rectangular pulses as a worst-case approximation of an 8/20 μs surge. We observed that the on-resistance of the TVS devices depends strongly on the pulse length, indicating that this is related to the self-heating of the TVS. When the TVS is used sufficiently below its thermal capability, the voltage drop of the TVS is significantly below the maximum value listed in the datasheet and will therefore adequately protect the controller IC. When larger surge protection levels are required, a secondary protection approach is proposed. This scheme uses 2 external resistors and an optional secondary TVS. The resistances RVPP & RRTN can be inserted without impacting the operation of the controller if their resistance value is low enough. It is recommended to limit RVPP to 10 Ω in order not to disturb the communication between PD and PSE during classification. Adding the RRTN resistance is also possible on the 802.3bt PD controller with integrated pass-switch by using a dedicated sense pin on the controller, separated from the high-current path through the drain of the pass-switch. The primary TVS is either an SMAJ58A or a 1SMB58A. The optional series resistors RVPP and RRTN are 10 Ω, and the optional secondary TVS is an SMF58A. Measurements were done with several configurations of DTVS_1, RVPP = RRTN and DTVS_2. Tables 3 & 4 list the current capability for 20 μs and 1 ms rectangular pulses. Table 3 indicates that by using the SMF as secondary TVS it was not possible to make the application fail within the current capability of the test setup. In Fig. 16, a rectangular pulse of 20 μs was applied. The measured configuration uses DTVS_1 = SMAJ58A, RVPP = RRTN = 10 Ω and DTVS_2 = SMF58A. The voltage waveforms show the voltage across the primary TVS and the voltage across the controller IC. The primary TVS was operated close to its capability and confirms the measurement in Fig. 14, showing VPORTP ~120 V. Table 4 indicates that by using the secondary protection approach, the surge capability is solely determined by the primary TVS. During the design of an ESD supply power clamp it is important to also consider the application-level surge requirements and the characteristics of the external TVS typically used to clamp the surge current. A pulsed measurement method was demonstrated in order to extract the characteristics of the IC and the external TVS, allowing an analytical design of the application diagram for a given protection level. Co-design of the IC and application schematic revealed the possibility to insert a small series resistor on dedicated pins without impacting the operation of the controller. This is optionally complemented with a small secondary TVS, and allows a scalable surge protection level. A robustness against >50 A for 20 μs pulses was demonstrated, equivalent to the 2 kV test level for an 8/20 μs current surge applied line-to-line. This approach also opens up the possibility to further extend the surge immunity to the levels required for extreme outdoor environments.
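The surge arithmetic above is simple enough to check directly. The sketch below reproduces the quoted 48 A peak current and illustrates how a series resistor limits the residual current into a secondary TVS; the purely resistive model is a simplification of ours, the ~120 V primary clamp voltage is taken from the waveform described in the text, and the secondary clamp voltage is purely illustrative:

```python
# Peak current of a 2 kV line-to-line surge into the data lines, per the text.
V_SURGE = 2000.0      # surge test level (V)
R_GEN = 2.0           # surge generator source impedance (ohm)
R_LINE = 40.0         # series resistance placed in the data lines (ohm)

i_peak = V_SURGE / (R_GEN + R_LINE)
print(f"peak surge current ~ {i_peak:.0f} A")        # ~48 A

# Secondary protection: the primary TVS clamps the surge, and RVPP/RRTN
# limit the residual current into the secondary TVS protecting the IC.
V_CLAMP_PRIMARY = 120.0    # primary TVS clamp voltage under high current
V_CLAMP_SECONDARY = 80.0   # illustrative secondary TVS clamp voltage
R_SERIES = 10.0            # RVPP (or RRTN), per the text

i_secondary = (V_CLAMP_PRIMARY - V_CLAMP_SECONDARY) / R_SERIES
print(f"residual current into secondary TVS ~ {i_secondary:.0f} A")
```

Under these assumptions the secondary path carries only a few amperes, which is consistent with Table 4's finding that the surge capability is set by the primary TVS alone.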
In this paper we present the development of an electrostatic discharge (ESD) protection supply clamp for an 802.3bt PD PoE controller. A characterization method extending the System-Efficient ESD Design (SEED) approach to surge pulses is presented and applied to the IC and its external components. Finally, an application diagram for enhanced surge protection is proposed that enables the PD PoE controller to operate in harsh electrical environments.
that is taken up by each slice, which is always displayed in the Investigator View. The final percentages can be easily logged by the investigator, e.g., by taking a screenshot. In the Participant View, the participant uses the knobs to manipulate the sizes of slices on the wheel. The participant is also able to use the reset button to re-initialize the wheel to equally sized slices. The participant can view the slice titles and, if the investigator allows, can also see the percentages of the wheel that are currently occupied by each slice. It is simple to toggle between the Investigator and Participant Views. In a typical usage setting, the investigator will use the Investigator View to set the desired settings for the question to be asked, and the participant will adjust the slices of the wheel with the touchable knobs to give a visual estimate of their answer. The investigator can then switch back to the Investigator View, log the information and proceed to the next question. The probability wheel could be used to help a person estimate the probabilities of possible events occurring. For example, in a consultation about transverse rectus abdominis muscle (TRAM) flap breast reconstruction, the attending surgeon could use the probability wheel to estimate the probabilities that no complications will arise; that the worst complication will be minor enough to only require local wound care; that the worst complication will be serious enough to require an invasive procedure; or that the worst complication will be a life-threatening complication. Using the above case as an example, in the mobile app the attending surgeon can set up four options (no complications, minor complications with local wound care, serious complications with invasive procedures, and life-threatening complications) in the Investigator View, and use the probability wheel in the Participant View to estimate the probabilities of these possible events occurring. The attending surgeon can also use the mobile app's visualization as a graphic aid while explaining these possible outcomes to the patient. The probability wheel could also be used to assist in the utility assessment process. The probability wheel could be used to vary the probabilities of two complementary health states to determine the evaluation of an intermediate health state. For example, in a breast reconstruction consultation, the complementary health states could be a TRAM flap reconstruction surgery with no complications vs. a TRAM flap reconstruction with life-threatening complications. The attending surgeon can use the mobile app to show a visualized utility assessment between these two possible events to the patient. This notion of a standard gamble, where the anchors are the best possible state and the worst possible state, is well-established in the decision science literature. In clinical decision analysis, the standard gamble evaluates an intermediate health state compared to the best possible health state and the worst possible health state. Procedures and tools for computerized utility assessment have been previously developed. Here we presented a mobile app implementation of the probability wheel on both Android and iOS platforms. Compared to the physical prop of a traditional probability wheel, this software application is more portable, available, and versatile, e.g., it is simple to adjust the number of slices. We envision that this tool will improve decision consultations by enabling accurate quantitative estimates of probabilities and utilities.
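The core state of such an app is small: named slices whose shares always sum to 100%. The following is a minimal sketch of that state and of the knob and reset behaviors described above (the class and method names are ours, not the app's actual code):

```python
# Minimal sketch of probability-wheel state: slices that always sum to 100%.

class ProbabilityWheel:
    def __init__(self, titles):
        self.titles = list(titles)
        self.reset()

    def reset(self):
        # Reset-button behaviour: re-initialize to equally sized slices.
        n = len(self.titles)
        self.shares = [100.0 / n] * n

    def set_share(self, index, percent):
        # Knob behaviour: fix one slice, rescale the others proportionally
        # so the wheel still sums to 100%.
        percent = min(max(percent, 0.0), 100.0)
        rest = sum(s for i, s in enumerate(self.shares) if i != index)
        scale = (100.0 - percent) / rest if rest else 0.0
        self.shares = [percent if i == index else s * scale
                       for i, s in enumerate(self.shares)]

wheel = ProbabilityWheel(["no complications", "local wound care",
                          "invasive procedure", "life-threatening"])
wheel.set_share(0, 70.0)   # participant turns the first knob to 70%
print(dict(zip(wheel.titles, (round(s, 1) for s in wheel.shares))))
```

Rescaling the untouched slices proportionally is one plausible way to keep the wheel consistent after every knob adjustment.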
A probability wheel app is intended to facilitate communication between two people, an "investigator" and a "participant", about uncertainties inherent in decision-making. A user adjusts the sizes of the slices to indicate the relative value of the probabilities assigned to them. A probability wheel can improve the adjustment process and attenuate the effect of anchoring bias when it is used to estimate or communicate probabilities of outcomes. The goal of this work was to develop a mobile application of the probability wheel that is portable, easily available, and more versatile.
in persons with Mtb infection or active TB represents a possibility that must be considered when testing therapeutic TB vaccines on persons with active TB disease. The possibility of pulmonary and systemic inflammatory reactions, and the breakdown of granuloma structure potentially resulting in Mtb dissemination, should be considered and monitored. Respiratory function tests may be useful to monitor lung safety and to measure the therapeutic impact on lung morbidity. Mostly minor adverse events have been observed in the therapeutic vaccine studies conducted to date, but enrollment in a phase 2 trial of the M72/AS01E vaccine candidate was interrupted prematurely after the observation, in TB patients, of local reactions larger than expected. A careful safety risk management plan should be instituted to identify and mitigate potential risks. Safety data will need to be interpreted in the context of drug treatment co-administered to sick individuals. The potential severity of TB, compared to healthy people at minimal risk of adverse health outcomes, may justify considering different acceptable safety thresholds for therapeutic vaccines than for prophylactic vaccines. The primary audience for this WHO PPC document includes all entities involved in the development of therapeutic vaccines for the improvement of TB treatment outcomes. This PPC is presented as a complement to the existing WHO PPC on the development of TB vaccines intended to prevent TB disease, providing guidance to scientists, funding agencies, and public and private sector organizations developing therapeutic TB vaccine candidates. It is anticipated that PPCs provide a framework for developers to design their development plans and to define specific target product profiles in more detail. WHO PPCs are developed following a consensus-generating, wide consultation process involving experts and stakeholders in the field. Key policy considerations are highlighted, but the preferred attributes expressed here do not pre-empt future policy decisions. The PPC criteria proposed are aspirational in nature. Some aspects of a potentially effective therapeutic vaccine may diverge from those proposed in this PPC, which would not necessarily preclude successful licensure and policy decisions for clinical application. Achieving proof-of-concept for a therapeutic vaccine, based on the ability of the vaccine to reduce recurrence rates over one year or more following a drug-mediated cure, represents the preferred short-term strategic goal of therapeutic TB vaccine development. Assessing the efficacy of a vaccine delivered at some point during treatment, possibly at the end of the initial intensive treatment phase, against end-of-treatment failure and recurrence may also be considered as a short-term goal. Initial studies should be conducted in the context of standard recommended drug treatment. Mtb samples obtained prior to the initiation of treatment should be available to assess whether a given recurrence is due to reactivation of the same, initially infecting Mtb strain, or whether it results from a de novo Mtb infection that occurred near or after end-of-treatment. While initial proof-of-concept may best be generated in patients with drug-sensitive TB, reducing the emergence of drug-resistant TB represents an important strategic goal. Investigations in special populations should be initiated rapidly after the initial demonstration of efficacy. Shortening and simplifying drug regimens represent essential long-term goals. Proof-of-concept should imperatively trigger vaccine evaluation for other
indications, including prevention of TB in the general population or in recently exposed individuals. Initial demonstration of preventive vaccine efficacy in other target populations should trigger evaluation of the vaccine as a potential therapeutic adjunct, as these various indications are of importance to public health.
Treatment failure and recurrence after end-of-treatment can have devastating consequences, including progressive debilitation, death, and the transmission of Mycobacterium tuberculosis – the infectious agent responsible for causing TB – to others, and may be associated with the development of drug-resistant TB. The burden on health systems is substantial, with severe economic consequences. Vaccines have the potential to serve as immunotherapeutic adjuncts to antibiotic treatment regimens for TB. A therapeutic vaccine for TB patients, administered towards completion of a prescribed course of drug therapy or at certain time(s) during treatment, could improve outcomes through immune-mediated control and even clearance of bacteria, potentially prevent re-infection, and provide an opportunity to shorten and simplify drug treatment regimens. The preferred product characteristics (PPC) for therapeutic TB vaccines described in this document are intended to provide guidance to scientists, funding agencies, and public and private sector organizations developing such vaccine candidates. This document presents potential clinical end-points for evidence generation and discusses key considerations about potential clinical development strategies.
time series of food availability. Trends in these variables may help to explain the increased likelihood of maturation at small lengths, but could not be considered in this study as the relevant data were not available. In light of the apparent correlation between regional differences in the fisheries and the rate of change in Lp50, we suggest further investigation into the role of fishing. Time series of fishing mortality rates could be included in Eq., as well as interactions between fishing and the other environmental variables. If fishing mortality rates and a greater number of environmental or physiological variables were included in Eq., then it may be possible to confirm whether fishing explains some of the trends in Lp50, either directly or through interactions. The potential role of fisheries-induced evolution could also be assessed by calculating time series of selection differentials due to fishing. Estimates of fishing gear selectivity and time series of fishing mortality rates in the Clyde will be needed to investigate how fishing may have been influencing maturation, so conducting stock assessments to derive these estimates will be a necessary first step. Time series of PMRNs describe temporal changes in maturation propensity independently of potential changes in growth, so temporal trends in growth were not considered in this paper. Growth rates can vary in response to the same conditions as maturation schedules – environmental, physiological and selective conditions – and a similar investigation into the growth of fish from the Clyde and the wider west coast of Scotland will complement this study. Haddock, whiting and female cod on the Scottish west coast have been maturing at progressively smaller lengths and younger ages since 1986, and this has occurred most rapidly in the Clyde populations of haddock and whiting. As decreases in lengths at maturation can reduce lengths-at-age and maximum lengths by prematurely slowing growth rates, the steep decline in the abundance of large fish in the Clyde may be partially explained by these trends in maturation. Declines in Clyde landings coincided with decreases in large fish abundance, and typical catches increasingly consisted of small unmarketable individuals. Trawl fishing always truncates length structures, lowering the abundance of large fish, but if it has also been causing increasingly early maturation then the fishing process has induced a response which may have further reduced the probability of individuals growing to a large size. A reversal of these trends in maturation may promote increases in the abundance of large fish, which is needed if the Clyde demersal fishery is to be restored. If fishing has caused the observed declines in Lp50, then the amount of time since Clyde vessels stopped targeting demersal fish – from 2005 – has been insufficient for a recovery. This may be due to Nephrops vessels continuing to catch large quantities of fish. If discarding levels of the Nephrops fleet have not been reduced since the 1980s and 1990s, then current fishing activity may be preventing lengths at maturation from increasing; this may also explain why the community length structure has not shown signs of improvement. Furthermore, if the changes in maturation schedules have been partially caused by evolutionary responses to size-selective fishing, then this process may be ongoing through the Nephrops fishery, and may even have been accelerated by the increased use of nets with smaller mesh sizes. If there is an evolutionary component to the declines in
maturation lengths, then increases in Lp50 will be gradual and likely to require periods of time similar to the initial decreases. Further work is still needed to determine why Clyde demersal fish have shown such rapid declines in length at maturation, and to assess means of reversing these trends.
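Lp50, used throughout this discussion, is the midpoint of the maturation reaction norm: the length at which the modelled probability of maturing reaches 50%. A minimal sketch of how such a midpoint falls out of a logistic maturation model is shown below; the coefficients are illustrative only, and the study's actual model also conditioned on age, temperature and abundance:

```python
# Toy logistic maturation model: logit p(mature) = b0 + b1 * length_cm.
import numpy as np

b0, b1 = -12.0, 0.4    # illustrative coefficients

def p_mature(length_cm):
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * length_cm)))

lp50 = -b0 / b1        # midpoint: the length where b0 + b1 * L = 0
print(f"Lp50 = {lp50:.1f} cm, p(mature) there = {p_mature(lp50):.2f}")
```

A downward trend in Lp50 over cohorts, as reported for the Clyde populations, corresponds to this midpoint shifting to smaller lengths.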
Probabilistic maturation reaction norms (PMRNs) were used to investigate the maturation schedules of cod, haddock and whiting in the Firth of Clyde to determine if typical lengths at maturation have changed significantly since 1986. Some potential sources of growth-independent plasticity were accounted for by including sea-surface temperature and abundance variables in the analysis. The PMRNs of the Clyde populations were compared with those from the wider west coast, in conjunction with regional differences in the fishery, to assess whether fishing may have been driving the observed trends of decreasing lengths at maturation. The lengths at which haddock, whiting and female cod were likely to mature decreased significantly during 1986–2009, with rates of change being particularly rapid in the Clyde. It was not possible to estimate PMRNs for male cod due to limited data. Trends in temperature and abundance were shown to have only marginal effects upon PMRN positions, so temporal trends in maturation schedules appear to have been due to a combination of plastic responses to other environmental variables and/or fishing. Regional differences in fishing intensity and the size-selectivity of the fisheries suggest that the decreases in lengths at maturation have been at least partially due to fishing. The importance and scale of the Clyde Nephrops fishery increased as demersal landings declined, and the majority of demersal fish landings have come from Nephrops bycatch since about 2005, when the demersal fishery ceased. Since it appears as though fishing may have caused increasingly early maturation, and a substantial Nephrops fishery continues to operate in the Clyde, reversal of these changes is likely to take a long time, particularly if there is an evolutionary component to the trends. If size-selective fishing has contributed to the lowered abundance of large fish by encouraging maturation at increasingly small lengths, then large fish may remain uncommon in the Clyde until the observed trends in maturation lengths reverse.
The prefrontal cortex (PFC) is the part of the brain responsible for the behavioral repertoire. Inspired by PFC functionality and connectivity, as well as the human behavior formation process, we propose a novel modular architecture of neural networks with a Behavioral Module (BM) and a corresponding end-to-end training strategy. This approach allows the efficient learning of behavior and preference representations. This property is particularly useful for user modeling and recommendation tasks, as it allows learning personalized representations of different user states. In an experiment with video game playing, the results show that the proposed method allows separation of the main task's objectives and behaviors between different BMs. The experiments also show network extendability through independent learning of new behavior patterns. Moreover, we demonstrate a strategy for the efficient transfer of newly learned BMs to unseen tasks.
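As a rough sketch of the architecture described above (our reconstruction, not the authors' code), a shared trunk can learn the main task while small, swappable Behavioral Modules encode distinct behaviors; new BMs can be added later without retraining the trunk:

```python
# Hedged sketch of a DQN-style network with detachable Behavioral Modules.
import torch
import torch.nn as nn

class ModularDQN(nn.Module):
    def __init__(self, obs_dim, n_actions, bm_dim=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        # One small module per behavior; adding BMs extends the network.
        self.behavior_modules = nn.ModuleDict()
        self.head = nn.Linear(128 + bm_dim, n_actions)
        self.bm_dim = bm_dim

    def add_behavior(self, name):
        self.behavior_modules[name] = nn.Sequential(
            nn.Linear(128, self.bm_dim), nn.ReLU())

    def forward(self, obs, behavior):
        h = self.trunk(obs)
        b = self.behavior_modules[behavior](h)
        return self.head(torch.cat([h, b], dim=-1))  # Q-values per action

net = ModularDQN(obs_dim=64, n_actions=6)
net.add_behavior("aggressive")
net.add_behavior("cautious")     # learned later; the trunk can stay frozen
q = net(torch.randn(1, 64), "cautious")
```

Freezing the trunk while training only a new BM is one plausible reading of the independent learning and transfer of behaviors described in the abstract.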
An extendable modular architecture is proposed for developing a variety of agent behaviors in DQN.
In this work, we propose a self-supervised method to learn sentence representations with an injection of linguistic knowledge. Multiple linguistic frameworks propose diverse sentence structures from which semantic meaning can be expressed through compositional operations on words. We aim to take advantage of this linguistic diversity and learn to represent sentences by contrasting these diverse views. Formally, multiple views of the same sentence are mapped to close representations. On the contrary, views from other sentences are mapped further apart. By contrasting different linguistic views, we aim at building embeddings which better capture semantics and which are less sensitive to the sentence's outward form.
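The contrastive objective sketched below is one standard way to implement "close for views of the same sentence, far for views of other sentences"; an InfoNCE-style loss is assumed here, and the paper's exact formulation may differ:

```python
# Hedged sketch of a contrastive loss over two linguistic views per sentence.
import torch
import torch.nn.functional as F

def contrastive_loss(view_a, view_b, temperature=0.1):
    """view_a, view_b: (batch, dim) embeddings of two views, where row i of
    each tensor encodes the same sentence."""
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature          # pairwise cosine similarities
    targets = torch.arange(a.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```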
We aim to exploit the diversity of linguistic structures to build sentence representations.
of the electric field occurs so fast that there is no excess ion diffusion in the direction of the field. This indicates that electrode polarization and space charge effects are prevalent. The decrease of the dielectric constant with increasing frequency is mainly attributed to the mismatch of the interfacial polarization of the composites to external electric fields at high frequencies. It is also evident that the dielectric constant is a function of dopant concentration, as shown in Fig. 12. The real component ε′, which represents the storage of energy during each cycle of the applied electric field, increases with filler loading and is attributed to the fractional increase in charges due to the addition of VO2+ ions to the pure PVA/MAA:EA polymer blend, whereas the imaginary component ε″, which represents the loss of energy in each cycle of the applied electric field, decreases with an increase in filler loading and is attributed to the reduction of charge transport due to the build-up of space charge near the electrode/electrolyte interface, resulting in high conductivity. The pure and VO2+-doped PVA/MAA:EA polymer blend films have been successfully synthesized using the solution casting method. A structural analysis shows an increase in the amorphicity of the doped polymer blend films. The dTGA study shows an enhancement of the thermal stability of the system with increasing dopant concentration. The optical absorption spectrum exhibits three bands corresponding to the transitions 2B2g→2A1g, 2B2g→2B1g and 2B2g→2Eg, characteristic of VO2+ ions in octahedral symmetry with tetragonal distortion, and reveals that the band gap values shift towards longer wavelengths on VO2+ ion doping, which is due to interband transitions. The EPR results show that g|| < g⊥ < ge and A|| > A⊥ for all VO2+-doped polymer blend films, which confirms that the VO2+ ions exist in the polymer blend in octahedral coordination with tetragonal compression. The conductivity study shows that the addition of VO2+ ions to the polymer blend system enhances the ionic conductivity, which is attributed to the increase in amorphicity.
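The link between the red-shifted absorption edge and a smaller band gap is simple arithmetic: E_g = hc/λ. A minimal sketch follows; the wavelength values are illustrative, not the study's measurements:

```python
# Convert an optical absorption edge to a band-gap energy via E_g = h*c/lambda.
H_C_EV_NM = 1239.84            # h*c expressed in eV*nm

def band_gap_ev(edge_nm):
    return H_C_EV_NM / edge_nm

for edge in (330.0, 350.0):    # edge moving to longer wavelength on doping
    print(f"edge {edge:.0f} nm -> E_g = {band_gap_ev(edge):.2f} eV")
```

A shift of the edge towards longer wavelengths therefore corresponds directly to a narrowing of the optical band gap, as the summary states.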
Pure and VO2+-doped PVA/MAA:EA polymer blend films were prepared by a solution casting method. The XRD pattern reveals an increase in amorphicity with increased doping. The dTGA study shows an enhancement of the thermal stability of the system with increasing dopant concentration. The optical absorption spectrum exhibits three bands corresponding to the transitions 2B2g→2A1g, 2B2g→2B1g and 2B2g→2Eg, characteristic of VO2+ ions in octahedral symmetry with tetragonal distortion, and reflects that the optical band gap decreases with increasing mol% of VO2+. The EPR spectra of all the doped samples show a characteristic eight-line hyperfine structure of VO2+ ions, which arises due to the interaction of the unpaired electron with the 51V nucleus. The spin-Hamiltonian parameters (g and A) evaluated from the EPR spectra confirm that the vanadyl ions exist as VO2+ ions in octahedral coordination with a tetragonal compression and have C4v symmetry. The impedance spectroscopy study shows that the addition of VO2+ ions into the polymer blend system enhances the ionic conductivity, which is explained in terms of an increase in the amorphicity.
The presented data contains the microbial composition of a drinking water supply system for O'Kiep, Namaqualand, South Africa. Table 1 represents the bacterial composition of the source point at the lower Orange River, while Table 2 shows the microbial composition of the treated water, distributed by a state-owned agency responsible for water management activities in the region. Table 3 represents the microbial composition of a local municipal reservoir at O'Kiep storing the treated water from the water agency, which is further distributed to individual households in O'Kiep. Tables 4–10 represent the microbial composition at the point-of-use, i.e. household taps. The DWSS samples were obtained from a 100 km-long pipe system designed to deliver a flow of 18 ML/day. Freshwater is sourced from the lower Orange River by a regional water supply system to the nearby towns, including O'Kiep, which is located in the Northern Cape, Namaqualand region of South Africa. DWSS samples were collected in April 2017 from the source to the point-of-use, i.e. at numerous household taps, in non-transparent 500 mL sterile polyethylene bottles which were immediately placed on ice prior to transportation to the laboratory. A composite sample was initially collected from the lower Orange River. The second sample was composed of the treated water prior to distribution at the local water supply agency reservoir. A similar composite sample was collected from the local municipal reservoir, and samples were randomly collected from household taps. All samples were handled according to the guidelines used for drinking water quality standard quantification. The samples were filtered through a 0.22-μm micropore cellulose membrane, and the membrane was pre-washed with a sterile saline solution, followed by the isolation of the genomic DNA using a PowerWater® DNA isolation kit as per the manufacturer's guidelines. The DNA purity and concentration were quantified using microspectrophotometry, and the DNA concentrations ranged from 10.7 to 17.3 ng/μL. The purified DNA was PCR amplified using the bacterial 16S rRNA forward primer 27F (5′-AGAGTTTGATCMTGGCTCAG-3′) and reverse primer 518R (5′-ATTACCGCGGCTGCTGG-3′), which targeted the V1 and V3 regions of the 16S rRNA. The PCR amplicons were sent for sequencing at Inqaba Biotechnical Industries, a commercial NGS service provider. Briefly, the PCR amplicons were gel purified, end repaired and illumina®-specific adapter sequences were ligated to each amplicon. Following quantification, the samples were individually indexed, followed by a purification step. Amplicons were then sequenced on the illumina® MiSeq-2000, using a MiSeq V3 kit. Generally, 20 Mb of data were produced for each sample. The Basic Local Alignment Search Tool (BLAST)-based data analysis was performed using an in-house data analysis system developed by Inqaba Biotech. Overall, sequences were deposited in two databases, i.e. the National Center for Biotechnology Information (NCBI) and the Sequence Read Archive (SRA) databases, prior to the generation of accession numbers for individual bacterial species.
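The per-table community profiles are, in essence, relative abundances of taxa per sampling point. A minimal sketch of that final reduction step is shown below; the taxon names and counts are invented for illustration, and the actual pipeline was BLAST-based with QIIME processing:

```python
import pandas as pd

# Invented read counts per taxon (rows) and sampling point (columns).
counts = pd.DataFrame(
    {"source_river": [120, 40, 15], "reservoir": [60, 90, 5]},
    index=["Pseudomonas", "Acinetobacter", "Kocuria"])

# Convert raw counts to % composition per sampling point.
relative = counts.div(counts.sum(axis=0), axis=1) * 100
print(relative.round(1))
```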
The metagenomic data presented herein contains the bacterial community profile of a drinking water supply system (DWSS) supplying O'Kiep, Namaqualand, South Africa. Representative samples from the source (Orange River) to the point of use (O'Kiep), through a 150 km DWSS used for drinking water distribution, were analysed for bacterial content. PCR amplification of the 16S rRNA V1–V3 regions was undertaken using oligonucleotide primers 27F and 518R subsequent to DNA extraction. The PCR amplicons were processed using the illumina® reaction kits as per the manufacturer's guidelines and sequenced on the illumina® MiSeq-2000, by means of a MiSeq V3 kit. The data obtained were processed using the QIIME bioinformatics software with a compatible fast nucleic acid (fna) file. The raw sequences were deposited at the National Center for Biotechnology Information (NCBI) and the Sequence Read Archive (SRA) database, obtaining accession numbers for each species identified.
Rheumatoid arthritis (RA) is a progressive systemic inflammatory disease characterized by joint destruction and functional disability. RA occurs globally in about 1.0% of the general population, with a 2–4-times higher prevalence in women than in men. Although the etiology of RA is not entirely clear, some inflammatory cytokines, such as tumor necrosis factor α (TNF-α), have been shown to play a central role in the occurrence and progression of RA. Infliximab, an inhibitor of TNF-α, is one of the most widely used biological disease-modifying antirheumatic drugs (DMARDs); the combined use of infliximab and methotrexate (MTX) shows clinical and radiographic benefits compared with placebo in patients inadequately controlled with therapeutic doses of MTX. Because the therapeutic effects of infliximab have been demonstrated in several clinical studies, the primary goal of RA treatment has shifted from the achievement of clinical remission to sustained remission without biologic DMARDs, particularly in patients with RA in sustained remission. The first study reporting the possibility of biologic-free treatment in patients with RA was the TNF20 study. This trial indicated that early treatment of RA with infliximab induces a permanent response that persists even after discontinuation of the drug. After publication of the TNF20 study, the Behandelstrategieën study evaluated biologic-free treatment in a much larger cohort. Sixty-four percent of patients with early RA were able to discontinue infliximab, and in 56% of patients treated with MTX monotherapy for 2 years, low disease activity was maintained and progression of joint damage was inhibited. In established RA patients exhibiting an inadequate response to MTX, the Remission induction by Remicade in RA patients study also examined the possibility of biologic-free remission or low disease activity. The patients enrolled in the study were those who had reached and maintained a disease activity score 28 (DAS28) of less than 3.2 for more than 24 weeks with infliximab treatment and who then agreed to discontinue the treatment. Among the 102 evaluable patients who completed the study, 56 maintained low disease activity after 1 year and showed no progression in radiological damage and functional disturbance; 44 remained in clinical remission. In this context, a subanalysis of the dose-escalation study of infliximab with MTX showed a significant interaction between baseline TNF-α and the dose of infliximab in the clinical response. Additionally, the clinical response and disease activity were significantly better when the treatment was applied at 10 mg/kg than at 3 and 6 mg/kg in patients with a high baseline TNF-α. To achieve a clinical response and sustained remission, serum TNF-α could be considered a key indicator for optimal dosing of infliximab in RA treatment. The Remission induction by Raising the dose of Remicade in RA (RRRR) study was planned to compare the proportions of clinical remission based on the simplified disease activity index (SDAI) after 1 year of treatment, and the sustained remission rate after another 1 year, between the investigational treatment strategy and the standard strategy of 3 mg/kg per 8 weeks of infliximab administration in infliximab-naïve patients with RA showing an inadequate response to MTX. In this study, we describe the study design and the baseline characteristics of the enrolled patients. Patients with RA were eligible for enrollment if they had active disease despite taking equal to or greater than 6 mg of MTX weekly, were 18 years of age or older at the time of enrollment, and had no prior
infliximab use. Patients were excluded if they were taking corticosteroids at doses higher than 10 mg prednisolone equivalents/day, had an SDAI ≤11.0, had severe infections, had active tuberculosis or evidence of latent tuberculosis, had been given a diagnosis of systemic lupus erythematosus or any other form of concomitant arthritis, had congestive heart failure, or were pregnant or lactating during treatment or within 6 months after treatment. All the patients gave written informed consent in accordance with the Declaration of Helsinki, and the trial was approved by the institutional review board at each participating institution. The trial was registered with the University Hospital Medical Information Network. The RRRR study was conducted as an open-label, parallel-group, multicenter randomized controlled trial. Eligible patients with RA who had active disease despite taking equal to or greater than 6 mg of MTX weekly were able to participate. They were randomly assigned in a 1:1 ratio to receive either the standard treatment or a programmed treatment with the starting dose of infliximab based on three categories of baseline TNF-α, in addition to baseline MTX, after 10 weeks of enrollment. To ensure a balanced group design, the Clinical Research and Medical Innovation Center at Hokkaido University Hospital centrally performed the randomization using a computer-generated random-number-producing algorithm. Patients were randomly assigned in a one-to-one ratio to the standard treatment arm or the programmed treatment arm with the use of permuted blocks within each stratum. The sixteen strata for randomization consisted of disease duration, baseline SDAI, and baseline TNF-α. Treatment allocation was blinded for the reviewer of the patients' disease, but was open to both the patients and the physicians. Clinical response was measured using the SDAI, which is a well-validated composite measure of clinical disease activity. If patients had achieved an SDAI of less than or equal to 3.3 at the end of 54 weeks, they discontinued infliximab. Discontinuation of infliximab was maintained throughout follow-up until 158 weeks after enrollment unless patients showed clinical or radiologic progression. The treatment plans for the standard treatment arm and the programmed treatment arm are described in detail below. After enrollment, patients received 3 mg/kg infliximab at 0, 2, and 6 weeks. The same dose was taken every 8 weeks after 14 weeks. If the patients showed an SDAI of less than or equal
Infliximab, an inhibitor of TNF-α, is one of the most widely used biological disease-modifying antirheumatic drugs. Recent studies indicated that baseline serum TNF-α could be considered a key indicator for optimal dosing of infliximab in RA treatment to achieve a clinical response and sustained remission. The Remission induction by Raising the dose of Remicade in RA (RRRR) study is an open-label, parallel-group, multicenter randomized controlled trial to compare the proportions of clinical remission based on the simplified disease activity index (SDAI) after 1 year of treatment, and the sustained remission rate after another 1 year, between the investigational treatment strategy (for which the dose of infliximab was chosen based on the baseline serum TNF) and the standard strategy of 3 mg/kg per 8 weeks of infliximab administration in infliximab-naïve patients with RA showing an inadequate response to MTX. The target sample size is 400 randomized patients in total.
to 3.3 at 54 weeks, they discontinued infliximab. After enrollment, the patients received 3 mg/kg infliximab at 0, 2, and 6 weeks. The dose of infliximab was then selected based on baseline serum TNF-α. If serum TNF-α was less than 0.55 pg/mL, infliximab was kept at 3 mg/kg every 8 weeks after 14 weeks. If serum TNF-α was greater than 0.55 pg/mL to less than 1.65 pg/mL, infliximab was increased to 6 mg/kg at 14 weeks and maintained at 6 mg/kg every 8 weeks after 22 weeks. If serum TNF-α was 1.65 pg/mL or greater, infliximab was increased to 6 mg/kg at 14 weeks and to 10 mg/kg at 22 weeks; the dose of 10 mg/kg was then administered every 8 weeks after 30 weeks. If the patients showed an SDAI ≤3.3 at 54 weeks, they discontinued infliximab. The allocated dose could not be changed. Patients were dropped from the trial if they used biological DMARDs other than infliximab, increased the dose in the standard treatment arm, did not increase the dose in the programmed treatment arm, could not continue the treatment due to adverse events, had infliximab re-introduced after its discontinuation, or had other reasons. During the infliximab treatment period, the same dose of concomitant treatment as at baseline was accepted, and dose reduction or halting of concomitant treatments was also possible if necessary. The primary endpoint was the proportion of patients who sustained discontinuation of infliximab 1 year after discontinuing infliximab at the time of 54 weeks after the first administration of infliximab. The secondary endpoints were the proportion of clinical remission at the time of 54 weeks after the first administration of infliximab; the proportion of patients who sustained discontinuation of infliximab at 2 years after discontinuation of infliximab; the proportion of clinical remission based on SDAI and changes in SDAI from baseline at each time point; the proportion of clinical remission based on DAS28-ESR, DAS28-CRP, and Boolean-based definitions and the change in each value at each time point; radiographs of the hands, wrists, and feet, which were centrally assessed and assigned a score according to the van der Heijde modification of the total Sharp score; rheumatoid factor and matrix metalloproteinase-3 (MMP-3); the health assessment questionnaire (HAQ) and EQ-5D; the serum infliximab concentration at the time of 54 weeks after the first administration of infliximab; and adverse events. Table 1 shows the details of data collection during the trial. Based on the RISING study, the proportions of clinical remission were assumed to be 21% and 34% for the standard treatment arm and the programmed treatment arm, respectively. If we assumed that, after the discontinuation of infliximab, the proportion of patients who sustained discontinuation was 55% in the standard treatment arm and 65% in the programmed treatment arm, the proportions of patients who sustained discontinuation of infliximab at 1 year after a discontinuation of infliximab at the time of 54 weeks after the first administration of infliximab in the standard treatment arm and the programmed treatment arm were calculated as 11.6% and 22.1%, respectively. Based on 11.6% in the standard treatment arm and 22.1% in the programmed treatment arm, 199 randomized patients were needed in each treatment arm to have 80% power at a two-sided 5% level of significance. Considering a dropout rate of approximately 10% between enrollment and randomization, we sought to enroll at most 450 patients in the trial until the end of September 2013. The primary analysis will be
conducted based on the intention-to-treat population, which includes all the patients enrolled and randomized in the trial. The proportion of sustained discontinuation at 1 year after a discontinuation of infliximab at the time of 54 weeks will be compared using the Cochran-Mantel-Haenszel test, with disease duration and baseline SDAI as stratification factors. The risk difference of the proportion of sustained discontinuation at 1 year after a discontinuation of infliximab and its 95% confidence interval (CI) will be calculated. To confirm the robustness of the primary results, the same analyses will be conducted in the population restricted to the patients who completed the planned infliximab treatment and entered the infliximab-free period. Subgroup analyses based on disease duration, baseline SDAI, and baseline TNF-α concentration are planned. For the secondary endpoints, we will conduct the same analysis for the proportion of patients with clinical remission at the time of 54 weeks and the proportion of patients who sustained discontinuation of infliximab at 2 years after a discontinuation of infliximab. The proportions of clinical remission according to the DAS28-ESR-, DAS28-CRP-, and Boolean-based definitions will be calculated. Changes from baseline in SDAI, DAS28, rheumatoid factor, MMP-3, HAQ, EQ-5D, and the total Sharp score will be analyzed using a mixed model for repeated measures. Means and standard deviations will be calculated for all time points and displayed as a transition diagram. Time to discontinuation of infliximab and time until the loss of efficacy will be plotted using the Kaplan-Meier method with key survival statistics. Treatment arms will be compared using log-rank tests, and hazard ratios and 95% CIs will be estimated using a Cox proportional hazards model. The safety analysis will be conducted based on the safety population, which includes all patients who enrolled in the study and received infliximab at least once. Results will be shown combined across treatment arms before randomization and separately for each treatment arm after randomization. The numbers and proportions of adverse events will be calculated. As an exploratory analysis, logistic regression analyses will be performed in order to identify the predictors of clinical remission at 52 weeks and sustained remission after 1 year. Baseline characteristics
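The programmed-arm dose selection described above is a simple three-branch rule on baseline serum TNF-α. A minimal sketch follows; the function name and return format are ours, not the trial protocol's:

```python
# Programmed-arm infliximab schedule after the 0/2/6-week induction at
# 3 mg/kg, keyed on baseline serum TNF-alpha (pg/mL), per the trial design.
# Handling of exactly 0.55 pg/mL is our assumption; the text leaves it open.

def programmed_infliximab_schedule(baseline_tnf_pg_ml):
    if baseline_tnf_pg_ml < 0.55:
        return [("3 mg/kg", "every 8 weeks from week 14")]
    elif baseline_tnf_pg_ml < 1.65:
        return [("6 mg/kg", "week 14"),
                ("6 mg/kg", "every 8 weeks from week 22")]
    else:
        return [("6 mg/kg", "week 14"), ("10 mg/kg", "week 22"),
                ("10 mg/kg", "every 8 weeks from week 30")]

print(programmed_infliximab_schedule(1.2))   # mid-range TNF -> 6 mg/kg branch
```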
The primary endpoint is the proportion of patients who sustained discontinuation of infliximab 1 year after discontinuing infliximab at the time of 54 weeks after the first administration of infliximab. The secondary endpoints are the proportion of clinical remission based on SDAI and changes in SDAI from baseline at each time point, other clinical parameters, quality-of-life measures, and adverse events.
time on crop monitoring and have a suboptimal timing of on-farm activities. We found that the number of plots had a significant, positive impact on TE. In other words, the potential positive TE impacts of having more plots as a way to adapt to variations in micro-level agro-climatic conditions outweighed the potential negative effects for the farmers in this sample. This finding is consistent with the positive effect of land fragmentation on TE previously observed in the Jiangxi and Gansu Provinces of China. Finally, the significant positive effect of the township dummy indicated that farms in the Luocheng township have a lower TE than those in the Heiquan township, when other factors affecting TE remain constant. Differences in agro-climatic factors and market conditions may explain this finding. Intercropping systems generally have higher land-use efficiencies than monocropping systems. It remains unclear, however, to what extent the higher yields per unit of land are obtained at the expense of the efficiency with which other inputs such as labour, water and nutrients can be used. In this study, we examined the contribution of intercropping to the TE of a smallholder farming system in northwest China. TE measures the output obtained for a certain crop, or combination of crops, as a share of the maximum attainable output from the same set of inputs used to produce the crop. The farm-level TE of a cropping system is a key determinant of its profitability, and thus an important determinant of the livelihood strategies of smallholder farmers. Although our analysis is limited to a relatively small region in northwest China, the insights it provides are likely to be relevant for other regions where intercropping methods are practiced, both in China and the rest of the world. The contribution of intercropping to TE was examined by estimating a translog stochastic production frontier and an efficiency equation using farm input and output data collected from 231 farm households in Gaotai County, in the Heihe River basin in northwest China. Our main finding is that intercropping has a significant positive effect on TE, implying that the potential negative effects of intercropping on the use efficiency of labour and other resources are more than offset by its higher land-use efficiency when compared with monocropping. The estimated elasticity of the proportion of land under intercropping was 0.744, indicating that TE goes up by 0.744% if the proportion of land used for intercropping increases by 1%. The large and significant value of this estimate gives strong support to the view that intercropping is a relatively efficient land-use system in the studied region. Our results imply that there is still considerable scope for increasing TE in Gaotai County without bringing in new technologies. Increasing the proportion of land used for intercropping may play an important role in this respect, given that only 60% of the land in this region was under intercropping in 2013 and that the elasticity of TE in terms of the proportion of land under intercropping is close to 0.8. The expected increase in TE will contribute to increasing farm output and farm profits without affecting the availability of scarce land resources. It should be noted, however, that this conclusion only holds under the assumption of constant output prices. If the production of non-grain crops like cumin and seed watermelon were to increase, this could result in lower prices for these crops and negatively affect the TE, and hence the profitability, of these
intercropping systems. Recent price declines for maize in China, on the other hand, have increased the TE of maize-based intercropping systems when compared with single maize crops. Farm size was found to play a key role among the control variables affecting TE. The non-linear relationship between TE and the area of cultivated land implies that ongoing policies aimed at increasing the scale of agriculture through the promotion of so-called family farms and the renting of land to co-operatives and private companies may make a positive contribution to the overall efficiency of farming in the region we examined. TE was estimated to be highest for farms that are twice as large as the average size observed in our study. The TE analysis employed in this study takes the technology available at the time of the survey as a given; however, productivity gains could also be made through the development and introduction of new technologies, both in intercropping systems and monocropping systems. In the case of intercropping systems, these changes could involve the promotion of new varieties to replace conventional cultivars of component crops, as well as the development of specialised machinery for intercropping systems to reduce their large labour demand.
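As a rough illustration of the frontier idea behind these TE scores, the sketch below uses a corrected-OLS (COLS) shortcut on a simple Cobb-Douglas form rather than the translog stochastic frontier with an inefficiency equation that the study actually estimated; the data are simulated, so the numbers mean nothing beyond the mechanics:

```python
# COLS approximation to a production frontier and per-farm TE scores.
import numpy as np

rng = np.random.default_rng(0)
n = 231                                   # farms, as in the survey
log_land = rng.normal(0, 0.5, n)
log_labour = rng.normal(0, 0.5, n)
ineff = rng.exponential(0.2, n)           # one-sided inefficiency term
log_output = 1.0 + 0.6 * log_land + 0.3 * log_labour - ineff \
             + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), log_land, log_labour])
beta, *_ = np.linalg.lstsq(X, log_output, rcond=None)
resid = log_output - X @ beta
te = np.exp(resid - resid.max())          # TE = 1 for the frontier farm
print(f"mean TE ~ {te.mean():.2f}")
```

Each farm's TE is its output relative to the estimated frontier, which is the quantity the intercropping share was found to raise.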
Intercropping, a traditional farming method, generally results in a highly efficient use of land, but whether it also contributes to a higher technical efficiency remains unclear. Technical efficiency refers to the efficiency with which a given set of natural resources and other inputs can be used to produce crops. In this study, we examined the contribution of maize-based relay-strip intercropping to the technical efficiency of smallholder farming in northwest China. Data on the inputs and crop production of 231 farms were collected for the 2013 agricultural season using a farm survey held in Gaotai County, Gansu Province, China. Controlling for other factors, we found that the technical efficiency scores of these farms were positively affected by the proportion of land assigned to intercropping. This finding indicates that the potential negative effects of intercropping on the use efficiency of labour and other resources are more than offset by its higher land-use efficiency when compared with monocropping.
statistical precision in the measurement data. Comparison of fit results with different codes does reveal biases that are not under statistical control. The preferred way of dealing with uncertainties in radionuclide metrology is to make a detailed uncertainty budget for each peak and perform proper uncertainty propagation towards the final result. Uncertainty components include counting statistics, spectral interferences, impurities and background, residual deviations between fit and measurement, physical effects not included in the model, and the model dependence of the fit result. Normalisation and correlation of uncertainty components are constraints that require specific propagation formulas. Equations and numerical examples can be found in Pommé. Statistical uncertainties are readily introduced into Eq., and the same equation can be used to propagate the interference of an impurity that affects part of the spectrum. Also a mismatch between fit and measured spectrum can be included in the uncertainty budget. Explicit uncertainties can be assigned to fit model dependence, contrary to ignoring this component when relying on the covariance matrix. Eq. is applicable in any situation in which adding an amount Δ to peak k implies the subtraction of the same amount from the rest of the spectrum, so that the total area ΣA remains invariable and the corresponding emission probability changes to Pk = (Ak + Δ)/ΣA. This situation occurs in the fit of an unresolved doublet, the subtraction of tailing from a higher-energy peak, and the correction for coincidence summing-in and summing-out effects. To a lesser extent, positively correlated uncertainties may also appear, for which the propagation factor is smaller. If the relative deviation is the same for all peaks, there is no change in the emission probabilities. Eq. gives an upper limit for the propagated uncertainty. Whereas the convolution of a Gaussian with three left-handed exponentials is very successful in satisfactorily reproducing most high-resolution alpha-particle spectra, more elaborate modelling is needed to fit the most demanding spectra with extremely good counting statistics. A line shape model was proposed that expands the number of left-handed exponentials and also incorporates a number of right-handed exponentials, which allows obtaining a smoother function, better reproducing changes of slope in the tailing, and incorporating spectral broadening at the high-energy side. A line model with up to 10 left-handed and 4 right-handed exponentials was implemented as a function in the spreadsheet application BEST. It uses the functionality of a spreadsheet to perform the search for optimum fit parameters, to select which parameters to keep fixed or to define a relationship between a set of parameters, to store spectral data and all specifics of the fit together in one file, and to plot and export the results. Applications include the free fit of individual peaks to determine alpha emission probabilities, or of complete radionuclide emission spectra to determine activity ratios in a mixed sample. The algorithm outperformed existing software at fitting high-resolution 240Pu and 236U spectra with high count numbers. Its applicability extends to thick alpha sources, of which the spectrum almost resembles a step function. Further extensions are possible in which different functional shapes are combined, e.g. for application with mixed spectra of mono-energetic electrons and x-rays.
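The basic building block of such line-shape models is a Gaussian convolved with a left-handed exponential. The sketch below implements that single term and a weighted sum of terms (a Bortels-type form); the parameter values are illustrative, and the BEST implementation, with up to 10 left- and 4 right-handed terms, is more general:

```python
import numpy as np
from scipy.special import erfc

def exp_gauss_tail(x, mu, sigma, tau):
    """Gaussian (mu, sigma) convolved with one left-handed exponential of
    decay length tau: a unit-area peak tailing toward low energies."""
    arg = (x - mu) / tau + sigma**2 / (2.0 * tau**2)
    return np.exp(arg) / (2.0 * tau) * erfc(
        ((x - mu) / sigma + sigma / tau) / np.sqrt(2.0))

def alpha_peak(x, area, mu, sigma, taus, weights):
    """Weighted sum of tailing terms (weights should sum to 1)."""
    return area * sum(w * exp_gauss_tail(x, mu, sigma, t)
                      for w, t in zip(weights, taus))

x = np.arange(4800.0, 5200.0)   # channel axis, arbitrary calibration
y = alpha_peak(x, area=1e5, mu=5100.0, sigma=6.0,
               taus=(8.0, 40.0), weights=(0.7, 0.3))
```

Adding further exponential terms (including right-handed ones) refines how the tailing's changes of slope and the high-energy broadening are reproduced, which is the extension the text describes.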
Peak overlap is a recurrent issue in alpha-particle spectrometry, not only in routine analyses but also in the high-resolution spectra from which reference values for alpha emission probabilities are derived. In this work, improved peak shape formulae are presented for the deconvolution of alpha-particle spectra. They have been implemented as fit functions in a spreadsheet application, and optimum fit parameters were searched with built-in optimisation routines. Deconvolution results are shown for a few challenging spectra with high statistical precision. The algorithm outperforms the best available routines for high-resolution spectrometry, which may facilitate a more reliable determination of alpha emission probabilities in the future. It is also applicable to alpha spectra with inferior energy resolution.
also noted that these bacteria inhibit the growth of other microflora. By virtue of this inhibitory property, Kocuria has been successfully used to control Aeromonas salmonicida infection in rainbow trout. Therefore, this bacterium is a potential candidate constituent for future probiotic ingredients. Kocuria has already been applied in the control of V. anguillarum infections in eels and V. arthritis in rainbow trout. Morphologically, the bacteria isolated from internal organs of diseased fish were similar to previously described saprophytic forms. Neither the biochemical properties of Kocuria rhizophila nor those of Micrococcus luteus show any significant differences from those of the strains isolated by other authors. The Vitek 2 system and API 20 Staph correctly identified the bacteria, even to the species level, as Micrococcus luteus. Our observations suggest that the isolation of Kocuria rhizophila requires a longer incubation period than other bacteria most frequently isolated from fish, and only an incubation time longer than 48 h is sufficient. The same finding was made previously by Savini et al. Sequencing was performed because of the frequent problem of biochemical misidentification of bacterial isolates collected from fish. Genome sequencing was also carried out to study the evolutionary relationships of the taxa and to find the possible source of the fish infection. Available data in GenBank showed no strains of Kocuria rhizophila or Micrococcus luteus isolated from diseased fish. The Polish isolates of Kocuria rhizophila form a separate cluster, and are very similar to strains isolated from a food processing environment in Denmark. Micrococcus luteus was classified very close to the strains collected from scallops in Canada. Our study fills in the gap in the molecular database on Kocuria rhizophila and Micrococcus luteus strains obtained from moribund fish and supplements the knowledge concerning their pathogenic properties for salmonids. The results of our investigation are indispensable for precise monitoring and control of Kocuria rhizophila and Micrococcus luteus infections in farmed fish in the future. Due to diagnostic difficulties and the lack of knowledge about the influence of these microorganisms on fish health, some outbreaks could be misidentified. In human medicine, outbreaks of disease caused by Kocuria species are still underestimated. The drug resistance of Kocuria rhizophila and Micrococcus luteus and methods of their treatment have been poorly investigated up to now. According to the literature available, there are no specified interpretive criteria for Kocuria rhizophila or Micrococcus luteus, especially for clinical purposes. Szczerba presented some data concerning the antimicrobial resistance of Kocuria, Micrococcus, Nesterenkonia, Kytococcus and Dermacoccus, but without species specifications. The author showed that these bacteria were resistant to erythromycin, which was contrary to our results. However, his results concerning bacterial susceptibility to doxycycline and amoxicillin/clavulanate wholly concur with ours. The roles of Kocuria and Micrococcus species in fish pathology are still uncertain and should be investigated further. Although the presented studies showed pathogenic properties of these microorganisms in trout, further supplementary exploration is necessary to fill in this knowledge gap. This research was supported by the KNOW Scientific Consortium "Healthy Animal — Safe Food", Ministry of Science and Higher Education resolution no. 05-1/KNOW2/2015.
In 2014 and 2016, a few disease outbreaks caused by Kocuria rhizophila and Micrococcus luteus were diagnosed in rainbow trout and brown trout in Poland. In each of these events, abnormal mortality (approximately 50%) was accompanied by pathological changes in external tissues and internal organs. In the majority of cases uniform growth of bacterial colonies was observed, and sometimes these bacteria appeared in predominant numbers. The bacterial identifications were performed using standard kits (API 20 Staph and Vitek 2). Sequencing was carried out so as to improve the biochemical identification of the isolated bacteria and establish the evolutionary relationships of their taxa. It was also conducted in order to find the possible source of the fish infection. Comparison of our strains' molecular structures with the data available in GenBank showed that Kocuria rhizophila and Micrococcus luteus had never been isolated from diseased fish before, and that our isolates were very similar to strains which had been isolated from food processing environments (in the case of Kocuria rhizophila) and from scallops (Micrococcus luteus). The challenge tests performed with our strains of Kocuria and Micrococcus on rainbow trout in laboratory aquaria confirmed the three Koch postulates. Antibacterial disc diffusion studies showed that Kocuria rhizophila and Micrococcus luteus are sensitive to most of the drugs tested. The results of these studies show that control of outbreaks of these diseases in rainbow trout or brown trout seems realistic if they are caused by Kocuria rhizophila or Micrococcus luteus.
Many machine learning image classifiers are vulnerable to adversarial attacks: inputs with perturbations designed to intentionally trigger misclassification. Current adversarial methods directly alter pixel colors and evaluate against pixel norm-balls, i.e., pixel perturbations smaller than a specified magnitude according to a measurement norm. This evaluation, however, has limited practical utility, since perturbations in the pixel space do not correspond to the underlying real-world phenomena of image formation that lead to them, and have no attached security motivation. Pixels in natural images are measurements of light that has interacted with the geometry of a physical scene. As such, we propose a novel evaluation measure, parametric norm-balls, obtained by directly perturbing the physical parameters that underlie image formation. One enabling contribution we present is a physically-based differentiable renderer that allows us to propagate pixel gradients to the parametric space of lighting and geometry. Our approach enables physically-based adversarial attacks, and our differentiable renderer leverages models from the interactive rendering literature to balance the performance and accuracy trade-offs necessary for a memory-efficient and scalable adversarial data augmentation workflow.
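As a rough illustration of the gradient path just described, the toy sketch below implements a differentiable Lambertian shader in PyTorch and backpropagates a stand-in classifier loss from pixels to a lighting parameter, then takes an FGSM-style step inside a parametric norm-ball. The renderer, loss, and all parameters are hypothetical simplifications; the physically-based renderer of the work is far more elaborate.

```python
# A toy illustration, not the paper's renderer: a differentiable
# Lambertian shader in PyTorch. Gradients of a (hypothetical)
# classifier loss flow through the rendered pixels back to the
# lighting direction, which is then perturbed inside a parametric
# norm-ball instead of perturbing pixels directly.
import torch

def render(normals, albedo, light_dir):
    """Lambertian shading: pixel = albedo * max(0, n . l)."""
    l = light_dir / light_dir.norm()
    shading = torch.clamp((normals * l).sum(dim=-1), min=0.0)
    return albedo * shading                      # (H, W) grayscale image

H, W = 64, 64
normals = torch.randn(H, W, 3)
normals = normals / normals.norm(dim=-1, keepdim=True)
albedo = torch.rand(H, W)
light = torch.tensor([0.0, 0.0, 1.0], requires_grad=True)   # parametric space

image = render(normals, albedo, light)
loss = image.mean()            # stand-in for a classifier's loss on the image
loss.backward()                # pixel gradients -> lighting-parameter gradients

eps = 0.05                     # radius of the parametric norm-ball
with torch.no_grad():
    adv_light = light + eps * light.grad / light.grad.norm()  # FGSM-style step
```

The point of the sketch is only the direction of differentiation: the attack budget is spent on a physical parameter (here, lighting), and the pixels change only as a consequence of image formation.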
Enabled by a novel differentiable renderer, we propose a new metric that has real-world implications for evaluating adversarial machine learning algorithms, resolving the lack of realism of the existing metric based on pixel norms.
100 ml of SKY4364 culture was grown to an OD600 of 0.5 in YPD media. Cells were split into two flasks, one untreated and one which was subjected to 0.02% MMS for 3 h. Cells were harvested and HIS6-tagged Ssa1 along with the associated interactome was isolated as follows: protein was extracted via bead beating in 500 µl Binding/Wash Buffer. 200 µg of protein extract was incubated with 50 µl His-Tag Dynabeads at 4 °C for 15 min. Dynabeads were collected by magnet, then washed 5 times with 500 µl Binding/Wash buffer. After the final wash, buffer was aspirated and beads were incubated with 100 µl Elution buffer for 20 min, then beads were collected via magnet. The supernatant containing purified HIS6-Ssa1 was transferred to a fresh tube, 25 µl of 5× SDS-PAGE sample buffer was added, and the sample was denatured by boiling for 5 min at 95 °C. 10 µl of sample was analyzed by SDS-PAGE. To isolate HIS6-tagged Hsp82, SKY4635 cells expressing HIS6-Hsp82 as the sole Hsp90 isoform were grown and processed identically to the SKY4364 cells as above. Gel lanes to be analyzed were excised from 4% to 12% MOPS buffer SDS-PAGE gels by sterile razor blade and divided into 8 sections with the following molecular weight ranges: 300–150 kDa, 150–110 kDa, 110–80 kDa, 80–75 kDa, 75–60 kDa, 60–52 kDa, 52–38 kDa and 38–24 kDa. These were then chopped into ~1 mm3 pieces. Each section was washed in dH2O and destained using 100 mM NH4HCO3 pH 7.5 in 50% acetonitrile. A reduction step was performed by addition of 100 μl 50 mM NH4HCO3 pH 7.5 and 10 μl of 10 mM Trisphosphine–HCl at 37 °C for 30 min. The proteins were alkylated by adding 100 μl of 50 mM iodoacetamide and allowed to react in the dark at 20 °C for 30 min. Gel sections were washed in water, then acetonitrile, and vacuum dried. Trypsin digestion was carried out overnight at 37 °C with a 1:50 enzyme–protein ratio of sequencing grade-modified trypsin in 50 mM NH4HCO3 pH 7.5 and 20 mM CaCl2. Peptides were extracted with 5% formic acid and vacuum dried. Peptide digests were reconstituted with 60 µl of Tris–HCl buffer solution, then split into two vials with 30 µl each and vacuum dried. In a separate vial, 30 µl of Mag-Trypsin beads was washed 5 times with 500 µl of Tris–HCl buffer solution, then vacuum dried. 30 µl of either 16O H2O or 97% 18O H2O was added to the respective 16O or 18O vials and vortexed for 20 min to reconstitute the peptide mixture, which was then added to the prepared Mag-Trypsin bead vial and allowed to exchange overnight at 37 °C. After 18O exchange, the solution was removed and any free trypsin in solution was inactivated with 1 mM PMSF for 30 min at 4 °C. For each sample the +/−MMS digests were combined 1:1 as follows: Forward Sample Set: 16O: 18O and Reversed Sample Set: 16O: 18O, dried and stored at −80 °C until analysis. Three biological replicate experiments were performed per sample. All samples were re-suspended in Burdick & Jackson HPLC-grade water containing 0.2% formic acid, 0.1% TFA, and 0.002% Zwittergent 3–16. The peptide samples were loaded to a 0.25 μl C8 OptiPak trapping cartridge custom-packed with Michrom Magic C8, washed, then switched in-line with a 20 cm by 75 μm C18 packed spray tip nano column packed with Michrom Magic C18AQ, for a 2-step gradient. Mobile phase A was water/acetonitrile/formic acid and mobile phase B was acetonitrile/isopropanol/water/formic acid. Using a flow rate of 350 nl/min, a 90 min, 2-step LC gradient was run from 5% B to 50% B in 60 min, followed by 50–95% B over the next 10 min, a 10 min hold at 95% B, then back to starting conditions
and re-equilibrated. The samples were analyzed via electrospray tandem mass spectrometry on a Thermo LTQ Orbitrap XL, using a 60,000 RP survey scan, m/z 375–1950, with lockmasses, followed by 10 LTQ CAD scans on doubly and triply charged-only precursors between 375 Da and 1500 Da. Ions selected for MS/MS were placed on an exclusion list for 60 s. Data were analyzed and filtered in MaxQuant version 1.2.2 with an FDR setting of 1% against the SPROT yeast database and a cutoff of at least 2 peptides seen to assign a quantitation ratio. The exact MaxQuant settings used can be found in the attached document. Each experiment was normalized to the ratio of the bait protein, i.e. SSA1 files using the SSA1 ratio and HSP82 files using the HSP82 ratio. This produced a list of interactors and their respective quantitated changes upon DNA damage. Proteins were removed from the file if they were labeled as “Contaminants”, “Reverse” or “Only identified by site”. Three biological replicates were performed, with each biological replicate split into technical replicates (16O forward labeling and 18O reverse labeling). A protein was considered identified if detected in at least three of the six replicates. Statistical analysis was performed using the R statistical package. Proteins with three out of six observations within each group were retained. Missing values were imputed using row mean imputation. Z-score normalization was performed on the log of all protein ratios. An ANOVA test was then performed to identify proteins that showed significant variability between biological replicates within each group. These were removed from consideration. The full data obtained were uploaded to the PRIDE repository and can be found under reference number PXD001284.
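A minimal sketch of the post-MaxQuant filtering and statistics described above is given below, in Python rather than the R package used in the study. The column names (contaminant flags, per-replicate ratio columns) vary between MaxQuant versions and are assumptions here, as is the 0.05 ANOVA threshold.

```python
# A sketch of the described filtering pipeline, assuming a
# MaxQuant-style table with one normalized ratio column per run.
# Column names and thresholds are illustrative assumptions.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("proteinGroups.txt", sep="\t")
df = df[(df["Contaminant"] != "+") & (df["Reverse"] != "+")
        & (df["Only identified by site"] != "+")]

ratio_cols = [f"Ratio rep{i}" for i in range(1, 7)]   # 3 bio x 2 tech replicates
keep = df[ratio_cols].notna().sum(axis=1) >= 3        # seen in >=3 of 6 runs
df = df[keep].copy()

log_r = np.log2(df[ratio_cols])
log_r = log_r.apply(lambda row: row.fillna(row.mean()), axis=1)  # row-mean imputation
z = (log_r - log_r.mean()) / log_r.std(ddof=1)        # z-score normalization

# Drop proteins whose ratios vary significantly between the three
# biological replicates (each contributing two technical replicates).
pvals = np.array([
    stats.f_oneway(row[0:2], row[2:4], row[4:6]).pvalue
    for row in z.to_numpy()
])
df = df[pvals > 0.05]   # retain proteins stable across biological replicates
```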
The molecular chaperones Hsp70 and Hsp90 participate in many important cellular processes, including how cells respond to DNA damage. Here we show the results of applying quantitative affinity-purification mass spectrometry (AP-MS) proteomics to understand the protein network through which Hsp70 and Hsp90 exert their effects on the DNA damage response (DDR). We characterized the interactomes of the yeast Hsp70 isoform Ssa1 and Hsp90 isoform Hsp82 before and after exposure to methyl methanesulfonate. We identified 256 chaperone interactors, 146 of which are novel. Although the majority of chaperone interactions remained constant under DNA damage, 5 proteins (Coq5, Ast1, Cys3, Ydr210c and Rnr4) increased in interaction with Ssa1 and/or Hsp82. The data presented here are related to [1] (Truman et al., in press). The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org) via the PRIDE partner repository (Vizcaino et al. (2013) [2]) with the dataset identifier PXD001284.
Hypohidrotic ectodermal dysplasia (HED) is a well-characterized human disease affecting the morphology and number of skin appendages, principally the hair follicles (HFs), teeth, and exocrine glands. HED, the most common of the ectodermal dysplasias, is caused by mutations in the ectodysplasin signaling pathway, which is essential for the in utero development of ectoderm-derived appendages. The main axis of the pathway comprises the ligand ectodysplasin A (EDA), the ectodysplasin A receptor (EDAR), and the adaptor molecule EDAR-associated protein with a death domain. Mutations in any of these pathway components lead to human HED, which is phenocopied in mice. Mutations in the X-linked EDA gene underlie most ectodermal dysplasia cases. EDA, a member of the tumor necrosis factor family of signaling molecules, exists in two highly homologous isoforms, EDA1 and EDA2. EDA1 is specific for the type I transmembrane protein EDAR, whereas EDA2 is specific for the type III, X-linked transmembrane receptor. Mutations to EDA2 do not result in XLHED; however, this ligand is thought to play a role in hair loss during adulthood. To invoke EDAR signaling, EDA ligands are shed from the cell surface before receptor binding. Receptor activation initiates association with the C-terminal death domain of EDAR-associated protein with a death domain, which creates a complex capable of interacting with tumor necrosis factor receptor-associated factors. Activated tumor necrosis factor receptor-associated factor molecules interact with IκB kinase, releasing NF-κB family members from their cytosolic inhibitors to enter the nucleus and initiate transcription of target genes. In line with the phenotype of XLHED patients, EDAR pathway activation has primarily been linked to the window when appendages develop in utero. In mice, Edar mRNA is expressed from E14 in the developing epidermal basal layer, localized to preappendage placodes. The resultant EDAR protein remains localized to the placode into the final postnatal stages of HF development. In contrast, few studies have explored potential roles for EDAR signaling in adult tissue. Kowalczyk-Quintas et al. recently showed that Edar is expressed within the sebaceous glands of adult mice, and Inamatsu et al. reported Edar expression in the epidermal cells surrounding the dermal papilla. Moreover, Fessing et al. described EDAR expression in the secondary hair germ of telogen HFs, proposing that EDAR signaling is important for adult hair cycle regulation, particularly control of catagen onset through the up-regulation of X-linked inhibitor of apoptosis. Hair cycling and wound healing are both examples of major morphogenic changes occurring in adult skin, a tissue that is normally under strict homeostatic control. To achieve this, numerous "developmental" signaling pathways are "reused" in the adult tissue. Recently, we demonstrated a novel link between the hair cycle (HC) and the speed of adult skin healing, with a near doubling of healing efficiency in skin containing anagen HC stage follicles. This led us to hypothesize an as yet unidentified role for the EDAR signaling pathway in adult skin wound healing. This hypothesis is supported by a case study from Barnett et al.
describing poor skin graft healing in an XLHED patient. Here we provide a functional demonstration that EDAR signaling plays an important role in adult skin wound healing. Specifically, mice lacking the ligand EDA displayed reduced healing ability, whereas EDAR signaling augmentation promoted healing, not only in Tabby but also in wild-type (WT) mice. EDAR signaling manipulation altered multiple aspects of healing, including peri-wound proliferation, epidermal migration, and collagen deposition. Finally, we show that EDAR stimulation is able to promote human skin healing and is thus an attractive target for future therapeutic manipulation. Eda null mice exhibit delayed wound healing, which can be restored by acute pathway activation. First, we proposed that a role for Edar signaling during wound healing would likely be reflected in wound edge induction. Thus, we analyzed Edar expression by immunofluorescence in both unwounded and wounded skin. We noted immunoreactivity with an anti-Edar antibody in the epidermis of unwounded skin, which appeared expanded in the peri-wound interfollicular epidermis (IFE) of 24- and 72-hour wounds. To test the hypothesis that EDAR signaling was necessary for timely healing, we first examined the rate of wound repair in Eda null (Tabby) mice. We report significantly delayed excisional wound healing in the absence of EDA. Tabby wounds were larger than those in WT mice both macroscopically and microscopically, quantified by an increased wound width and a delayed rate of re-epithelialization. To confirm that this healing delay was due to EDAR signaling deficiency and not to phenotypic differences in Tabby skin, we also performed in utero correction of the Tabby phenotype using the validated EDAR-activating antibody mAbEDAR1. Healing in adult mAbEDAR1-rescued Tabby mice remained delayed and indistinguishable from that in nonrescued Tabby mice. Thus, developmentally specified structural changes in Tabby skin are unlikely to contribute to the observed adult wound healing phenotype. Finally, we explored the effect of locally activating Edar signaling in adult Tabby mouse wounds. Here, mAbEDAR1 administered directly to the wound site 24 hours before injury entirely rescued the healing delay in Tabby mice. Specifically, the rate of re-epithelialization was increased, and wound width was significantly decreased compared to Tabby, generating a healing phenotype more in line with WT wounds. In line with previously described delayed-healing murine models, including the HF-deficient tail model, we observed extended epidermal activation in Tabby wounds. Local administration of mAbEDAR1 fully rescued this phenotype, restoring normal peri-wound IFE expression of keratin 6. Induction of wound edge epithelial proliferation is a key aspect of HC-modulated healing. Peri-wound epithelial proliferation, measured by BrdU incorporation assay, was significantly decreased in Tabby mice compared to WT in both the IFE and HFs. Activation of EDAR signaling accelerates healing in WT mice. To further explore the therapeutic potential of EDAR signaling activation, we next administered mAbEDAR1 locally to the wound site 24 hours before wounding in WT mice. WT mAbEDAR1-treated wounds displayed
The highly conserved ectodysplasin A (EDA)/EDA receptor signaling pathway is critical during development for the formation of skin appendages. Mutations in genes encoding components of the EDA pathway disrupt normal appendage development, leading to the human disorder hypohidrotic ectodermal dysplasia. Spontaneous mutations in the murine Eda (Tabby) phenocopy human X-linked hypohidrotic ectodermal dysplasia. Finally, we show that the healing-promoting effects of EDA receptor activation are conserved in human skin repair. Thus, targeted manipulation of the EDA/EDA receptor pathway has clear therapeutic potential for the future treatment of human pathological wound healing.
accelerated healing, with a clear induction of re-epithelialization. Peri-wound proliferation levels were assessed by BrdU incorporation assay. In contrast to Tabby mice, mAbEDAR1-treated WT mice displayed increased proliferation only in peri-wound HFs and not the IFE. During development, EDA/EDAR signaling is essential for the epidermal/dermal cross-talk required for effective appendage morphogenesis. Thus, EDAR signaling could have nonepidermal roles in skin homeostasis and repair. Indeed, careful analysis revealed increased wound collagen content in mAbEDAR1- versus placebo-treated WT mice. Specifically, picro Sirius red staining revealed increased collagen deposition in mAbEDAR1-treated wounds, in addition to a defined increase in the proportion of fine fibers. Immunofluorescence for Col3a1 confirmed increased wound bed collagen in mAbEDAR1-treated mice, despite no reports of active EDAR signaling in fibroblasts. We note that in a previous transcriptional study, Cui et al. reported up-regulation of Col1a1 and Col3a1 in adult Tabby skin. However, they also reported increased expression of Col3a1 in EDA-overexpressing transgenic mice, suggesting complex paracrine signaling. EDAR signaling activation restores healing in ovariectomized (OVX) mice. The OVX mouse provides a widely used, physiologically relevant model of delayed healing in which to test the beneficial effects of EDAR signaling activation. Here, OVX mice received mAbEDAR1 or placebo local to the wound site 24 hours before wounding. OVX mice treated with mAbEDAR1 displayed significantly accelerated healing, with increased re-epithelialization and decreased wound area compared to control. Similarly to WT mice, we observed no differences in peri-wound IFE proliferation but a strong trend toward increased peri-wound HF proliferation in mAbEDAR1-treated OVX mice. This intriguing HF-specific peri-wound proliferative response is in line with the findings of Fessing et al., who reported a role for EDAR signaling in adult HF cycling. Given that both epidermal and dermal aspects of healing were affected in WT mice, we investigated the collagen content of OVX wounds by Masson trichrome staining. Here, collagen content in OVX + mAbEDAR1 wounds was greater than in placebo-treated mice. Activation of EDAR signaling promotes human healing. The EDAR signaling pathway is highly conserved between mouse and human, as highlighted by the similarities in the XLHED and Tabby developmental phenotypes. However, no study to date has demonstrated a role for EDAR signaling in adult human tissue. We thus assessed the potential of EDAR activation to promote human skin wound healing. First, mAbEDAR1 treatment of the EDAR-responsive human HaCaT keratinocyte cell line resulted in a statistically significant, dose-dependent increase in scratch wound closure, demonstrating that human keratinocytes are competent to respond to exogenous pathway activation. Increased scratch wound closure may indicate that activation of Edar signaling can increase cell proliferation in vitro. Although we did not find this to be the case, careful analysis of cell migration in vitro indicated that Edar signaling can increase cell motility. We also investigated the ability of primary human keratinocytes to respond to mAbEDAR1 treatment and found that scratch wound closure in this model was also significantly accelerated. To confirm efficacy at the physiological level, we turned to the validated human whole skin ex vivo wound model. In this ex vivo partial-thickness human skin wound model, direct topical administration of
mAbEDAR1 significantly increased both wound re-epithelialization and peri-wound proliferation. Timely wound healing requires the communication of multiple cell types, a phenomenon for which the EDAR pathway is essential during in utero development, as highlighted by the XLHED disease phenotype. This study describes a previously unreported direct link between EDAR signaling and wound healing. We found immunoreactivity with an anti-Edar antibody to be increased in peri-wound IFE compared to unwounded IFE, and, more importantly, in the absence of active EDAR signaling wound healing was significantly delayed. Moreover, local activation of EDAR signaling promoted healing in EDAR-deficient mice, in a non-EDAR-linked mouse model of delayed healing, and, crucially, in ex vivo human wounds. Surprisingly, EDAR deficiency/treatment effects are not exclusively confined to the epidermis but also manifest in the dermis. Reports of EDAR expression in adult murine skin suggested an active role for EDA/EDAR signaling postnatally. Given that the role of EDA/EDAR signaling during development is predominantly in the rearrangement of the epidermis into placodes, we focused on epithelial aspects of the wound phenotype, reporting EDAR-dependent changes in both peri-wound proliferation and re-epithelialization. The re-epithelialization phenotype fits particularly well with the report of Mustonen et al., in which overactivation of EDAR during development resulted in an increase in placode size, that is, an EDAR-dependent increase in cell migration. Our in vitro analysis of cell motility provides further evidence that Edar signaling may influence cell migration during re-epithelialization. Our analysis of proliferation in mAbEDAR1-treated WT mice revealed no difference in wound edge IFE proliferation but a clear EDAR dependency of wound edge HF proliferation. That HF cells would be more capable of responding to EDAR signaling is in line with the findings of Fessing et al., who described a postnatal role for EDAR in the HC. We note with interest that during development, pre-appendage placode proliferation was unaltered in K14-EDAtg mice. Thus, it seems either that the EDAR pathway has greater influence over appendage proliferation in adulthood than in development, or that additional wound-derived signals contribute to the observed effects in adult skin. HF cell involvement in wound healing has been well characterized in several elegant studies, and it is possible that Edar signaling may regulate the involvement of different HF cell populations. This is particularly interesting in the context of the HF stem cell contribution to healing, which seems to be influenced by the stage of repair. Intriguingly, we also observed an increased peri-wound sebaceous gland proliferative response in mAbEDAR1-treated mice. Mechanistically, the proliferation-inducing effects of EDAR are most likely explained by the pathway's convergence on Cyclin D1, which drives cell cycle progression. More interesting still, it remains unclear what
Little is known about the role of EDA signaling in adult skin homeostasis or repair. Because wound healing largely mimics the morphogenic events that occur during development, we propose a role for EDA signaling in adult wound repair. Here we report a pronounced delay in healing in Tabby mice, demonstrating a functional role for EDA signaling in adult skin.
signals are involved in communicating an injury response signal to peri-wound follicles. While several candidates have been suggested, it is tempting to speculate that EDAR may be involved in this initial wound-HF cross-talk. Here, we note with interest that Inamatsu et al. have previously reported Edar-expressing ectopic HFs when fetal dermis is recombined with adult epidermis, further supporting the concept that adult skin can reactivate embryonic processes under specific circumstances. That we observe alterations in dermal aspects of healing when the EDAR pathway is stimulated was unexpected, given that there are no reports of EDAR expression in fibroblasts. Increased wound collagen following mAbEDAR1 treatment could be explained by secondary signaling and epidermal-dermal cross-talk. A simpler explanation would be that it reflects a more advanced stage of healing. However, the observations of Cui et al., who report an increase in collagen mRNA expression in EDA-overexpressing mice, suggest the former explanation. The highly conserved nature of the EDAR pathway led us to test whether the effects of EDAR signaling on wound healing would also be applicable to human skin healing. EDAR activation in the EDAR-expressing HaCaT cell line increased the rate of scratch wound closure. Moreover, in primary human keratinocytes and the whole human skin ex vivo wound model, we also observed a marked improvement in re-epithelialization after EDAR signaling activation. To our knowledge this is a previously unreported demonstration that EDAR signaling can directly influence keratinocyte proliferation and migration within human skin. In summary, our data reveal a previously unidentified role for EDA/EDAR signaling in adult cutaneous wound healing in both mice and humans. Surprisingly, we show the EDA/EDAR pathway to be involved in multiple aspects of healing, despite EDAR expression being epidermally restricted. However, the complex role of this pathway in coordinating HF morphogenesis, to which both epidermal and dermal skin components contribute, fully supports our observations. In a prior publication we demonstrated a strong link between HC stage and wound healing outcome. The concept of manipulating the HC to promote wound healing is experimentally attractive but likely to prove clinically challenging. We suggest that targeting the EDA/EDAR pathway offers a more practical, therapeutically attractive solution. All animal procedures were approved by the UK Home Office after local ethical approval. Seven-week-old male Tabby or WT mice were anaesthetized and wounded following our established protocol. Mice were individually housed postoperatively and left to heal by secondary intention with analgesia. Anti-EDAR1 monoclonal mouse IgG1 (mAbEDAR1) or placebo was administered 24 hours before wounding via subcutaneous injection at the wound site. BrdU was administered intraperitoneally 2 hours before sacrifice. Three days after wounding, samples were excised, bisected, and fixed in 10% buffered formalin or snap frozen. Histological sections were prepared from fixed, paraffin-embedded tissue. Five-micrometer sections were stained with hematoxylin and eosin, Sirius red, or Masson trichrome, or underwent immunohistochemical analysis with the following antibodies: anti-Edar, anti-keratin 6, anti-keratin 14, anti-BrdU, or anti-Col3a1. Primary antibody incubation was followed by the appropriate biotinylated antibody, ABC reagent and NovaRed, mounted in Pertex, or by a streptavidin-Cy3 secondary mounted in Mowiol containing DAPI. Images were
captured as follows: bright field with an Eclipse E600 microscope and Spot camera; picro Sirius red with a plane-polarized light microscope/camera; fluorescence with a Leica MDLB microscope/camera. Image analysis was performed using ImagePro Plus or Metamorph software and Corel Paintshop Pro. Wound measurements were made from hematoxylin and eosin-stained images. Percentage re-epithelialization was quantified as (length of newly formed epithelium across the wound / wound width) × 100. Wound width was quantified as the distance between normal dermal architecture at the edges of each wound, and wound area was quantified as the area of granulation tissue beneath the scab. The HaCaT human keratinocyte cell line was cultured at 37 °C, 5% CO2 in DMEM with 10% fetal bovine serum. Scratch wounds were generated in a confluent cell layer using a 1-ml sterile pipette tip. mAbEDAR1 or phosphate-buffered saline was added, and 24 hours later cells were stained with crystal violet. Images were captured and cell migration quantified. Cell migration was quantified as the percentage increase in closure compared to scratches photographed at 0 hours, with measurements taken at 30 individual points across a single scratch and averaged. Primary human keratinocytes were isolated from adult female abdominal skin, cultured, and scratched as described earlier. Ex vivo methodology was as previously described, using tissue collected after ethical approval was received. Briefly, adult female abdominal skin was washed in sterile phosphate-buffered saline and excess fat removed. Each 8-mm-diameter biopsy-punched construct was partial-thickness wounded with a 3-mm punch. Constructs were then cultured at the air-liquid interface in 1% antibiotic-antimycotic/10% fetal bovine serum-supplemented DMEM. mAbEDAR1 or Aprily1 was applied directly to the central punch wound. Biopsies were maintained at 37 °C, 5% CO2 for 3 days, then formalin fixed before tissue processing and sectioning as described earlier. HaCaT cells at P41 were seeded into wells of a 24-well plate at 1.5 × 10⁴ cells per well. Just before imaging, cells were treated with either mAbEDAR1 or Aprily1 and imaged for 24 hours under a wide-field live imaging microscope. Cell migration was then analyzed as either track displacement or track length using a Wavelet plugin in Imaris. Statistical differences were determined by the Student t test. P < 0.05 was considered significant. All animal work was approved by the UK Home Office following local ethical approval. Human skin donors provided written consent before undergoing surgery, and tissue was handled and stored in accordance with the Human Tissue Act. PS and NK are shareholders of Edimer Pharmaceuticals. KMH is employed by Edimer Pharmaceuticals. NK is a director and employee of Edimer Pharmaceuticals.
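For clarity, the re-epithelialization measure as reconstructed above can be encoded as a small helper; the function and variable names below are ours, and the example values are illustrative only.

```python
# A small helper encoding the re-epithelialization measure as
# reconstructed above; the measurements would come from the
# H&E image analysis described in the text.
def percent_reepithelialization(new_epithelium_length_um: float,
                                wound_width_um: float) -> float:
    """(length of neo-epidermis across the wound / wound width) * 100."""
    return new_epithelium_length_um / wound_width_um * 100.0

print(percent_reepithelialization(1800.0, 2400.0))   # -> 75.0
```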
Moreover, pharmacological activation of the EDA pathway in both Tabby and wild-type mice significantly accelerates healing, influencing multiple processes including re-epithelialization and granulation tissue matrix deposition.
the model is able to provide an adequate description of pre- and post-shock outcomes in real-world scenarios, then the insights drawn from the model can be used to identify pockets of vulnerable populations so that a more timely and effective policy response can be implemented. In particular, since food insecurity can be traced in the model, it can point to the policy measures required to minimize starvation with limited aid resources. Furthermore, timely action can alleviate bottlenecks through targeted policy response and help limit secondary spillover effects, namely mass internal migration and the disruption of functioning markets in other parts of the region, which might hamper regional growth and well-being in the long run. Much remains to be done in helping low-income regions prepare for natural disaster relief and bolster communities' resilience. The framework presented here can be extended in several ways before any real policy implications can be drawn. First, a larger, more detailed geographical component can be added to the model to help more accurately predict population and goods flows. This, for example, can include altitude and slope information, variations in road types, and weather conditions. Second, a more detailed behavioral component can be added in which more complex household decisions are simulated. This, for example, can include households with multiple members, community-based network decisions, heterogeneity in skill endowments, heterogeneity in access to information, and the incorporation of learning behavior in a limited-information environment. In addition, geo-simulations are well suited to incorporating cultural and sociological aspects of decision making as well, for example, different behavioral rules for men, women, and children, the role of asset ownership and property rights in decision-making processes, and community-based versus family-based migration decisions. Third, given modern technologies, real-time data can be integrated within such a model. This can include incorporating satellite data that are currently available at frequent intervals and can quickly give damage estimates, especially of infrastructure losses. Additionally, crowd-sourced information can help recalibrate the model based on real-time information, for example, identifying food shortages, transport bottlenecks, and location preferences for migration as they emerge. In conclusion, a geo-simulation framework can provide a rich tool for estimating a host of policy questions in a lab-like setting, allowing for a more accurate and nuanced policy response that can minimize second-round impacts of natural disasters and help reduce risk in the long run. Such a tool can play an essential role in low-income regions, where knowledge of local markets and community-specific behavioral responses can be simulated to estimate post-shock outcomes for an effective and timely response with limited resource availability.
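To illustrate the micro-meso loop this framework is built around, the deliberately minimal sketch below lets households buy food at their location's market price and migrate to a connected, cheaper location when food insecure, after an earthquake-style income shock. Every rule and parameter here is an illustrative assumption, not the calibrated model described above.

```python
# A deliberately minimal agent-based skeleton of the micro-meso loop:
# households earn, buy food at their location's price, and migrate to
# a connected location when food insecure. All rules and parameters
# are illustrative assumptions, not the paper's calibrated model.
import random

class Location:
    def __init__(self, name, food_price, neighbors):
        self.name, self.food_price, self.neighbors = name, food_price, neighbors

class Household:
    def __init__(self, loc, income, savings):
        self.loc, self.income, self.savings = loc, income, savings
        self.food_insecure = False

    def step(self, locations):
        budget = self.income + self.savings
        cost = self.loc.food_price                  # subsistence food cost
        if budget >= cost:
            self.savings = budget - cost
            self.food_insecure = False
        else:                                       # food insecure: consider moving
            self.food_insecure = True
            cheaper = min((locations[n] for n in self.loc.neighbors),
                          key=lambda l: l.food_price)
            if cheaper.food_price < cost:
                self.loc = cheaper                  # internal migration

def earthquake(households, epicenter, loss=0.6):
    # Distance-based loss functions collapsed to a single-location shock.
    for h in households:
        if h.loc.name == epicenter:
            h.income *= (1 - loss)

locations = {"A": Location("A", 10, ["B"]), "B": Location("B", 7, ["A"])}
hh = [Household(locations["A"], random.uniform(5, 15), 5) for _ in range(100)]
earthquake(hh, "A")
for t in range(10):
    for h in hh:
        h.step(locations)
print(sum(h.food_insecure for h in hh), "food-insecure households")
```

Tracking the food_insecure flag per household and per time step is what lets such a model point to where and when aid should be targeted.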
Adverse post-natural disaster outcomes in low-income regions, like elevated internal migration levels and low consumption levels, are the result of market failures, poor mechanisms for stabilizing income, and missing insurance markets, which force the affected population to respond, and adapt, to the shock they face. In a spatial environment with multiple locations with independent but inter-connected markets, these transitions quickly become complex and highly non-linear due to the feedback loops between the micro individual-level decisions and the meso location-wise market decisions. To capture these continuously evolving micro-meso interactions, this paper presents a spatially explicit bottom-up agent-based model to analyze natural disaster-like shocks to low-income regions. The aim of the model is to temporally and spatially track how population distributions, income, and consumption levels evolve, in order to identify low-income workers that are "food insecure". The model is applied to the 2005 earthquake in northern Pakistan, which caused catastrophic losses and high levels of displacement in a short time span and, with market disruptions, resulted in high levels of food insecurity. The model is calibrated to pre-crisis trends and shocked using distance-based output and labor loss functions to replicate the earthquake impact. Model results show how various factors, like existing income and saving levels, distance from the fault line, and connectivity to other locations, can give insights into the spatial and temporal emergence of vulnerabilities. The simulation framework presented here leaps beyond existing modeling efforts, which usually deal with macro long-term loss estimates, and allows policy makers to devise informed short-term policies in an environment where data are non-existent, policy response is time dependent, and resources are limited.
the energy analysis point of view, the 14 Wh achieved at the last Pee Power field trial at Glastonbury 2017 would be equivalent to 0.23 British pence of energy saving for every kWh. This is based on the raw power data produced during the field trial, where the MFC stack was operated on neat human urine, and does not take into account the saving that would be gained for every litre of wastewater treated. Further understanding of ion transport selectivity and economic membrane preparation methods is vital to enable wider employment of ion exchange membranes in technical processes for sustainable development. Further progress is needed to provide field equipment that is more robust and reliable over time, as well as the development of novel energy storage and energy harvesting methods. In energy storage, the use of external capacitors has been implemented in numerous practical applications; however, the integration of internal supercapacitors could be a novel way to boost and/or control the output. The knowledge built on the existing pilot studies and implementation attempts is driving the innovation towards wider acceptance and market uptake. Only one type of bioelectrochemical system (BES) can break down waste and generate electricity, and that is the MFC. Future advances should be focused on the technology's applicability and the system design in order to meet the criteria of high performance and low cost in real-world conditions. The general trend for future MFC scale-up is, firstly, a reduction in the size of units together with a multiplication of the total number of units, through modularity, as a way of overcoming transport limitations and ohmic losses instead of enlarging a single unit. Secondly, it is the design of the scaled-up units, compacting the system footprint to achieve high power densities while at the same time keeping the system functional and thus applicable in real-life scenarios. Thirdly, to ensure the longevity of the system and its components, both internal and external elements should be resistant to biofouling, scaling and corrosion. Finally, new developments should include MFC power management systems and the incorporation of energy harvesting and storage systems, such as supercapacitors, in order to enhance system performance for practical use.
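As a back-of-envelope check on the quoted figure, 14 Wh corresponds to roughly 0.23 pence only if one assumes a retail electricity price of about 16.4 p/kWh; the underlying tariff is not stated in this excerpt, so the price below is our assumption.

```python
# Back-of-envelope check of the quoted saving, assuming a retail
# electricity price of ~16.4 p/kWh (our assumption; the tariff
# behind the 0.23 pence figure is not stated in this excerpt).
energy_wh = 14                       # per the Glastonbury 2017 field trial
price_p_per_kwh = 16.4               # assumed UK tariff, pence per kWh
saving_pence = energy_wh / 1000 * price_p_per_kwh
print(f"{saving_pence:.2f} pence")   # -> ~0.23 pence
```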
This short review focuses on the recent developments of the Microbial Fuel Cell (MFC) technology, its scale-up and implementation in real world applications. Microbial Fuel Cells produce (bio)energy from waste streams, which can reduce environmental pollution, but also decrease the cost of the treatment. Although the technology is still considered "new", it has a long history since its discovery, but it is only now that recent developments have allowed its implementation in real world settings, as a precursor to commercialisation.
tinnitus on quality of life. The THI questionnaire is currently one of the most accepted methods for assessing tinnitus, having been validated in several consensuses.32 This study also used the THI questionnaire to assess the quality of life of patients with tinnitus, as it has been used in several other studies.32,33 The present study demonstrated statistically significant differences in THI questionnaire scores between the acupuncture group (AG) and control group (CG) between the initial and final evaluations. This study showed that Chinese scalp acupuncture has an effect in relieving tinnitus symptoms. Other non-specific factors unrelated to the vestibulocochlear line of Chinese scalp acupuncture may be related to this effect, such as induction by the patient's subjectivity and the increased attention given by doctors to the study patients.9 The significant levels of improvement justify the use of this technique. This study shows that the technique is safe and does not cause any side effects for patients. However, more studies are required to establish other possible effects of Chinese scalp acupuncture on the auditory system. Among the limitations of this study is the lack of follow-up. Thus, it was not possible to assess whether the acupuncture program had a medium- or long-term effect on the perception of tinnitus, the decrease in its intensity, and the improved quality of life of individuals with tinnitus, nor whether the effectiveness obtained was palliative or therapeutic. Another limitation was the possibility of a placebo effect of the acupuncture intervention, since the control group did not undergo any intervention to obviate this possibility; thus, the effect of placebo treatment cannot be ruled out in cases of tinnitus. Furthermore, although this study followed the sample size calculation, it would be prudent to perform a study with a larger number of participants, i.e., to ensure a safety margin considering possible losses occurring throughout the study. More studies with high methodological quality and low risk of bias are required to assess the effect of acupuncture on the treatment of tinnitus. The rules of the CONSORT statement must be followed and described in detail, as well as the sample size calculation, so that a systematic review with meta-analysis can be performed. The patients showed significantly improved tinnitus perception. The technique of Chinese scalp acupuncture associated with bilateral electroacupuncture showed a statistically significant improvement in reducing the tinnitus intensity level, as well as improving the quality of life of patients with tinnitus, in the short term. The authors declare no conflicts of interest.
Introduction Tinnitus is a subjective sensation of hearing a sound in the absence of an external stimulus, which significantly worsens the quality of life in 15–25% of affected individuals. Objective To assess the effectiveness of acupuncture therapy for tinnitus. Methods Randomized clinical trial (REBEC: 2T9T7Q) with 50 participants with tinnitus, divided into two groups: 25 participants in the acupuncture group and 25 participants in the control group. The acupuncture group received acupuncture treatment and the control group received no treatment. After a period of 5 weeks, participants were called to perform the final evaluation, and the control group then received acupuncture treatment for ethical reasons. Results A statistically significant result was found for the primary outcome, reduction in the intensity of tinnitus (p = 0.0001), and for the secondary endpoint, improvement in quality of life (p = 0.0001). Conclusion Chinese scalp acupuncture associated with bilateral electroacupuncture demonstrated, in the short term, a statistically significant improvement by reducing the level of tinnitus intensity, as well as improving the quality of life of individuals with tinnitus.
this, it is found that the aquifer system lacks a clay layer that could serve as a natural barrier to protect groundwater from such imminent fluoride sources. The industrial zone is underlain by pumice and highly fractured basalt. These areas are highly exposed to industrial waste. Hence, if waste is dumped as in the "Chalalak" wetland, the pollutant chemicals can easily join the fractured aquifer system, which is a cause of fluoride-rich water. In these water-bearing formations there is a great possibility of surface pollution reaching the aquifer. On the other hand, in the far southern part, where the deepest wells in this study area were located, the water striking points were also found deeper, and the major aquifers are formed of coarse, medium and fine-grained sand. The fine-grained sand is dominant in the layer from 184 m to 196 m below the surface, where the water-bearing formation is located. The top non-water-bearing strata, from 18 m to 166 m below the surface, are dominated by weathered pumice/igneous rock, which is a rich source of fluoride. However, as the water comes only from sandy formations, it is of good quality with respect to its F− and TDS concentrations. In general, pumice and highly fractured and weathered scoriaceous formations dominate the water-bearing strata, and fine- to coarse-grained sand is the main water source at depths beyond 100 m. Thus, the lack of confining rocks and/or a clay layer in the middle strata indicates that groundwater occurs under unconfined conditions in the weathered basalts. The analysis of fluoride levels and total dissolved solids indicated that fluoride ion concentrations range from 0.65 mg/l to 11 mg/l, whereas total dissolved solids range from 170 to 725 mg/l. The results for the two water quality parameters showed that the respective concentration values are somewhat higher at shallower depths. In the shallower wells, the values did not satisfy the standard set for drinking water. In most cases, the tabulated results showed that groundwater from shallow aquifers was rich in fluoride ions. This could be associated with the geologic formation of the area. As discussed in the previous section, the dominant water-bearing formations of these areas are weathered pumice/igneous rocks and volcanic rocks, accompanied by high temperature. Hence, these rocks, along with the high temperature, enable the chemical reactions that form fluoride ions. Groundwater sampled from wells in such formations showed high fluoride ion concentrations of up to 11 mg/l. This indicates that the major source of fluoride ion pollution in the study area could be the inherent properties of the water-bearing formation. On top of that, evaporation due to the high temperature in the area could further increase the fluoride ion concentration in the shallow aquifer. For instance, in areas like the Hawassa Flour Factory, there are high fluoride concentrations of 11, 9, 8 and 8.7 mg/l, respectively. These wells are located in shallow water-bearing strata where weathered pumice/igneous rocks, a rich source of fluoride, are dominant. Though shallow aquifers containing recent infiltration of rainwater normally have low fluoride content, the fluoride concentration of shallow aquifers increases as a consequence of temperature-enhanced fluorite solubility and hydrogen fluoride gas dissolution due to hydrothermal activity in active volcanic areas such as the study area. Generally, deeper aquifers have better water quality. Therefore the water from the deep aquifer is
of good quality, fulfilling the Ethiopian standard in general, and those wells of >80 m depth fulfill WHO standards even more securely when other conditions, like the climate and feeding habits of the consumers, are considered. Using SPSS, statistical analysis was conducted to test the significance of the observed parameters. For both parameters, the fitted non-linear model is strongly significant at α < 0.001 with a 95% confidence interval. The results clearly show that the greater the well depth, the lower the fluoride and TDS concentrations. As depicted by the water quality of Gara Riqata, the top non-water-bearing formation is dominated by weathered pumice rock, which is the major source of fluoride-rich water. However, since the depth of the well/water-bearing stratum, which is dominated by fine sand, is below such formations, it was possible to obtain water of good quality in terms of fluoride and TDS concentration. Indeed, water-rock interactions commonly occur during the evolution of a basin. Especially in deep burial strata, under conditions where the groundwater is rich in carbonates and/or bicarbonates, the dissolution of fluoride ions can take place during water-mineral interaction as a major source of the ion in the groundwater, as expressed in the following reactions:

CaF2 + Na2CO3 → CaCO3 + 2F− + 2Na+
CaF2 + 2NaHCO3 → CaCO3 + 2Na+ + 2F− + H2O + CO2

In the above reactions, the bicarbonate-rich water in a weathered rock formation accelerates the dissolution of CaF2 to release fluoride into the groundwater over time. Minerals rich in calcite also favor the dissociation of fluoride from fluoride-rich minerals, as expressed in the following reactions:

CaCO3 + H+ + 2F− → CaF2 + HCO3−
CaF2 → Ca2+ + 2F−

Certain sensitive factors behind high fluoride concentrations in groundwater include crystalline rocks, especially granites of alkaline nature with calcium deficiency. The fluoride content of groundwater during mineral dissolution is governed by the solubility of CaF2, whereas the solubility of calcite and fluorite controls the dissolution of Ca2+ in groundwater. Among all fluoride-rich minerals, fluorite is the most abundant and occurs in almost all rocks and detrital minerals, which is true for the water-bearing formations in the study area. Most of the rocks in the study area are weathered, which
Study region: The Main Ethiopian Rift valley (MER) region, where millions rely on fluoride-contaminated drinking water that is by far higher than the WHO standard, resulting in skeletal and tooth decay. The concentration of fluoride ions, ranging from 0.65 mg/l to 11 mg/l, is significantly influenced by the geochemistry. Higher temperature in the shallow aquifer, along with geological processes like the weathering of rocks and the dissolution of CaF2, promotes the concentrated availability of fluoride ions.
reveals the occurrence of hydrolysis, dissociation and dissolution. Hence, the chemical reaction processes explained above could be associated with the high fluoride ion dissolution and pollution of the groundwater in the study area. The spatial distribution analysis of fluoride ion concentration showed that the northwestern part of the study area was beyond the allowable standard set for drinking water, whereas the southern side of the study area shows very low concentrations of both fluoride and TDS. This is the area that has been identified as a major groundwater recharge zone, because of its low electrical conductivity and high pH values. The eastern side has moderately low concentrations, while the northern corner and the lakeshore areas exhibit high concentrations of both parameters. Hence, on the basis of fluoride and TDS concentration, the study area can be subdivided into three zones: a high fluoride and TDS zone (the northern part and the lakeshore), a low fluoride and TDS zone, and a medium fluoride and TDS zone covering part of Hawassa city. The groundwater in the medium and high zones is not suitable for drinking because of its fluoride concentration. As a result, the groundwater of those areas is currently used for sanitation, swimming pools and gardening purposes alone. With respect to well depth, the general trend shows a decrease of fluoride ion concentration with depth, as clearly indicated by the results, where shallower aquifers have elevated fluoride concentrations and deeper aquifers have very low fluoride. Consequently, in areas where the calcium ion concentration is high due to the prevailing geochemical processes, calcium may play a critical role in determining the fluoride content of the groundwater. As reported by the Ohio EPA, Division of Drinking and Ground Waters technical series, fluoride levels have been shown to decline as water travels from the surface through the soil zone and into groundwater. In a very similar fashion, the TDS values range from 170 mg/l to 700 mg/l (Tables 2 and 3 and Fig.
3). All the TDS values are in the potable range of water quality standard values, with the deeper formations being more suitable. However, with the existing information it was not possible to assess the spatial variability of the suitability of groundwater for agricultural and industrial use; rather, only the general trend in groundwater suitability for drinking water is presented in this study. To evaluate the relationships between well depth and fluoride ion concentration, well depth and TDS, and fluoride ion and TDS, correlation analyses were conducted. In the statistical analysis, the independent variable is borehole depth and the dependent variable is fluoride ion concentration. This means it is possible to estimate the fluoride concentration at different depths using the model, which explains 78 percent of the variation. Water is a solvent capable of dissolving and interacting with organic and inorganic components of soils, the minerals that make up unconsolidated deposits, and various types of bedrock. Dissolution of minerals within the soil, sediment, and bedrock is a slow process that can take days, years, or eons, depending on the solubility of the materials. These materials contribute to the amount of total dissolved solids present in all groundwater. Major dissolved constituents of groundwater include the cations sodium, potassium, calcium and magnesium, plus silica, and the anions bicarbonate, sulfate, chloride, and nitrate. For TDS, the independent variable is again borehole depth and the dependent variable is TDS; the corresponding model enables estimation of the TDS value at the required depth, explaining 68 percent of the variation. The resulting regression analysis indicated an inverse (negative) correlation of both fluoride ion concentration and TDS with borehole depth. The proportion of variation explained by the exponential model is strong: the model explains 78 percent and 68 percent of the variation in F− and TDS, respectively, at α < 0.001. This indicates that water extracted from deeper aquifers is of good quality. In general, the output of the study indicated that shallow hand-dug wells in Hawassa city are highly exposed to the chemical reactions that can result in fluorine dissolution, to high temperature, and to easy contamination from surface sources. In most of the fluoride- and TDS-affected areas, the water-bearing formation was dominated by crystallized rocks, highly weathered pumice, volcanic ash, fractured and weathered basalt, weathered rhyolite, scoria and other igneous rock formations. Areas with such formations and high temperature, which facilitate the dissolution of fluoride ions, are highly exposed to fluoride contamination, so using the water for drinking purposes carries a high health risk. However, there are areas like Gara Riqata whose aquifer layers lie below such formations and are characterized as safe with respect to fluoride ion and TDS; thus, their water is free of the fluoride problem. Therefore, either drilling to deeper depths in the aquifer system, beyond where such CaF2-rich sources occur (especially in areas of high geothermal energy), or blinding that thickness of the aquifer could alleviate the problem of fluoride contamination in the area.
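The exponential depth-concentration relation described above can be fitted by non-linear least squares, as in the hedged sketch below; the well depths and fluoride values are illustrative stand-ins, not the study's measurements.

```python
# A sketch of the depth-concentration regression described above:
# an exponential decay of fluoride with borehole depth, fitted by
# non-linear least squares. The well data below are illustrative,
# not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

depth = np.array([30, 45, 60, 80, 100, 140, 190], float)    # m (hypothetical)
fluoride = np.array([11.0, 8.7, 5.2, 2.4, 1.6, 0.9, 0.7])   # mg/l (hypothetical)

def model(d, a, b):
    return a * np.exp(-b * d)          # F = a * exp(-b * depth)

(a, b), _ = curve_fit(model, depth, fluoride, p0=(15.0, 0.02))
pred = model(depth, a, b)
ss_res = ((fluoride - pred) ** 2).sum()
ss_tot = ((fluoride - fluoride.mean()) ** 2).sum()
print(f"a={a:.2f}, b={b:.4f}, R^2={1 - ss_res / ss_tot:.2f}")
```

The same fit with TDS as the dependent variable would reproduce the second model; the inverse correlation appears as a positive decay constant b.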
Particular emphasis is given to the spatial distribution of fluoride ion (F−) and total dissolved solids (TDS), applying the SPSS (Statistical Package for the Social Sciences) statistical tool. New hydrological insights for the region: The major water-bearing formation consists of weathered and fractured geologic formations with high porosity and permeability, which results in a risk of surface contamination of the shallow groundwater. The deeper the strata, along with igneous formations dominated by pumice, the lower the concentration, showing a strong inverse correlation with depth for both F− and TDS, with R² = 0.78 and R² = 0.68, respectively, at α < 0.001. Either drilling wells beyond such formations (≥60 m) or blinding the poor-quality strata is recommended to minimize the effect of high fluoride and TDS concentrations in drinking water from the Hawassa city aquifer.
are undamaged. In the experiments containing photocatalyst, only those exposed to light show significantly damaged cells. It was observed that, after removal from suspension, cells appear to be attached and agglomerated onto the catalyst. If this occurs during the photocatalytic experiments it may affect the number of countable CFUs by underestimating the viable cells in the plating measurements, especially in the D experiment, where it seems many live cells are agglomerated to the photocatalyst. It is also apparent that direct contact with the photocatalyst is not toxic to E. coli, as live cells are seen agglomerated to photocatalyst not exposed to light. Only the photocatalyst exposed to light showed significant numbers of dead cells, confirming that the killing of the bacteria is mostly photocatalytic. If there is an affinity between the bacteria and the solid photocatalyst, this may add an additional mechanism for removing harmful organisms by simple filtration of the photo-active particles. The photocatalyst could therefore be classed as a "bioactive filter", similar to the well-known silver-deposited activated carbon fibres proposed for use in industrial processes such as drinking water filtration. Contamination of water by organic molecules and bacteria is a growing ecological and societal issue, leading to a need for simple and effective methods of decontamination. Here we demonstrate a one-step hydrothermal synthesis of SrTi1−xRhxO3, giving a high-surface-area material displaying efficient visible-light-activated photocatalytic activity towards organic dye degradation and anti-microbial properties in aqueous suspension. When x ≤ 0.05, the Rh dopant is predominantly in the Rh3+ oxidation state in the bulk of the material, resulting in a very efficient visible-light-activated photocatalyst. 5-RhSTO is shown to completely oxidise a 0.02 g/L solution of methyl orange within 30 min under visible light illumination, which is comparable to the activity of P25 TiO2 under UV illumination of similar intensity, and more active than doped TiO2 materials showing visible light activity. It is also shown that 5-RhSTO can act as an anti-microbial material to inhibit the growth of E. coli in aqueous suspension, effectively killing the bacteria within 6 h of visible light exposure. The material shows potential for the visible-light-activated photo-decontamination of water containing organic molecules and bacteria.
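Dye degradation results of this kind are commonly summarised with a pseudo-first-order rate constant; the sketch below shows that standard analysis on hypothetical absorbance readings, not the measured data of this work.

```python
# Pseudo-first-order analysis commonly applied to photocatalytic dye
# degradation: ln(C0/C) vs. time should be linear with slope k. The
# absorbance readings below are illustrative, not measured data.
import numpy as np

t = np.array([0, 5, 10, 15, 20, 25, 30], float)          # min under visible light
absorbance = np.array([1.00, 0.62, 0.38, 0.23, 0.14, 0.08, 0.05])  # ~ C/C0

y = np.log(absorbance[0] / absorbance)                   # ln(C0/C)
k, intercept = np.polyfit(t, y, 1)                       # slope = rate constant
print(f"k = {k:.3f} min^-1, t1/2 = {np.log(2) / k:.1f} min")
```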
A modified hydrothermal synthesis, avoiding high-temperature calcination, is used to produce nano-particulate rhodium-doped strontium titanate in a single step, maintaining the rhodium in the photocatalytically active +3 oxidation state, as shown by X-ray spectroscopy. The photoactivity of the material is demonstrated through the decomposition of aqueous methyl orange and the killing of Escherichia coli in aqueous suspension, both under visible light activation. A sample of SrTiO3 containing 5 at% Rh completely decomposed a solution of methyl orange in less than 40 min, and E. coli was deactivated within 6 h under visible light irradiation.
when conventional hospital foam mattresses are used, but such mattresses were outside the scope of this study. In the open systems, represented here by the Nimbus®, deflation of the mattress with the CPR function seems, however, to improve the stability of resuscitation, especially in situations where the height of the air mattress exceeds 20–25 centimeters. Resuscitation with a closed air system, e.g., the Carital® mattress, seems to produce an optimal combination with respect to stability and effort. Use of the CPR function in this type of mattress turns the mattress air function into an open system, and this only impairs performance during resuscitation. Limitations of the study include the relatively low number of test persons and the lack of randomization, which was omitted for practical reasons, i.e., the 20 min between each CPR session could not have been guaranteed. The test persons were not given feedback during the resuscitation sessions, which might have influenced the results for compression frequency and depth, i.e., the quality of the rescuer's CPR. A feedback device to measure the total compressions delivered by the rescuer could have been introduced. However, these limitations most probably do not have a major effect on the results and conclusions. Still, it needs to be remembered that resuscitation is rarely carried out on a hard floor, but rather on a bed; furthermore, the rescuer is then not kneeling but standing. Esa Soppi, Ansa Livanainen: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. Leila Sikanen, Elina Jouppila-Kupiainen: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data. The authors received no funding from an external source. Carital Group funded the statistical analysis, which was performed by an independent company, 4Pharma Ltd. The authors declare the following conflict of interests: Esa Soppi is the Chairman of the Board of Carital Group. No additional information is available for this paper.
The relationship between the efficacy of resuscitation and the mattresses and backboards used in acute care units has been studied previously. However, few reports focus on the relative efficacy of resuscitation when using mattresses with different modes of function. This study examines the performance of different support surfaces during experimental cardiopulmonary resuscitation (CPR). The surfaces included a hard surface, a higher specification foam mattress, a dynamic, alternating pressure mattress, and a dynamic, reactive minimum pressure air mattress system. A pressure-sensitive mat was placed between the mattresses and each surface, and the efficacy of resuscitation was measured using differences in compression frequency, compression depth and hands-on time. Our results suggest that the efficacy of resuscitation is dependent on the mode of action of the mattress, while adequate compression frequency and depth do not have a significant effect. In the open-system alternating mattress, deflation of the mattress using the CPR function improved the stability of the resuscitation in our study, especially in situations where the height of the air mattress is greater than 20–25 centimeters. Using our experimental system, resuscitation on a closed air system mattress optimally combined stability and effort, while the CPR function converts the air system of the mattress to open, which impairs its functionality during resuscitation. These results indicate that resuscitation is dependent on the mode of action of the mattress and on whether the mattress-specific CPR function was used or not. However, the interactions are complex and depend on the interaction between the body and the mattress. Furthermore, this study casts doubt on the necessity of the CPR function in air mattresses.
or even a publicist of innovation. Future studies could try to differentiate the role and impact of the orchestrators of innovation in destinations from the role and impact of actual innovators, and so test whether public sector actors in destinations can only facilitate but not innovate, or whether tourism offices or other destination decision makers can be more integrally a part of innovation co-creation and implementation. Finally, our mixed results concerning the value of knowledge redundancy and relational trust suggest the need for further study to isolate the mechanisms at work. Drawing from both the literature and our findings, we would suggest that boundary spanning leads the discovery and introduction of new ideas, familiarity through daily collaboration enhances the flow of ideas, and knowledge redundancy combines with relational trust to provide the absorptive capacity necessary to apply new knowledge effectively. If this is so, then one question concerns the optimum proportion of redundancy and reach in generating destination-wide and firm-level innovation. Similarly, although our methods allowed us to separate network structure measures of flows from direct measures of composition, more needs to be done to sort out the direction of causality. To pursue either of these avenues of inquiry, historical or prospective longitudinal studies would be useful to explore the possibility of some sort of spiral of destination innovation arising from the interaction of knowledge diversity, network position and pattern, and trust-modulated flows. When doing such studies, it would be important to pay extra attention to collecting as complete information about the networks under study as possible; as with all network studies, our findings are limited to the network we were able to map and cannot be considered representative of some larger population. For now, we hope that our study provides new insight into the influence of network structure in bringing in and spreading new ideas through a destination, and of familiar, shared-knowledge, perhaps trusting relationships in applying those ideas to specific tourist experiences.
We combine network structure and firm-level relationship measures to explore the association between innovative behavior, firm position within the network of a destination, and the knowledge and relational trust characteristics of a firm's innovation-oriented relationships. We find that current collaboration, shared knowledge and trust are associated with innovative behavior with partner firms, but that betweenness centrality indicates which partners are the most prominent innovators in a population. That is, relationship-level characteristics facilitate innovation partnerships, but network structure characteristics identify the most successful innovative partners. For theory, our findings contribute to efforts in the tourism, innovation and network literature to evaluate the differential effects of knowledge stocks and flows on innovation. For practice, our results suggest that promoters of innovation within a destination should leverage brokerage positions to improve the in-flow of ideas while encouraging the firms that share knowledge and trust to collaborate to apply those ideas.
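Betweenness centrality, the structural measure highlighted above, can be computed with standard graph libraries. The toy destination network below is invented for illustration and does not reproduce the study's data.

```python
# Betweenness centrality flags the firms that broker knowledge flows
# between otherwise disconnected parts of a destination network.
import networkx as nx

edges = [("hotel_A", "tour_operator"), ("hotel_B", "tour_operator"),
         ("tour_operator", "dmo"), ("dmo", "restaurant"), ("dmo", "museum")]
g = nx.Graph(edges)

bc = nx.betweenness_centrality(g, normalized=True)
for firm, score in sorted(bc.items(), key=lambda kv: -kv[1]):
    print(f"{firm}: {score:.2f}")
```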
Articular cartilage is a load-bearing material found at the articulating ends of bones within joints of the body. Smooth joint motion is a result of the low friction at joints of the body, aided by a surface roughness of ~100 nm for articular cartilage. Osteoarthritis (OA) includes the degeneration of cartilage, leading to poor joint motion which is typically painful. Rapid heel-strike rise times during gait have been implicated in the onset of OA. These rapid heel-strike rise times were as low as 5–25 ms for the subset of the population potentially predisposed to OA. This is in contrast to estimated typical rise times of around 100–150 ms for otherwise healthy gait during walking. This rate of loading is important to the mechanical behaviour of cartilage, because its mechanical properties are rate dependent: cartilage is viscoelastic. Viscoelastic materials can be characterised in terms of a storage, E', and a loss, E'', modulus, while a viscoelastic structure can be characterised in terms of a storage, k', and a loss, k'', stiffness. E' characterises the ability of the material to store energy for subsequent elastic recoil, whereas E'' characterises the ability of the material to dissipate energy. The viscoelastic properties of cartilage have been characterised over frequencies ranging from typical gait frequencies up to frequencies representative of rapid heel-strike rise times. The implication was that cartilage, on-bone, undergoes a glass transition at around 10–20 Hz, with a frequency-independent loss modulus but a storage modulus which increases with frequency. Subsequently, Sadeghi et al. determined that frequency, independent of load, was significantly correlated with increased failure of articular cartilage. The mechanism proposed, consistent with the hypothesis provided by Fulcher et al., was that at higher frequencies the storage modulus increases but the loss modulus remains constant. Thus, the ability of the tissue to store energy is greater at higher frequencies. This increased energy, past a certain point, predisposes the tissue to undergo failure, thereby dissipating energy within the tissue. Frequencies above a proposed glass transition appear to be of particular concern regarding failure. Induced stresses in articular cartilage have been estimated to range from 1 to 6 MPa for moderate activities such as walking, with peak stresses estimated to reach up to 10.7 MPa for stair-climbing and 18 MPa for rising from a chair. This is in comparison to induced stresses of around 1–1.7 MPa estimated for hip and knee joints during 'ambulatory' activities, i.e.
walking. The material properties of cartilage have previously been found to be strain-dependent. However, the relationship was not linear, instead resembling a U-shaped relationship. Different stress levels imply different strains, and potentially a different mechanical response to loading. The relevance, though, of stress to the dynamic viscoelasticity of cartilage is currently unknown. The juxtaposition of cartilage and bone means that a change in one will lead to a change in the stress generated in the other; hence the relevance of understanding the interactions between articular cartilage and bone. The underlying subchondral bone to which cartilage is attached has a restrictive effect on cartilage and prevents lateral displacement at the base of the tissue. For example, it has been suggested that the underlying bone would attenuate the increased energy dissipation with loading velocity observed off-bone. The extrapolated implication is that cartilage on- and off-bone have different frequency-dependent loss moduli. This inference appears to be consistent with the finding that cartilage off-bone has a frequency-dependent loss modulus, as opposed to a frequency-independent one when on-bone. However, differences between testing procedures could make this inference invalid; for example, the testing of cartilage samples in air as opposed to within a hydrating solution, since hydration alters the viscoelastic properties and predisposition to failure of articular cartilage. The aim of this study was to determine the effect of the induced stress and the restraint provided by the underlying bone on the frequency-dependent viscoelastic properties of articular cartilage. Some tests were performed on cartilage in a hydrating fluid and others in air, in order to understand the limitations of comparing published studies performed under these different conditions. Except for bone restraint, viscoelasticity has been analysed in terms of E' and E''. Bone restraint has been analysed in terms of k' and k'', since the combination of cartilage and bone is a structure and not a material. Three bovine femoral heads and eight bovine humeral heads, approximately 18 to 30 months old, were obtained from a supplier; bovine cartilage is a suitable model for the dynamic viscoelasticity of human cartilage. Specimens were wrapped in tissue paper and saturated in Ringer's solution on arrival in the laboratory. Specimens were then stored in a freezer at −40 °C and thawed for 12 h before testing. Freeze-thaw treatment does not alter the dynamic mechanical properties of articular cartilage. Large-scale damage of the cartilage on joints was not evident. However, India ink was used to ensure that only intact surfaces were used for testing, because surface cracks alter the mechanical properties of articular cartilage. Sixteen cylindrical test specimens were obtained using a cork borer, with a medical scalpel used to isolate the cartilage from the subchondral bone. The specimens were 5.2 mm in diameter, but varied in thickness. The angle δ is the phase difference between the applied compressive force and the displacement. A 20 mm diameter compression plate was used to compress articular cartilage specimens. This DMA frequency sweep was used for the three different testing procedures described in Section 2.3. The DMA frequency sweep was applied under three distinct testing protocols which focused on test specimens: in air and in Ringer's solution; loaded under different levels of sinusoidal loading to vary the induced stress; and on- and
off-bone. For testing protocol-1, 8 test specimens
The aim of this study was to determine the effect of the induced stress and restraint provided by the underlying bone on the frequency-dependent storage and loss stiffness (for bone restraint) or modulus (for induced stress) of articular cartilage, which characterise its viscoelasticity. Dynamic mechanical analysis has been used to determine the frequency-dependent viscoelastic properties of bovine femoral and humeral head articular cartilage.
were tested following the DMA procedure in air or in Ringer's solution. To enable a paired comparison, each individual test specimen was tested under both conditions, with half the test specimens tested first in air and the other half first in Ringer's solution. Between tests, each specimen was allowed to rest and recover while saturated in Ringer's solution for 30 min; this ensured cartilage returned to a hydrated state prior to the subsequent test, consistent with the literature. A sinusoidal compressive force of between 16 and 36 N was applied. Peak loading induced maximal stresses of 1.7 MPa, estimated as physiological for lower-limb cartilage during walking. For testing protocol-2, 8 test specimens from the humeral head were tested in air following the DMA procedure with three different sinusoidal loading ranges: 2–22 N, 16–36 N and 65–85 N. This induced three different ranges of dynamic stress. To enable paired comparisons, each specimen was tested under the three loading ranges, with the order of testing randomized using the Excel random function. For testing protocol-3, 8 test specimens were obtained from humeral heads and tested on-bone and then off-bone. These samples were not cut using a cork borer but using a hollow drill-head attached to a drill, giving cylindrical cartilage-on-bone specimens 4.1 mm in diameter. These specimens underwent the DMA procedure outlined above in Section 2.2, first on-bone and subsequently after using a medical scalpel to isolate the cartilage from the bone. For cartilage specimens both on- and off-bone, a sinusoidal compressive force of between 10 and 24 N was applied. This loading range induced a maximal stress of 1.8 MPa, comparable to the estimated cartilage walking peak stress of 1.7 MPa. Storage and loss stiffness were used in test protocol-3, as measuring the thickness was not feasible. To understand the effect of stress on the potential failure of cartilage, the ratio of storage modulus to loss modulus was calculated for every frequency for test protocol-2. For protocol-3, the ratio of storage stiffness to loss stiffness was calculated to understand how the restraint of bone affects the potential failure of cartilage. Wilcoxon signed-rank tests were used to compare the E' and E'' of cartilage when tested in air versus in Ringer's solution. Wilcoxon signed-rank tests were also used to compare the k', k'' and k'/k'' of cartilage on- and off-bone. A Friedman repeated-measures analysis of variance on ranks was performed to evaluate the differences between cartilage specimens tested at different stress ranges. If the Friedman test showed a significant difference between the groups, a Student-Newman-Keuls multiple comparison test was used to determine the differences between the groups in relation to E', E'', E'/E'' and δ. Results of statistical tests with p < 0.05 were considered significant. Testing articular cartilage off-bone, in either air or Ringer's solution, did not alter the general logarithmic trend of either E' or E'' in relation to frequency. E' was not significantly different between cartilage tested in air and in Ringer's solution at any frequency tested. Likewise, E'' was not significantly different between cartilage tested in air and in Ringer's solution at any frequency tested. The viscoelastic response of articular cartilage varied with the induced stress. Increasing the induced sinusoidal stress from low stress to high stress led to a significant increase of E' by 3.8 to 4.9 times. The logarithmic trend of the frequency-dependency of E'
did not change at low, walking or high stress; it was offset between the groups. C did not vary; however, D increased from 19.6 to 102 MPa as the induced stress increased. E'' varied with the induced stress, with significantly higher values of E'' for the walking stress than for the low and high stresses. There was no significant difference in E'' between low and high stress. The frequency-dependent E'' was logarithmic for the low and walking stress ranges tested; however, E'' did not follow a frequency-dependent logarithmic trend for the high stress range. CL did not vary between the low stress range and the walking stress range. DL increased from 4.9 MPa to 8.1 MPa. Hysteresis loops were larger for the lowest levels of induced stress; the reduction of the area within the centre of the hysteresis loop demonstrates that as the induced stress increases, cartilage dissipates less energy. This corresponded to a much larger phase angle between stress and strain at such stresses, which decreased significantly with increased level of median induced stress; for example, at 1 Hz δ decreased from 13.1° to 3.5° with increasing stress level. The decrease in phase angle as the induced stress increases also highlights an increasingly elastic response. This was accompanied by an increase in E* from 23 MPa at the lowest level of induced stress to 51 MPa at walking induced stresses, increasing to 101 MPa at high induced stresses. E'/E'' was significantly different between the three induced-stress groups at every frequency, increasing with induced stress. At the higher stress range, E'/E'' was 2.90–3.52 times greater than E'/E'' for the walking stress range. For the walking stress range, E'/E'' was 1.29–1.66 times greater than for the low stress range. The median k' ranged from 548 N/mm to 706 N/mm on-bone and from 544 N/mm to 732 N/mm off-bone, and k' was logarithmically frequency-dependent both on- and off-bone (Table 4). For all frequencies tested, k' was not significantly different on- and off-bone. The frequency-dependency of k'' varied for articular cartilage when on- and off-bone. On-bone, articular cartilage k'' was frequency independent. However, off-bone, articular cartilage k'' demonstrated a frequency-dependency. Regression analysis demonstrated that this frequency-dependency could be described empirically by a logarithmic equation. k'', off-bone, was significantly greater than on-bone.
The phase angle decreased significantly (p < 0.05) as the induced stress increased, reducing from 13.1° to 3.5°. The median storage stiffness ranged from 548 N/mm to 707 N/mm for cartilage tested on-bone and from 544 N/mm to 732 N/mm for cartilage tested off-bone. On-bone articular cartilage loss stiffness was frequency independent (p > 0.05); however, off-bone, articular cartilage loss stiffness demonstrated a logarithmic frequency-dependency (p < 0.05).
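The paired on-/off-bone comparisons above rest on Wilcoxon signed-rank tests. A minimal sketch with hypothetical loss-stiffness pairs (N/mm), not the study's measurements:

```python
# Paired Wilcoxon signed-rank test, as used for on- vs off-bone comparisons
# of loss stiffness. The eight paired values are invented placeholders.
from scipy import stats

k_loss_on = [52, 48, 55, 50, 47, 53, 49, 51]    # on-bone, N/mm
k_loss_off = [66, 60, 71, 64, 59, 69, 63, 65]   # off-bone, N/mm

stat, p = stats.wilcoxon(k_loss_on, k_loss_off)
print(f"W = {stat}, p = {p:.4f}")  # p < 0.05 -> significant paired difference
```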
of loading. Further, while regions with differing matrix integrity across a joint may have a similar storage/loss ratio, a compromised matrix may well have a lower storage/loss failure threshold. This is likely related to the mechanism by which collagen interacts with the surrounding gel. Alterations to the collagen-gel interaction appear to lead to an increased storage modulus with increased loading, but an altered loss modulus when bone restrains cartilage. Energy transfer mechanisms during plastic deformation of ground substance over collagen fibres/fibrils, or once collagen fibrils/fibres have exceeded a critical length, could both be implicated in failure. There is also the potential for fibre pull-out at the bone-cartilage interface, which may subsequently increase localised hysteresis near the deeper cartilage layers. A hypothesised glass transition appears consistent with an expectation of increased failure with increased frequency of loading, because of the increase in the storage/loss ratio. The importance of the internal swelling pressure of cartilage in resisting compressive stress has some important implications for the results described here. The gel surrounding the collagen fibrils is polyanionic and attracts water by the Donnan effect; this effect leads to cartilage having an internal swelling pressure or 'turgor' that enables it to withstand applied compression. In the resting tissue this pressure is balanced by tension in the collagen fibrils. The collagen fibrils in articular cartilage are oriented so that they are placed in tension by the internal pressure, leading to mechanical equilibrium of the tissue. Increasing the compressive stress applied to the cartilage surface will then increase its stiffness, provided that the collagen network is not damaged, that little fluid is expressed by the tissue and that the viscosity of the tissue is not too great. It might be expected that removing the cartilage from the underlying bone would disrupt the collagen network and so affect the storage stiffness. However, the collagen fibrils in this region are oriented to prevent the swelling pressure from lifting the cartilage off the bone. This effect of the swelling pressure is not likely to be important in a laboratory compression test, so removal of the mechanism for its prevention is unlikely to be important. It might be considered, given the importance of tissue hydration for mechanical properties, that mechanical tests of articular cartilage should be performed in a hydrating fluid. However, this will only be true if fluid expression is an important factor in the mechanical response of cartilage on the time-scale of the tests. Evidence from the response of cartilage to impact loading, and the published values of cartilage permeability, suggests that fluid flow and fluid expression may be less important than is commonly supposed. In this study, stresses were induced in the range of 0.09–4 MPa. The range investigated incorporates low-stress studies, stresses estimated as physiological during walking, and greater, but still physiological, stresses. Induced stresses above 4 MPa, associated with cartilage failure, were avoided. The range investigated was lower than failure stresses associated with creep loading of around 8–10 MPa, or induced during traumatic loading of 10–40 MPa or 25–50 MPa. It is noted that at induced stresses in the range of 50 MPa, Jeffrey and Aspden calculated a 'dynamic' modulus of 170 ± 21 MPa for bovine cartilage. This is higher than the values reported in the present study, in which the
storage modulus did not exceed 114 MPa; however, a higher induced stress would be expected to lead to a higher material rigidity, demonstrating consistency between the premises of the two studies. Equilibrium and aggregate moduli of less than 1 MPa reported elsewhere are orders of magnitude lower than the moduli reported in this current study or in impact studies. However, induced stresses during such creep-based studies are typically below those estimated as physiological during walking for lower-limb cartilage. Results from our current study have demonstrated that at stresses below the 1–1.7 MPa range, the storage modulus decreased significantly, with the force-displacement phase lag increasing up to 15°. Further, it has previously been demonstrated that low loading frequencies also exaggerate this phase lag. Thus, if the effects of creep testing and low induced stresses are additive, cartilage may exhibit viscoelastic behaviour which is far removed from cartilage under walking conditions. The dissipative effects of cartilage will appear enhanced, evidenced by phase angles of ≥15° and a reduced magnitude of the complex modulus. Therefore, it may appear to behave in a more 'viscous' manner than is physiological. Cartilage might appear to be dominated by fluid exudation, while at physiological loading rates the matrix may better approximate an elastic solid. The conclusions from the frequency-dependent viscoelastic properties of articular cartilage are that: articular cartilage is proportionally 'more viscous' at low stress and is, therefore, not representative of its physical behaviour under a physiological stress range; at a high induced stress range, articular cartilage is 'more elastic' in response when compared to the walking stress range; off-bone articular cartilage has a greater ability to dissipate energy and its loss stiffness is frequency-dependent, while on-bone articular cartilage is frequency-independent; and there is no significant difference in the viscoelastic properties, in relation to frequency, of articular cartilage whether tested, for short tests, in air or in Ringer's solution.
A sinusoidal load was applied to the specimens and the out-of-phase displacement response was measured to determine the phase angle and the storage and loss stiffness or modulus. As the induced stress increased, the storage modulus significantly increased (p < 0.05). In conclusion, the frequency-dependent trends of the storage and loss moduli of articular cartilage are dependent on the induced stress, while the restraint provided by the underlying bone removes the frequency-dependency of the loss stiffness.
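In DMA, the storage and loss components follow directly from the amplitude ratio and the phase angle: E' = |E*| cos δ and E'' = |E*| sin δ. The sketch below uses the 13.1° phase angle quoted in the text, while the stress and strain amplitudes are assumed placeholders:

```python
# Storage and loss moduli from a DMA measurement:
#   |E*| = stress amplitude / strain amplitude
#   E'   = |E*| * cos(delta),  E'' = |E*| * sin(delta)
import math

stress_amp_mpa = 1.7   # assumed sinusoidal stress amplitude
strain_amp = 0.033     # assumed resulting strain amplitude
delta_deg = 13.1       # phase lag between force and displacement (from text)

e_star = stress_amp_mpa / strain_amp
e_storage = e_star * math.cos(math.radians(delta_deg))
e_loss = e_star * math.sin(math.radians(delta_deg))
print(f"|E*| = {e_star:.1f} MPa, E' = {e_storage:.1f} MPa, "
      f"E'' = {e_loss:.1f} MPa, E'/E'' = {e_storage / e_loss:.2f}")
```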
proteins identified in wild-type animals showed reduced rates of change in long-lived daf-2 mutants. The majority of proteins accumulated at similar rates in chronologically age-matched wild-type animals versus daf-2 mutants. Stated simply, these proteins did not seem to know that they were in a long-lived mutant. These results agree with our lab's previous study of mRNA changes during early adulthood in daf-2 animals, where, again, the age-specific abundances of many mRNAs changed at a wild-type rate. However, the significance of this was not clear, because these mRNAs might have been linked to reproduction, which takes place during this period, involves two-thirds of the animal's cells, and was known to be fairly normal in these mutants. Our finding that this observation also holds true for protein levels, and extends well beyond the reproductive phase of adulthood, is thus significant. It suggests that the aging rate of much of the age-variant proteome, at least in terms of protein abundance, is unchanged in these long-lived animals. More fundamentally, it implies that an important biological clock still ticks at a normal rate in these animals, in spite of the change in their rates of morphological decline. These mutants probably age more slowly at the organismal level and live long because the relatively small set of cell-protective proteins and proteostasis regulators, known to be expressed constitutively at high levels even in young daf-2 adults, alters the extent to which normal time-dependent processes affect biological aging. In summary, here we describe a highly reproducible proteomic dataset containing >11,000 proteins identified in the adult worm, and we show how these proteins change during aging. Network analysis of our data, together with follow-up in vivo studies to test hypotheses generated using our dataset, has identified impairments in peroxisomal protein import during aging and, surprisingly, the finding that the majority of age-variant proteins do not scale with the rate of biological aging in long-lived insulin/IGF-1-receptor mutants. Additionally, this deep proteomic analysis contributes candidate determinants of biological aging and a large set of proteins whose abundances, to our knowledge, were not previously shown to change with age, including proteins that may serve as biomarkers of aging and can be tested genetically for a causal role. To maximize the value of this resource to the community, all of these data are provided in a convenient, searchable online database. Detailed procedures for SILAC labeling, sub-cellular fractionation, LC-MS/MS and data analysis, microscopy, RNAi treatment, and in vivo assays are included in the Supplemental Experimental Procedures. The N2 strain was used as wild-type. HZ859 was a gift from Hong Zhang and Malene Hansen, GFP-SKL was from Monica Driscoll, and TH184 and TH237 were from Mihail Sarov. All other strains were from the Caenorhabditis Genetics Center. The strains were grown and maintained at 20°C, as previously described, with sufficient food for at least three generations prior to use. The data have been assembled into a searchable online resource with a user-friendly graphical interface, maintained by the A.I.L. lab, to provide convenient and open access to the community. Our data will also shortly be incorporated into WormBase, a resource that is used extensively by the worm community. V.N. conceived and performed the experiments, with input from T.L., A.G., A.I.L., and C.K. V.N., A.I.L., A.G., and C.K. secured funding. V.N. and T.L.
wrote R scripts for data analysis. E.P. and V.N. optimized SILAC protocols for worms. A.B.M. integrated the proteomic data into the EPD. V.N., C.K., A.I.L., T.L., and A.G. wrote the manuscript.
Effective network analysis of protein data requires high-quality proteomic datasets. Here, we report a near doubling in coverage of the C. elegans adult proteome, identifying >11,000 proteins in total, with ∼9,400 proteins reproducibly detected in three biological replicates. Functional experiments confirm that protein import into the peroxisome is compromised in vivo in old animals. We also studied the behavior of the set of age-variant proteins in chronologically age-matched, long-lived daf-2 insulin/IGF-1-pathway mutants. Unexpectedly, the levels of many of these age-variant proteins did not scale with extended lifespan. This indicates that, despite their youthful appearance and extended lifespans, not all aspects of aging are reset in these long-lived mutants.
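The "rate of change" comparison described above amounts to fitting an abundance-versus-age slope per genotype. A sketch with hypothetical log2 abundances, purely to illustrate the comparison:

```python
# Fitting a per-genotype slope of protein abundance against age.
# Ages and log2 abundances are invented placeholders, not study data.
import numpy as np

age_days = np.array([1.0, 5, 10, 15])
wt = np.array([0.0, 0.8, 1.7, 2.5])     # wild-type log2 abundance
daf2 = np.array([0.0, 0.7, 1.6, 2.4])   # daf-2 mutant log2 abundance

slope_wt, _ = np.polyfit(age_days, wt, 1)
slope_daf2, _ = np.polyfit(age_days, daf2, 1)
# Similar slopes illustrate a protein whose aging rate does not scale
# with the mutant's extended lifespan.
print(f"WT slope: {slope_wt:.2f}/day, daf-2 slope: {slope_daf2:.2f}/day")
```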
input ratio, however, did not have any significant effect, as can be seen from the two cases of Umix = 0.78 and 1.09 m/s, where different input ratios were studied. The wave frequencies were also studied with the double-wired conductance probe for the few conditions where no drops were present. There is good agreement with the results from the high speed imaging, with frequency peaks at 26 Hz and 33 Hz for mixture velocities of 0.62 m/s and 0.78 m/s, respectively. From the measured wave frequencies the Strouhal number was calculated using the actual water velocity, computed based on the average interface height. The mean value for the 7 flow conditions investigated was 0.24 with a 9% standard deviation. This number is in good agreement with the literature value of 0.2 for vortex shedding behind a cylinder in single-phase unbounded flows. The agreement suggests that the interfacial waves are caused by the von Karman vortices generated by the rod. The agreement was good even for the cases where small KH waves formed at the inlet for input ratios r different from 1. It seems that, at least for the flow conditions studied, vortices generated by the rod dominated over the KH waves. In addition, the pipe wall does not affect the vortex frequencies. The distance between the rod and the pipe wall varies from 0 to 6.75 mm, measured along the cylinder from the bottom of the pipe. It has been shown that vortices can be suppressed when the ratio of the distance between the wall and the cylinder to the size of the cylinder is smaller than 0.3. For the current system only 9.8% of the length of the rod falls below this critical ratio. However, further investigations of the velocity fields would be required to fully understand the flow behind the bluff body in this configuration and its interactions with the interface. Average wave velocities for the different conditions investigated are shown in Fig.
15. The wave velocities increase with mixture velocity and do not vary significantly with distance from the rod, especially at the lower mixture velocities. At the higher mixture velocities there are more fluctuations, with a maximum deviation of about 4% from the mean value. In the current work, for all conditions investigated, the wave velocities were on average 9.6% faster than the mixture velocity, regardless of the input flowrates. Other studies also reported that the wave velocity differed from the mixture velocity. It is possible that these differences are due to the different mechanisms of wave generation; waves observed in previous studies resulted from a KH instability that depends on the velocity difference between the two phases at the inlet, whereas in the current study the waves seem to result predominantly from the vortices shed by the bluff body. In addition, the blockage caused by the bluff body leads to a local acceleration of the fluid below and above the cylinder, and it is possible that this affects the wave velocity. In this paper the effect of a cylindrical bluff body placed inside a pipe on the flow patterns and interface characteristics of a two-phase liquid–liquid system was studied experimentally, using high speed imaging and a conductance probe. The aim was to passively actuate waves at the interface and the transition from stratified to non-stratified flows. It was found that the rod shifted the transition to lower mixture velocities, while the change in flow patterns persisted 7 m downstream of the rod. In stratified flows the bluff body generated interfacial waves, attributed to the interactions of the von Karman vortices in the wake of the rod with the oil–water interface. An increase in interface height was seen after the rod, affected by both the mixture velocity and the input ratio of the phase flowrates. The average wave amplitude increased with distance from the rod, while the average wavelength and frequency remained almost constant. The Strouhal number agreed with the literature value of 0.2 for vortex shedding behind a cylinder in single-phase flows with no wall present. The wave velocities were found to be about 10% higher than the mixture velocity. Further investigations of the velocity fields in the water phase are needed to reveal the interactions between the vortices shed by the rod and the liquid–liquid interface. Although the current investigations are at their initial stage, the use of bluff bodies in multiphase flows in pipes promises many important industrial applications. The presence of a bluff body in a pipe can enhance two-phase mixing and improve mass and heat transfer rates; the current approach can therefore be used in heat exchangers and in mixing processes. In addition, the use of a bluff body can improve the overall control of the flow patterns inside pipes, for example during transportation of oil–water mixtures, important for flow assurance applications.
In this paper the effect of a transverse cylindrical rod immersed in water on the flow patterns and interfacial characteristics of an oil-water pipe flow is investigated experimentally. The cylinder is used to passively actuate the transition from stratified to non-stratified flows and to localise the formation of waves and the detachment of drops. Flow patterns and interface characteristics were studied with high speed imaging. It was found that the presence of the rod generates waves shortly downstream, from which drops detach, and reduces the mixture velocity for the transition from stratified to non-stratified flows. The average interface height and wave amplitude increase with distance from the rod, while the average wavelength and frequency remain almost constant. The Strouhal number is found to be equal to 0.24, while the wave velocities are slightly higher than the mixture velocities.
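The Strouhal number used above is St = f·d/U, with f the wave (shedding) frequency, d the rod diameter and U the actual water velocity. The frequency below follows the text (26 Hz at Umix = 0.62 m/s); the rod diameter and water velocity are assumed placeholders, not reported values:

```python
# Strouhal number St = f * d / U for vortex shedding behind the rod.
f_hz = 26.0     # measured wave frequency at Umix = 0.62 m/s (from text)
d_m = 0.005     # rod diameter: assumed value for illustration
u_water = 0.54  # actual water velocity from interface height: assumed

st = f_hz * d_m / u_water
print(f"St = {st:.2f} (literature value ~0.2 for an unbounded cylinder)")
```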
donate and contributed less. This study aimed to determine the effects of announcing SEED and PREV on voluntary contributions to trail maintenance in the national park. This is one of the first studies to conduct a field experiment investigating both effects at the same time. Thus, we will consider and then compare the effects of each treatment. On the question of how the announcement of SEED funding affected contributions: this study found that it increased the share of people who made positive donations and raised sample average contributions, while the effect on conditional contributions was unclear. That is, the increase in total contributions relied on the proportion of people who donated, not on the average individual amount. These findings are consistent with findings from theoretical research and other field experimental research. Since seed money is known to be a generally effective mechanism for raising funds when the project cost is fixed, our findings indicate that the announcement of seed money is a useful mechanism for park management as well. This follows because most national parks have a certain budget within which the management cost needs to be fixed before implementation. Our findings suggest that park authorities should share information related to seed money and fundraising targets with park visitors to enhance funding. Still, we could not determine whether the 50% seed funding allocation was optimal, since practical constraints compelled us to fix the percentage of seed money in our experiment. Thus, further research is needed to investigate the effect of the initial amount of seed money that is available. Publicly announced PREV increased the ratio of people who donated as well as the sample average contributions, although the parameter was not statistically significant in the regression model. Surprisingly, the conditional average contribution under PREV was smaller than that of the control, and the parameter of PREV was negative, although neither result was statistically significant. That is, this result implies that PREV could have a negative influence on fundraising campaigns in this specific park management context. A possible explanation is that participants to whom PREV was announced perceived the others' contributions as smaller than they actually were. That is, participants could have inferred that more people donated smaller amounts than was the case in this field experiment, because the initial donations contributed by other participants on the first day of the experiment included a number of small coins. In this instance, we did not only announce the number of contributors but also showed the amount of the donations on the first day, so as to have practical implications. Martin and Randal found that the composition of the initial contents approximately reflected the composition of the donations by participants in the context of their art gallery experiment. While it is beyond the scope of this study, examining how participants perceived the donations could be an interesting future study. To summarize the above findings: announcing the seed money and the fundraising target is a superior measure for raising funds to achieve sustainable park management. We note some interesting findings from the field experiment that also have important implications for the understanding of actual donation behaviors. First, all treatments have three peaks in the distribution of their contributions (0, 500, and 1000 JPY), as described in Fig.
1. A possible explanation is that it is easy for people to choose these donations, because 500 and 1000 are round numbers and because 500 JPY coins and 1000 JPY bills exist in Japan. Next, the findings from the regression analysis show that older and non-local participants contributed more than others. This is not surprising and supports previous studies: older people's incomes tend to be higher than those of younger people, and the higher travel costs of non-local tourists imply that they have greater motivation to hike on the mountains. However, these findings have practical implications for collection measures. That is, a few Japanese national parks have asked tourists to contribute some fixed amount; however, the above findings suggest that uniform contributions are not preferred, even when using an effective mechanism such as SEED. Finally, the proportion of respondents willing to donate anything was surprisingly large compared with previous visitors' donations at the NHT. This could be because park staff were around to check people's safety and to distribute the questionnaires. From a practical management perspective, the benefits of these measures (that is, gathering donations by means of park staff) can exceed the costs, especially because the NHT needs to hire staff to ensure visitor safety regardless of whether or not the experiments are conducted. Future research needs to investigate and confirm the effect of the presence of other people around participants in Japan. The study conducted by Alpizar and his colleagues would be a good reference, since they found that the presence of solicitors increased individual contributions and participation rates in Costa Rica. To date, most parks have faced financial shortages for park management, even in developed countries. Our findings show that announcing the seed money and the fundraising target is a superior measure by which to raise funding for sustainable park management. Since this study was conducted in the specific context of protected areas, it is just a step on the way toward understanding donation behavior. Further studies need to investigate the effect of announcing previous contributions on raising funds in other fields.
Donation is one of the most important solutions to inadequate funding for protected area management; however, there has been little agreement on the measures to be used to encourage visitors to donate. The second treatment type is for trail maintenance, and information is provided on the value of one day's contributions by other participants. We found that announcing the seed money amount and the target significantly increased the probability of a positive contribution and raised the average contribution, compared with the control treatment of no additional announcements. In conclusion, announcing the seed money and the fundraising target is superior to the other measures studied in this paper for raising funds in this specific context of protected area management.
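The two margins analysed in the experiment, the share of visitors who donate and the conditional average donation, can be separated as in the sketch below; the donation vectors are hypothetical:

```python
# Decomposing contributions into participation rate and conditional average,
# the two margins through which SEED and PREV act. Donations are invented (JPY).
import numpy as np

donations = {
    "CONTROL": np.array([0, 0, 500, 0, 1000, 500, 0, 0]),
    "SEED":    np.array([500, 0, 1000, 500, 500, 0, 1000, 500]),
}

for name, d in donations.items():
    share = (d > 0).mean()        # probability of a positive donation
    cond_mean = d[d > 0].mean()   # conditional average contribution
    print(f"{name}: share donating = {share:.2f}, "
          f"conditional mean = {cond_mean:.0f} JPY, sample mean = {d.mean():.0f} JPY")
```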
peak during delithiation is attributed to a low lithium ion conductivity of the delithiated matrix, which, given that the surface of the film is delithiated first, effectively reduces the kinetics of the further delithiation. Fig. 7 shows an illustration of the relationship between the differential capacity analysis and the phase diagram presented previously. The differential capacity plot is from the 20th lithiation of a SiN0.89 electrode after complete delithiation to 3 V vs. Li+/Li. This figure shows how the lithiation path first passes through a three-phase region, where Si3N4 reacts with lithium to form Li2SiN2 and Si, resulting in the sharp M#d1 peak in the differential capacity plot. The lithiation continues into a two-phase region, in which Li2SiN2 is a primarily inactive spectator while silicon is gradually lithiated, first to Li2Si and then to Li3.5Si, corresponding to the more diffuse Si#d1 and Si#d2 dQ/dV peaks. The blue features are related to the main reversible reaction when cycling between 0.05 V and 1 V vs. Li+/Li. In this work, we have proposed a model describing the initial nitride conversion reaction in a thin film electrode system. Based on this model, a number of equations have been derived, relating the mass and composition of the electrode to its charge and discharge capacity, as well as the surface and bulk contributions to the irreversible capacity. Despite the matrix not being completely inactive, we have shown that the equations resulting from this model correlate well with the experimental data, and that this system is well suited for separating surface and bulk contributions to the irreversible losses. From both a mass-dependent and a mass-independent fitting of experimental data to the model, we determined the Li:Si:N atomic ratio of the matrix to be close to 2:1:2. Based on thermodynamic considerations, we argued that a combination of non-ternary matrix components, e.g. Si3N4 and Li3N, is unlikely, and that the matrix instead consists of the ternary phase Li2SiN2. This was supported by the dQ/dV lithiation characteristics of the matrix after delithiation to 3.0 V vs. Li+/Li, and is in agreement with a previous study by Suzuki et al. Based on the determined matrix composition, a prediction of the bulk discharge, charge and irreversible capacity of SiNx as a function of nitrogen content x was also made, as seen in Fig. 8.
An extensive research effort is being made to develop the next generation of anode materials for lithium ion batteries. A large part of this effort has been related to silicon, primarily due to its considerable theoretical capacity; however, very limited cycling stability has prevented widespread commercial adoption. A potential solution for this is to use convertible sub-stoichiometric silicon nitride (a-SiNx), which goes through an irreversible conversion reaction during the initial lithiation cycle, producing active silicon domains in an inactive, lithium-conducting matrix. Relative to pure silicon, the resulting composite material has gained cycle life at the cost of reduced specific capacity. The specifics of the conversion reaction, however, have not yet been determined; hence, the impact of varying nitrogen content remains uncertain. In this work we develop a model reaction which relates the reversible and irreversible capacities of an electrode to the composition of the conversion products. By fitting this model to experimental data from a large number of a-SiNx thin film electrodes with different thicknesses and compositions, we determine with a high probability that the matrix composition is Li2SiN2. From this, the reversible and irreversible capacities of the material can be predicted for a nitride of a given composition.
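A sketch of the capacity arithmetic such a conversion model implies: per formula unit of SiNx, the nitrogen binds x/2 Si into Li2SiN2 (locking in 2·(x/2) = x Li irreversibly), while the remaining 1 − x/2 Si cycles reversibly, here assumed to lithiate fully to Li3.5Si as mentioned in the text. This is an illustration of the model's logic under those assumptions, not the paper's fitted equations:

```python
# Predicted bulk capacities of a-SiNx assuming the conversion
#   SiNx -> (x/2) Li2SiN2 + (1 - x/2) Si,
# with the free Si cycling reversibly to Li3.5Si (assumption).
M_SI, M_N = 28.085, 14.007   # g/mol
F_MAH = 96485.0 / 3.6        # Faraday constant in mAh/mol

def sinx_capacities(x: float):
    m = M_SI + x * M_N                 # molar mass of SiNx
    li_irreversible = x                # Li locked in the Li2SiN2 matrix
    li_reversible = 3.5 * (1 - x / 2)  # Li cycled by the free silicon
    return (li_reversible * F_MAH / m, li_irreversible * F_MAH / m)

rev, irr = sinx_capacities(0.89)
print(f"SiN0.89: reversible ~{rev:.0f} mAh/g, irreversible ~{irr:.0f} mAh/g")
```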
sintering or hot compressing. Yang et al. determined values between about 60 and 68 emu g−1 for samples with 55 at% Mn obtained by melt spinning, grinding and magnetic alignment. Similarly, the sample of Cui et al. had been ball milled and magnetically aligned, resulting in a saturation magnetization at room temperature of 76 emu g−1. This shows that hot centrifugation as applied here might be a feasible and rather simple way to obtain samples with a reasonably high content of α-BiMn that might be further enriched by other methods. An experimental method was devised to synthesize the ferromagnetic compound α-BiMn. In order to remove the surplus Bi-rich liquid that is usually left over during high-temperature synthesis, samples were centrifuged at a temperature of 320 °C. This resulted in a distinct phase separation, and samples of α-BiMn with a phase purity of up to 87% could be recovered. In agreement with the literature, thermal analyses showed the peritectic decomposition of α-BiMn into β-BiMn and Bi-rich liquid at 355 ± 2 °C on heating, and the ferromagnetic α-BiMn phase was formed again on cooling at about 341 ± 2 °C. These temperatures were well supported by the measurements of the temperature dependence of the magnetization. The coercivity of α-BiMn increases with increasing temperature, as has already been observed in the literature. The Curie temperature of α-BiMn is evidently higher than the temperature of the peritectic decomposition of the phase, which is in general agreement with literature results obtained by a mean field approximation. Unfortunately, α-BiMn is extremely oxygen sensitive, as observed during the attempted high-temperature XRD measurements. Together with the very difficult synthesis in a phase-pure form, this will probably prevent the use of the compound as a permanent magnetic material for any commercial applications, at least in bulk form.
Several Mn-based compounds have been identified as promising materials to replace permanent magnetic alloys based on rare earth elements. Amongst them is the compound BiMn, in particular the low-temperature modification α-BiMn (hP4, AsNi-type, B81), which has been shown to exhibit interesting magnetic properties. In the present study, it was attempted to synthesize phase-pure α-BiMn by annealing samples at 320 °C and centrifuging them at this temperature. In this way, it was possible to obtain a distinct phase separation, with the intermetallic compound on top and the liquid Bi, which is always present after the high-temperature syntheses, at the bottom. After solidification, α-BiMn with a purity of up to 87%, though not phase-pure, could be recovered. Thermal analyses as well as the temperature dependence of the magnetization indicated a peritectic decomposition at 355 ± 2 °C, whereas the phase re-formed on cooling at 341 ± 2 °C. High-temperature X-ray diffraction led to a complete oxidation of the sample even in a rather pure He atmosphere. Magnetic measurements showed an increase of the coercivity of α-BiMn with increasing temperature, in general agreement with earlier literature reports. The Curie temperature could not be determined experimentally, as it is apparently higher than the temperature of the peritectic decomposition.
algorithm proposed here, similar to the work on cortical sulci recognition, where expanding the database to represent the high inter-subject variability of sulcal patterns improved the accuracy and generalization of the method. In addition, the training data set needs to be expanded to include other mouse strains with significantly different cerebrovasculature. The feasibility of using a strain-mixed training data set versus a single-strain training set for the automatic labeling algorithm should be examined as well. Given the labeled segmented vasculatures, it is possible to compare and quantify the features of different vessels across the specimens. Moreover, arterial and venous vessels can be analyzed separately based on the labeled data. Examples of features that one can compute for specific arteries are mean vessel diameter, total sub-tree vessel length, and total intravascular volume. An illustrative selection of these parameters is reported in Table 2, along with the standard deviation, minimum and maximum for the 7 labeled C57Bl/6J specimens. The mean diameter of arteries calculated by our method is comparable to published values measured by in vivo angiography methods, indicating the minimal shrinkage and distortion effect of Microfil perfusion and sample preparation. The coefficient of variation for these different metrics varied between 6.5% and 25.4%. The analysis of the labeled vasculature can also provide qualitative information, such as the existence of collateral circulation in the circle of Willis. For example, three of the C57Bl/6J specimens had no posterior communicating artery. The other four specimens had a unilateral posterior communicating artery with a mean diameter of 125 ± 11 μm. For each specimen, a coarse perfusion territory map was constructed by assigning the label of the nearest arterial terminal edge to each voxel. An average perfusion territory map and the accuracy map of the average territories were created by voxel voting over the 7 C57Bl/6J specimens. Comparison of cerebrovascular variations can broaden our understanding of this anatomical structure. The vascular morphology is highly variable from one individual to another. The variations include different numbers of branches, variation in the location of bifurcations, absence of certain vessel segments, bifurcation or trifurcation, etc. The high complexity and variability of the vascular structure make it very difficult to compare vessels among individuals, which is essential in studies of vasculature development and pathologies. Automatic recognition of the vessels through registration to an atlas would also fail due to the large inter-subject variations. The variation of vascular structures, such as the location and angle of branches, missing or additional branches, and the smoothness and diameter of vessel segments from one subject to another, makes an accurate co-registration of the vasculature almost impossible. We have developed a framework to segment and label all the vessels in the cerebral vasculature automatically. We have shown briefly that this automatic method facilitates and expedites the comparison and analysis of large amounts of cerebrovascular data in a reasonable time. In the future we will utilize this method to compare anatomical variations of the cerebrovasculature caused by different mouse strains and to study the genotype–phenotype relationships in the development of cerebral vessels. In addition, the processing methods introduced in this paper can be adapted with minimal changes to in vivo images for
applications such as longitudinal studies. Currently, typical in vivo angiography resolution is limited to between 50 and 100 μm, which is not high enough to visualize the fine arterioles and venules. We anticipate that, as in vivo angiography methods continue to improve, this will be a good application for the presented methodology.
Study of cerebral vascular structure broadens our understanding of underlying variations, such as pathologies that can lead to cerebrovascular disorders. The development of high resolution 3D imaging modalities has provided us with the raw material to study the blood vessels in small animals such as mice. However, the high complexity and 3D nature of the cerebral vasculature make comparison and analysis of the vessels difficult, time-consuming and laborious. Here we present a framework for automated segmentation and recognition of the cerebral vessels in high resolution 3D images that addresses this need. The vasculature is segmented by following vessel center lines starting from automatically generated seeds, and the vascular structure is represented as a graph. Each vessel segment is represented as an edge in the graph and has local features, such as length, diameter, and direction, and relational features representing the connectivity of the vessel segments. Using these features, each edge in the graph is automatically labeled with its anatomical name using a stochastic relaxation algorithm. A leave-one-out test performed on the labeled data set demonstrated the recognition rate for all vessels, including major named vessels and their minor branches, to be >75%. This automatic segmentation and recognition methodology facilitates the comparison of blood vessels in large populations of subjects and allows us to study cerebrovascular variations.
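The graph representation described above maps naturally onto an attributed graph, with each vessel segment an edge carrying its local features. A minimal sketch with two invented segments:

```python
# A segmented vasculature as a graph: nodes are bifurcations/terminals,
# edges are vessel segments carrying the local features used for labeling.
import networkx as nx

g = nx.Graph()
g.add_edge("bifurcation_1", "bifurcation_2",
           length_um=850.0, diameter_um=120.0, direction=(0.9, 0.1, 0.4))
g.add_edge("bifurcation_2", "terminal_1",
           length_um=430.0, diameter_um=60.0, direction=(0.2, 0.8, 0.5))

for u, v, feat in g.edges(data=True):
    print(f"{u} -> {v}: {feat['length_um']} um long, "
          f"{feat['diameter_um']} um diameter")
```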
scenarios and various adaptation measures in such analyses. High water supply allows an expansion of vineyards in the region, leading to an increase in regional net benefits. However, efficient adaptation to a climate scenario would require high flexibility in terms of land and water use, or lead to severe groundwater externalities. The VOI reveals the economic value of timely and accurate climate information for decision making in agricultural adaptation, which is of particular importance in a water-constrained environment. Considering stochastic climate scenarios in the analysis further increases the scientific and practical relevance of the work. When land and water use in the case study region is efficiently adapted to a climate similar to the past, falling groundwater tables or economic losses have to be expected in the long run if a dry or wet climate is realized. This is mainly due to excess irrigation water use. If farmers adapt to a dry climate or a climate similar to the past but wet conditions prevail, land and irrigation water use is inefficient and results in foregone economic benefits. From the VOI computation, we can conclude that absent climate information bears a high economic and environmental risk, affecting not only farmers but also the public. VOI computations may thus inform the provision of climate data and impact studies. While the described IMF has several components that improve on and distinguish it from previous research, such as the explicit inclusion of plant water demand, blue and green water supply, various irrigation technologies and intensities, as well as stochastic climate scenarios, a number of opportunities for further research remain. We propose that future work could differentiate between farm and farming types and the respective differences in adaptation decision making, assess water price and water policy scenarios, analyze plant water productivity, restrict land use change in areas with high ecological value, evaluate the accuracy of water allocation, and include additional adaptation measures, which might allow a more flexible groundwater use. The authors declare that they have no conflict of interest.
Identifying efficient adaptation measures in land and water use requires integrated approaches and a spatially and temporally explicit representation of water demand and supply.Stochastic climate information may further improve adaptation assessments to reduce the risk of misinterpretation of climate signals.We aim at developing an integrated modeling framework (IMF) that meets these requirements for assessing impacts of three stochastic climate scenarios (DRY, SIMILAR, WET), and regional irrigation water restrictions on land and water use.Furthermore, impacts on regional net benefits and the economic value of stochastic climate information (VOI) are assessed.The VOI is defined as the difference between regional net benefits with and without efficient adaptation of land and water use to a specific climate scenario.The IMF has been applied to the semi-arid Seewinkel region in Austria.Considering efficient adaptation, regional net benefits amount to 8 M€ and irrigation water use to 8.4 Mm³ in a DRY climate scenario.In a WET climate scenario and a scenario with SIMILAR conditions compared to the past, regional net benefits amount to 38 and 20 M€ and irrigation water use to 41 and 21 Mm³, respectively.High regional net benefits are obtained through an expansion of vineyards, irrigation, and fertilization.On average, the VOI is highest if land and water use is efficiently adapted to DRY but a WET scenario is realized (506 €/ha/a) and lowest with efficient adaptation to WET but the realization of a SIMILAR scenario (58 €/ha/a).
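Written out, the VOI definition used above takes the following form; the notation is ours, introduced only to make the reported per-hectare figures concrete.

```latex
% NB_r(x): regional net benefit under realized climate scenario r with
%          land and water use x;
% x*_s:    land and water use efficiently adapted to scenario s.
\mathrm{VOI}_{r\,|\,a} \;=\; NB_r\!\left(x^{*}_{r}\right) \;-\; NB_r\!\left(x^{*}_{a}\right)
% e.g. adaptation to DRY with WET realized:     VOI = 506 €/ha/a;
%      adaptation to WET with SIMILAR realized: VOI =  58 €/ha/a.
```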
GABAergic interneurons constitute 20–30% of all neurons in the cortex and are essential for cortical circuit function.Through their inhibitory actions cortical interneurons have multiple functions including maintenance of network balance and shaping of synchronized activity .This functional diversity of interneurons in the cortex is enabled through a remarkable heterogeneity.The exact number of different subtypes that exist in the adult cortex is unclear partly because of ambiguity in their classification.However, recent concerted efforts to pull together different criteria provide great promise for a unifying classification scheme .Tremendous efforts have been made in the last 15 years to determine how interneuron heterogeneity becomes established.It is now widely accepted that genetic pathways hold the key to cell fate determination.Insight into the genetics that drive cell diversity is emerging fast and has already had far reaching benefits beyond basic science into neurodevelopmental disease research and stem cell therapies .In this review we describe the known genetic regulatory pathways that promote cortical interneuron cell fate specification focusing mainly on the most recent advances in the field.As intrinsic genetic programs of cell identity do not act in isolation, we discuss how extrinsic cues influence the development of cortical interneurons.The generation of interneuron diversity begins during embryogenesis when cortical and hippocampal interneurons are born in subcortical regions and migrate away to reach their final positions.Three sources of cortical interneurons have been identified in the telencephalon: the medial ganglionic eminence, the caudal ganglionic eminence and the preoptic area.Each of these regions generates distinct cohorts of interneurons for the cortex indicating that restriction of neurogenic potential in the subpallium generates diversity.The three sources of interneurons identified to date are clearly not enough to explain the >20 subtypes of mature interneurons found in the adult cortex and hippocampus .Original suggestions that the septum — the fourth major germinal zone of the ventral telencephalon — may generate interneurons for the cortex have been disproved .However, smaller subdivisions of the neuroepithelium lining the ganglionic eminences have been identified based on transcription factor expression, raising the possibility that finer restriction of neurogenic fate from the three major sources may contribute to diversity .In agreement with this, biases in interneuron subtype generation have been described within the ganglionic eminences and the POA along the dorso-ventral and anterior–posterior axes .Superimposed on the spatial control of interneuron fate is temporal regulation, with distinct interneurons being generated at different stages during development .The temporal regulation of cell identity within the MGE has recently been attributed partly to the presence of distinct precursors for upper and lower layer MGE-derived interneurons .One question that ensues is whether committed precursors of early-born and late-born interneurons in the ganglionic eminences are intermingled but molecularly distinct from each other, as recently shown for pyramidal neuron precursors .Much like the spinal cord where morphogen-regulated transcription factors establish distinct progenitor domains , the telencephalic subdivisions arise through the activation of transcription factors that provide the neuroepithelial cells with their identity.Morphogens that 
pattern the telencephalon include SHH and FGF, and early-acting transcription factors include GLI1/2/3, PAX6, SIX3, FOXG1, NKX2-1, GSX2, ASCL1 and NEUROG2. These transcription factors function well before the appearance of any cortical interneurons and yet have profound effects on cortical interneuron development through restriction of progenitor differentiation potential. At the top of the genetic cascade of cortical interneuron development are the transcription factors DLX1 and DLX2, which are activated in all interneurons downstream of early patterning genes. DLX1/2 have multiple roles at the initial stages of cortical interneuron development, including inhibition of glial fate, promotion of GABAergic differentiation and cell migration. ARX and DLX5/6 are two direct targets of DLX1/2. They are transcription factors that show prolonged expression in subsets of cortical interneurons beyond the initial specification and migration stages and are deployed in multiple ways in the regulation of interneuron development. ASCL1 is another transcription factor that is expressed in the subcortical telencephalon and is thought to function high up in the hierarchy of cortical interneuron development. ASCL1 loss-of-function (LOF) mutants have implicated this factor in the regulation of neurogenic differentiation genes. More recent compound DLX1/2 and ASCL1 LOF mouse mutants have revealed unique and overlapping genetic pathways regulated by these factors in the ganglionic eminences. Such studies using mice harboring mutations at multiple loci provide great insight into common and distinct functions of transcriptional regulators and their downstream actions. The MGE is the largest source of interneurons for the cortex, generating around 60% of the total population. This includes two major classes: firstly, parvalbumin-expressing, fast-spiking basket and chandelier cells; and secondly, somatostatin-expressing neurons that may express other markers such as calretinin, neuropeptide Y or reelin, may have multipolar, bitufted or bipolar dendrites and distinct axonal arborizations, and may exhibit intrinsic-burst spiking or adapting non-fast-spiking responses to current injection. Although lumped into two classes, PV-expressing and SST-expressing interneurons are themselves diverse populations. What are the molecular pathways that direct their fates? And by fate we refer to molecular identity, laminar localization, axonal/dendritic morphology and physiological characteristics, all of which are used as traits for classification. At the top of the molecular hierarchy governing MGE-interneuron development is NKX2-1. The actions of NKX2-1 are central to the MGE and are initiated through specification of the neuroepithelial MGE identity. In its absence, interneurons known to be derived from this region are mis-specified into alternative fates. Yet Nkx2-1 is only briefly expressed in the cortical interneuron lineage and becomes downregulated in migrating immature cells as part of their differentiation program. ZEB2 has recently been identified as
The origins of cortical interneurons in rodents have been localized to the embryonic subcortical telencephalon where distinct neuroepithelial precursors generate defined interneuron subsets. A swathe of research activity aimed at identifying molecular determinants of subtype identity has uncovered a number of transcription factors that function at different stages of interneuron development.
studies which resulted in an imbalance of interneuron subtypes in the cortex. This has been attributed to a defect in progenitor proliferation rather than cell fate determination. The related transcription factor, NR2F2, is involved in directing interneurons through a caudal migration route. Like NR2F1, expression of NR2F2 is not linked to a single origin and can be observed in MGE-derived as well as CGE-derived interneurons. More recently, SP8 has been identified as a marker for some CGE interneurons; its function in the lineage remains unknown. A breakthrough into the specification of CGE fates has been the finding of PROX1 expression in the lineage. PROX1 is a transcription factor that is present in nearly all striatal interneurons regardless of their origin, but within the cortical interneuron population expression is confined to CGE- and POA-derived cells. LOF studies in mice have demonstrated an essential role for this transcription factor in the development of CGE-derived cortical interneurons: at early stages PROX1 is necessary for radial migration and proper positioning within the cortical plate; at later stages the requirement for PROX1 is subtype-specific, functioning in morphogenesis, maturation and network integration. CGE-derived interneurons lacking PROX1 maintain expression of NR2F2 and SP8, suggesting independent activation of these two transcription factors. PROX1 is therefore a lineage tracer for the CGE-derived cortical interneuron population, acting at multiple points to regulate their differentiation. How a single transcription factor such as PROX1 can have multiple functions in different cell types and at different stages of development is unknown, but is likely to be mediated by differential binding to as yet unidentified transcriptional cofactors. Interneurons generated from the POA contribute only ∼10% of the total population in the adult cortex but include a large diversity of subtypes. As the POA has only recently been placed on the source map of cortical interneurons, we have almost no data on how these cells are specified. Genes involved in fate-direction elsewhere in the telencephalon are also expressed in the POA and contribute to patterning of this domain. These include SHH and NKX2-1, which are expressed in the majority of the POA neuroepithelium, DBX1 and NKX6-2, which label respectively the dorsal and ventral POA domains, and the postmitotic marker HMX3, which is expressed in small subsets of cells adjacent to the neuroepithelium. Some of these genes have been used in lineage tracing studies of the POA but their contribution to interneuron specification remains elusive. We currently have a framework of the initial genetic pathways that lead to cortical interneuron cell fates but we are far from a complete picture. We lack almost any insight into late developmental events such as specification of axonal and dendritic blueprints, synaptic partner selection or expression of channels and receptors that define the physiological characteristics of mature interneurons. These processes are all likely to be highly dependent on intrinsic factors and environmental influences. Some of the early-acting genes already identified are undoubtedly acting as ‘master’ regulators that trigger downstream genetic cascades. As new factors come into play these will either feed into the known pathways or expand the branches to further refine our understanding of the mechanisms that control cortical interneuron trait acquisition.
There are numerous overlapping steps in cortical interneuron development before a fully mature phenotype is established. These include tangential migration through the subpallium and the pallium, radial migration and layer selection within the cortical plate, formation of axonal and dendritic arborizations, expression of mature markers related to physiological properties, synaptic target cell selection and subcellular targeting of synapses. There is evidence showing that nearly all of these are linked to the embryonic origin of interneurons and therefore are specified by genetic pathways. Even cell death, a process by which 40% of interneurons generated during development are eliminated, is thought to be determined by intrinsic factors. However, genetic programs do not act in isolation and environmental cues are essential for their correct execution. For example, from the onset of their migratory journey, interneurons depend on guidance cues secreted by the environment to find their way to their destination. In the absence of such signals interneuron distribution becomes abnormal. Late-born CR-expressing interneurons additionally require electrical activity for migration as well as development of their axonal and dendritic arbors. Furthermore, layer acquisition and connectivity, both of which show high specificity, are determined by embryonic origin but are also dependent on local cues. And even expression of neurotransmitters, channels and neurotransmitter receptors is genetically predetermined but requires external influences for acquisition of mature phenotypic features. The discovery of the activity-dependent expression of SATB1 in cortical interneurons is one of the most recent examples of environmental influences on the genetic program of interneuron development. SATB1 is a maturation-promoting factor that is expressed in subsets of cortical interneurons. In its absence, SST-expressing interneurons lose hallmarks of their identity. They do not convert to an alternative fate but simply remain as immature neurons. Expression of SATB1 is detected just before birth and evidence suggests that this is dependent on cortical activity. Yet induction of SATB1 is restricted to MGE interneurons and requires LHX6 function. SATB1 therefore forms the link between a developmentally imposed genetic specification program and extrinsic environmental cues; a prime example of nature and nurture intertwined to specify cell fate.
Pathways that lead to the acquisition of mature interneuron traits are therefore beginning to emerge.As genetic programs are influenced by external factors the search continues not only into genetic determinants but also extrinsic influences and the interplay between the two in cell fate specification.
Adeno-associated virus-derived recombinant vectors are attracting significant attention as promising tools for a wide range of applications in the field of gene therapy. Cell transduction mechanisms with rAAV have been studied in detail. Those studies have identified a number of cellular receptors for virus entry, as well as many aspects of the intracellular trafficking of their payloads to the nucleus. Protein classes having specific post-translational modifications, such as alpha-2,3 and alpha-2,6 sialic acids, N-linked glycoproteins, or heparan sulfate proteoglycans, are the primary cell receptors for rAAV uptake.1–3 These post-translational modifications are so common among mammals that researchers initially assumed that rAAV efficiency would be similar across species lines, such that data obtained from animal models would be predictive of the human situation. This optimism, however, was tempered by subsequent studies showing that rAAV-3 could efficiently transduce human hepatocytes through the human hepatocyte growth factor receptor, but had no such uptake mechanism in murine hepatocytes.4,5 Moreover, many studies have been completed using cells grown in culture,6–8 without taking into account the likely disruptive interactions of rAAV with actual components of more complex human body fluids. This consideration is a crucial issue in the case of systemic delivery of vectors in humans via intravenous transfusions. Indeed, recent studies have shown that rAAV interactions with blood proteins are significantly vector-serotype and species-sera specific. As an example, human and dog galectin 3 binding protein (G3BP) interacts with serotype rAAV-6 and decreases its transduction efficiency, but mouse and monkey G3BP do not.9 The same applies to mouse C-reactive protein (CRP); it binds with rAAV-1 and rAAV-6, improving skeletal muscle transductions by more than 10-fold in mice, but human CRP does not react with these two serotypes.10 Given that knowing about and taking into consideration these critical species-specific concerns is essential for further improvements in rAAV-driven therapeutics, we have undertaken studies to precisely identify the patterns of serum proteins reacting with rAAV-8 and rAAV-9, two vector serotypes that are currently under widespread clinical development. By using an assay comprising direct trypsin digestion of serum proteins co-precipitated with immobilized vectors and Orbitrap mass spectrometry peptide analysis, we were able to exhaustively identify and quantify the serum proteins interacting with these important vectors. We show that, in contrast to rAAV-1 and -6, which preferentially interact with one major serum protein, rAAV-8 and -9 interacted with a larger and more diverse spectrum of proteins in mouse and human sera. Importantly, we observed a high similarity in the patterns of bound proteins between mouse and human. As stated, rAAV-8 and -9 do not bind one predominant protein; instead, they bind to up to 30 different proteins, at rates ranging from 0.1% to 25% of the total amounts of bound proteins. Second, there were nine proteins bound to rAAV-8 in common between mouse and human sera. Quantitative estimation of proteins bound to rAAV-8 demonstrated that these nine proteins comprised 50% of the bound protein in mouse sera and 40% in human. Similarly, there were six proteins in common between mouse and human sera that bound to rAAV-9; they comprised 86% and 51% of the bound proteins, respectively. Next, we assessed whether these proteins might have a functional impact on
vector transduction comparable with that of murine CRP or human G3BP on the efficacy of rAAV-1 and rAAV-6, by evaluating the functional role of platelet factor 4 (PF4). This protein was found to have the highest level of vector binding in human and mouse sera for both serotypes, although comprising only roughly 15% of the total bound proteins. Using PF4-knockout (PF4-KO) mice and PF4-KO mice expressing human PF4, we showed in vivo that serum lacking mPF4, or huPF4, did not alter skeletal muscle transduction, even though heart transduction efficacy was improved by 2- to 3-fold for both vectors. Our results strongly support our position that the impact of serum proteins on the transduction properties of rAAV-8 and rAAV-9, already observed in mouse models, should be similar in human preclinical trials. To accurately identify the proteins interacting with rAAV-8 and -9, we adapted the technology of a vector-protein binding assay, in which serum proteins bound to immobilized rAAV particles are digested with trypsin and the resulting peptides are identified on an Orbitrap mass spectrometry instrument. This was followed by estimation of the relative abundance of each protein by a label-free quantification approach.11 Only proteins identified as present by three or more peptides, and with a Mascot score exceeding 70, were given further analysis. We used defined criteria to discriminate proteins specifically bound to the rAAV particles from any non-specifically bound proteins, i.e., those proteins bound adventitiously to the bead support used for rAAV immobilization. These criteria were specificity, which had to be higher than 2, and reproducibility: only proteins detected in at least 50% of the experiments were given further analysis. We validated our assay by reanalyzing proteins captured by the well-studied rAAV-6 in the presence of human or mouse sera. In agreement with previous studies, human G3BP matched both of our criteria, with a >100 “specificity” in six of six experiments. The same was true of murine CRP, which displayed a >100 “specificity” in all eight experiments. Remarkably, although human G3BP and murine CRP were the major proteins bound to rAAV-6, representing more than 90% of the quantity of bound proteins, our assay made it possible to identify additional proteins not detected using previous assays.9,10 Minor amounts of vitronectin, prothrombin, fibronectin, and lipopolysaccharide binding protein were captured from human serum by rAAV-6. Likewise, in more than 50% of the experiments using mouse serum, rAAV-6 could also bind small amounts of vitronectin,
We have previously shown that interaction with human galectin 3 binding protein dramatically reduces rAAV-6 efficacy, whereas binding of mouse C-reactive protein improves rAAV-1 and rAAV-6 transduction effectiveness.Herein we have assessed, through qualitative and quantitative studies, the proteins from mouse and human sera that bind with rAAV-8 and rAAV-9, two vectors that are being considered for clinical trials for patients with neuromuscular disorders.We show that, in contrast to rAAV-1 and rAAV-6, there was a substantial similarity in protein binding patterns between mouse and human sera for these vector serotypes.To establish an in vivo role for the vector binding of these sera proteins, we chose to study platelet factor 4 (PF4), which interacts with both vectors in both mouse and human sera.Experiments using PF4-knockout mice showed that a complete lack of PF4 did not alter skeletal muscle transduction for these vectors, whereas heart transduction was moderately improved.Our results strongly support our position that the impact of serum proteins on the transduction properties of rAAV-8 and rAAV-9, already observed in mouse models, should be similar in human preclinical trials.
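The identification and specificity criteria described above amount to a simple filter over the detected proteins. The sketch below is our own illustration of applying them, not the authors' pipeline; the record fields (`peptides`, `mascot_score`, `specificity`, `detections`, `runs`) are hypothetical names.

```python
# Hypothetical records for proteins identified in a vector-binding assay.
proteins = [
    {"name": "G3BP", "peptides": 12, "mascot_score": 310,
     "specificity": 120.0, "detections": 6, "runs": 6},
    {"name": "keratin (contaminant)", "peptides": 4, "mascot_score": 95,
     "specificity": 1.2, "detections": 2, "runs": 6},
]

def passes_criteria(p):
    """Identification (>=3 peptides, Mascot > 70), specificity (> 2 vs.
    bead-only control) and reproducibility (detected in >= 50% of runs)."""
    identified = p["peptides"] >= 3 and p["mascot_score"] > 70
    specific = p["specificity"] > 2
    reproducible = p["detections"] / p["runs"] >= 0.5
    return identified and specific and reproducible

print([p["name"] for p in proteins if passes_criteria(p)])  # ['G3BP']
```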
Congenital heart defects are associated with significant societal costs and are a leading cause of infant morbidity and mortality. Cardiovascular malformations account for nearly one-third of all congenital anomalies, making these the most common type of birth defects. While the specific causes of cardiac malformations have not been clearly identified, previous research has suggested that prenatal exposure to ambient air pollution is associated with their development. Most studies have focused on criteria pollutants such as particulate matter, nitrogen dioxide and ozone, but there remains considerable uncertainty whether these pollutants are primarily responsible for the observed adverse effects. There is increasing interest in ultrafine particles (UFPs), which are produced in large numbers by diesel vehicles and other combustion processes, but little is known regarding the impact of UFPs on foetal cardiovascular malformations. The relationship of ambient air pollution with risks of CHDs has been investigated in several previous studies, with associations mainly reported for coarctation of the aorta, tetralogy of Fallot, atrial septal defect and ventricular septal defect. A meta-analysis found an association between particulate matter and atrial septal defect. Other more recent studies found inconsistent associations between PM2.5 and specific types of CHDs. Recent experimental studies conducted in mice suggest that smaller particles, such as UFPs, could be responsible for PM-induced cardiac defects through oxidative stress, DNA damage, and alteration of molecular signalling or epigenetic events. Therefore, it is important to evaluate the possible impact of UFPs on foetal cardiac development. In the present study, we examined the association between UFP exposures during the early weeks of pregnancy and CHDs using a population-based cohort study in Toronto, Canada. We also evaluated if ambient UFPs are independently associated with CHDs after adjusting for major ambient air pollutants, namely particulate matter with aerodynamic diameters of ≤2.5 μm (PM2.5), nitrogen dioxide, and ozone. We used a retrospective cohort of pregnant women giving birth to live born singleton infants between April 1st 2006 and March 31st 2012 in Toronto, Canada. Mother-infant pair data were obtained from the Better Outcomes Registry & Network (BORN) Ontario database. Gestational age was determined from the mother's last menstrual period and ultrasound dating. In particular, for every mother-infant pair we identified the period between the 2nd and 8th week post conception, based on previous studies, in order to focus on the gestational period when the foetal heart begins to form. Postal Code Conversion File Plus software was used to obtain the geographic coordinates of maternal place of residence during the critical period of exposure, based on residential postal code reported in health administrative data. Maternal residential 6-digit postal codes during pregnancy were obtained from the Registered Persons Database, which contains annual demographic information on Ontario residents. This database also records postal code changes which have been reported to the Ministry of Health for all Ontario residents who have ever had a health insurance number. It contains the postal code and the start and end dates defining the period during which the postal code applied to the subscriber.
1st half of each year, or else the earliest one identified in the 2nd half of each year. Therefore, this database can capture, to some extent, postal code changes during pregnancy and assign exposures accordingly. The housing and linkage of the administrative data sources were conducted at ICES in Ontario, Canada. ICES uses encrypted unique identifiers based on universal health insurance numbers to be able to accurately gather information on individuals across these different data sources. Subjects were excluded if they had a residential postal code outside Toronto, a missing postal code, a missing health card number, a missing date of birth and/or missing information on sex. We obtained CHD outcomes after linking mother-infant pair data to hospitalisation data from the Hospital Discharge Abstract Database (DAD) of the Canadian Institute of Health Information. We identified CHDs from birth to one year of age. Ontario, the most populous province in Canada, has a publicly funded universal medicare system for hospital, laboratory, and physician services that covers the whole population, including all hospital births in Ontario. We identified cases with any major CHDs as well as seven major subtypes: transposition of great vessels, ventricular septal defect, atrial septal defect, atrioventricular defect, tetralogy of Fallot, tricuspid atresia and stenosis, pulmonary valve stenosis and coarctation of aorta. Here we focus mainly on overall CHDs, ventricular septal defect and atrial septal defect because too few cases were available for the other subtypes. We excluded births with chromosomal abnormalities. Defects were diagnosed based on ultrasound examinations in utero or postnatally before discharge from hospital. The CHD diagnoses were identified on the infant's hospital discharge abstract, and linkage between DAD and BORN was performed using encrypted unique identifiers, which is the standard approach in linking health administrative data within the Institute for Clinical Evaluative Sciences (ICES) in Ontario, Canada. Air pollution exposure estimates were assigned to the geographical coordinates representing the centroid of each mother's 6-digit postal code of residence during weeks 2 to 8 of pregnancy. In Toronto, 6-digit postal codes are generally represented by one side of a city block or a large apartment complex. We assigned residential exposure to ambient UFPs derived from a land use regression model developed using mobile monitoring data collected for two weeks in the summer and one week in the winter, including data from 405 road segments distributed across the city of Toronto. In brief, the monitoring was conducted using 3 separate vehicles equipped with rooftop monitoring devices measuring
Background: Cardiovascular malformations account for nearly one-third of all congenital anomalies, making these the most common type of birth defects. Little is known regarding the influence of ambient ultrafine particles (UFPs, <0.1 μm) on their occurrence. Methods: A total of 158,743 singleton live births occurring in the City of Toronto, Canada between April 1st 2006 and March 31st 2012 were identified from a birth registry. Associations between exposure to ambient UFPs between the 2nd and 8th week post conception, when the foetal heart begins to form, and CHDs identified at birth were estimated using random-effects logistic regression models, adjusting for personal- and neighbourhood-level covariates.
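For reference, reporting an odds ratio per interquartile-range increase follows from rescaling the fitted logistic regression coefficient; this is the standard construction, written here in our own notation.

```latex
% Logistic model with exposure x and covariates z:
%   \operatorname{logit} P(\mathrm{CHD}=1) = \alpha + \beta x + \gamma^{\top} z
% Odds ratio per interquartile-range (IQR) increase in exposure:
\mathrm{OR}_{\mathrm{IQR}} = \exp\!\left(\hat{\beta}\cdot \mathrm{IQR}\right),
\qquad
\mathrm{CI}_{95\%} = \exp\!\left[\left(\hat{\beta} \pm 1.96\,\widehat{\mathrm{SE}}(\hat{\beta})\right)\mathrm{IQR}\right]
```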
previous studies can be attributed to different exposure assessment methods and differences in local air pollution emission sources. Our findings also showed that O3 was potentially associated with atrial septal defect as well as atrioventricular defect, but the associations did not remain statistically significant when adjusting for other pollutants. In a study conducted in Wuhan, China, authors found associations between exposure to O3 and overall CHDs, ventricular septal defect and tetralogy of Fallot, with effect estimates remaining statistically significant in multi-pollutant models. In another study conducted in Taiwan, each 10 ppb increase in O3 exposure during the first 3 gestational months was associated with the risks of ventricular septal defects, atrial septal defects, and patent ductus arteriosus. However, effect estimates were not adjusted for other pollutants. Further studies are required regarding the impact of O3 on the risk of CHDs when investigating these impacts in multi-pollutant models. It is important to acknowledge several limitations of this study. First, we evaluated multiple air pollution measures on the risk of CHDs and CHD subtypes, which might have given rise to chance associations. Our exposure estimates for UFPs and NO2 for the time period under study were assigned using LUR models based on data collected from short-term monitoring campaigns, using a temporal scaling adjustment in order to capture different periods of exposure. We were therefore unable to obtain spatial-temporal ground estimates measured across the City of Toronto due to technological challenges and high costs. Also, our monitoring campaign for developing models for UFPs was conducted toward the end of the study time period. However, we applied previously published methods in order to capture as accurately as possible temporal changes in UFPs and NO2, which were applied in a recent publication of our study group. However, our UFP exposure model could be impacted by broad changes in emissions over time and by local changes in infrastructure that could influence the movement of traffic sources through space. This likely resulted in exposure misclassification, but a systematic difference in the magnitude of exposure error by CHD case status seems unlikely and the overall impact of exposure measurement error was likely a bias toward the null. In addition, since outdoor UFP concentrations have likely decreased over time, our data may not be appropriate for identifying absolute threshold values for UFP impacts on CHDs if overall exposure levels were elevated toward the beginning of the follow-up period. We also need to acknowledge that there may be potential residual confounding. For example, no individual-level information was available for income, education, ethnicity and use of folic-acid supplements and medications during pregnancy. However, we controlled for some neighbourhood-level SES factors, which may have partially accounted for these missing variables. We were also not able to provide information on CHD cases that were identified from different sources, as this information was not available from health administrative data. In addition, there may be some level of uncertainty in estimating conception dates, which likely affected our capacity to identify the critical window under study. However, this level of error is likely the same for all participants and potentially resulted in an underestimation of risk estimates. Some of the strengths of this study include the air pollution exposure estimates
that captured both spatial and temporal variation, the large sample size and the several important confounders included in the analyses. We also identified CHDs based on hospital discharge records at birth, which have been previously used in Canada and have shown high sensitivity and specificity. The risk of selection bias was likely reduced due to the population-based approach we used. This is also the first study, to our knowledge, to examine the effects of prenatal exposure to UFPs on the risk of CHDs. In this study, we found that exposure to UFPs during weeks 2 to 8 of pregnancy was associated with increased odds of ventricular septal defect, independent of other air pollutants including PM2.5, NO2 and O3. Further research is needed on the effects of UFPs on foetal cardiac development.
Objective: This population-based study examined the association between prenatal exposure to UFPs and congenital heart defects (CHDs). We also investigated multi-pollutant models accounting for co-exposures to PM2.5, NO2 and O3. Results: A total of 1468 CHDs were identified. In fully adjusted models, UFP exposures during weeks 2 to 8 of pregnancy were not associated with overall CHDs (odds ratio (OR) per interquartile range (IQR) increase = 1.02, 95% CI: 0.96–1.08). When investigating subtypes of CHDs, UFP exposures were associated with ventricular septal defects (OR per IQR increase = 1.13, 95% CI: 1.03–1.33), but not with atrial septal defect (OR per IQR increase = 0.89, 95% CI: 0.74–1.06). Conclusion: This is the first study to evaluate the association between prenatal exposure to UFPs and the risk of CHDs. UFP exposures during a critical period of embryogenesis were associated with an increased risk of ventricular septal defect.
parameters in assessing the conformance between simulated and observed plume behaviour, provided that the detection threshold of seismic measurements is taken into account in the study. The “maximum lateral plume migration” is a parameter that is difficult to derive from seismic time-lapse amplitude maps, which are commonly used to derive the lateral extent of CO2 reservoirs. If this parameter is to be used in addition to the plume footprint area and volume, it would be preferable to derive the lateral migration distance on an interpreted amplitude map, excluding small-scale but high-amplitude noise patches that severely affect the parameter. Whereas the previously discussed performance parameters compare geometrical relations without considering the “real” shapes of the plumes, the similarity index, derived from the Sørensen–Dice coefficient, provides a quantitative measure for the areal overlap between observed and simulated plumes. An investigation of this parameter for the Ketzin pilot site has shown that a maximum “similarity” between simulated and observed plumes is achieved when concentrating on thickness thresholds above 6.5 m and normalized amplitude thresholds of at least 0.2. These observations can be used as indicators of conformance quality between observed and simulated plumes. Conformance can be regarded as “high” if the maximum similarity index is reached for small thickness threshold and amplitude threshold values. In addition, a high absolute value of the maximum similarity index needs to indicate that the shapes and propagation directions of the CO2 plume conform for the observed and simulated plumes. A maximum similarity index of 0.7 for the Ketzin pilot site data and simulations may be regarded as a reasonable result, indicating that the greater part of the simulated and observed plumes overlap with each other. In the conformance assessment of larger-scale storage sites, this parameter will also be useful in showing the convergence between observed and simulated plume behaviour, by applying the same investigation to monitoring data in comparison with simulated plumes based on various realisations of reservoir models, starting with initial models set up in the early operational phase of a storage site and continuing with updated models after incorporating an increasing amount of monitoring data collected during the operational phase.
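As a concrete illustration, the similarity index can be computed directly from binary maps of the observed and simulated plume footprints. The sketch below is our own minimal example, not the project's code, and the plume masks are synthetic.

```python
import numpy as np

def similarity_index(observed, simulated):
    """Sørensen–Dice coefficient between two binary plume maps:
    2|A ∩ B| / (|A| + |B|); 1.0 means identical footprints."""
    a, b = observed.astype(bool), simulated.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Synthetic grids (True = plume present above the chosen threshold).
obs = np.zeros((100, 100), dtype=bool); obs[30:70, 30:70] = True
sim = np.zeros((100, 100), dtype=bool); sim[35:75, 35:75] = True
print(round(similarity_index(obs, sim), 2))  # 0.77 for this overlap
```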
Demonstrating conformity between observed and simulated plume behaviour is one of the main high-level requirements that have to be fulfilled by an operator of a CO2 storage site in order to assure safe storage operations and to be able to transfer liability to the public after site closure. The observed plume behaviour is derived from geophysical and/or geochemical monitoring. Repeated 3D seismic observations have proven to provide the most comprehensive image of a CO2 plume in various projects such as Sleipner, Weyburn, or Ketzin. The simulated plume behaviour is derived from reservoir simulation using a model calibrated with monitoring results. Plume observations using any monitoring method are always affected by limited resolution and detection ability, and reservoir simulations will only be able to provide an approximated representation of the occurring reservoir processes. Therefore, full conformity between observed and simulated plume behaviour is difficult to achieve, if it can be achieved at all. It is therefore of crucial importance for each storage site to understand to what degree conformity can be achieved under realistic conditions, comprising noise-affected monitoring data and reservoir models based on geological uncertainties. We applied performance criteria (plume footprint area, lateral migration distance, plume volume, and similarity index) for a comparison between monitoring results (4D seismic measurements) and reservoir simulations, considering a range of seismic amplitude values as noise threshold and a range of minimum thickness of the simulated CO2 plume. Relating the performance criteria to the noise and thickness threshold values allows assessing the quality of conformance between simulated and observed behaviour of a CO2 plume. The Ketzin site is provided with a comprehensive monitoring data set and a history-matched reservoir model. Considering the relatively high noise level, which is inherent for land geophysical monitoring data, a reasonable conformance between the observed and simulated plume behaviour is demonstrated.
epithelium but separated from the tissues surrounding and underlying it. In the human tissues examined, there was significant, presumably age-related hair cell loss evidenced by the relatively low density of hair cells in comparison with the tissue from the young mice, but this did not seem to affect the extent of supporting cell coupling. In the organ of Corti of mice, CX30 is the predominant CX in the gap junctions between the supporting cells that surround the outer hair cells, and in a strain of mouse from which CX30 has been ablated, the pattern of lesion repair by supporting cells when hair cells die is disrupted. In that model, supporting cells fail to expand to fill the spaces left by dying hair cells. Supporting cell expansion is an essential phase of epithelial repair in the inner ear, and seems to rely on functional coupling via gap junctions. Consequently, the maintenance of coupling with extensive hair cell loss as observed in the present study is likely to be important in enabling coordination of repair responses of supporting cells as hair cells progressively die, and ensuring maintenance of the tissue's integrity throughout human life. It has been suggested that in mammals the actin assemblies associated with the adherens junctions at the necks of supporting cells provide enhanced mechanical stability of the epithelium, and may be related to the inability of supporting cells in the mammalian inner ear to undergo cell division. A careful examination of these structures in the present study suggests they become remodeled after hair cell loss. As this previous work has suggested, normally in the human vestibular sensory epithelia the actin assemblies associated with adherens junctions are unusually wide and deep. However, in regions of hair cell loss in tissue fixed immediately upon arrival in the laboratory, and even more so in tissue from which all hair cells had been ablated after exposure to gentamicin, these actin assemblies were often much less extensive. Although it has been known for some time that the new intercellular junctions that form between supporting cells when closing the lesion caused by loss of a hair cell initially have little actin, our present results indicate that existing wide junctions can become remodeled. It seemed that entire actin assemblies could be released from the membrane and displaced deeply into the cell, or sometimes were displaced while still attached to the membrane before being released into the cytoplasm of the cell body region, whereas at the junctional region smaller actin assemblies re-formed. What the eventual fate of these detached actin assemblies might be was not clearly resolved, but presumably they are broken down. Nor was the mechanism that underlies displacement down the membrane clear. Nevertheless, the result was that, after hair cell loss, the junctions between supporting cells were much less complex than in undamaged regions of tissue with a complement of hair cells. This clearly indicates that supporting cells in the human vestibular sensory epithelia retain a capacity to undergo structural remodeling in response to changing environmental conditions. It may also have implications for attempts to replace lost hair cells using exogenously applied precursors derived from stem cells. The incorporation of such cells into the epithelium requires disruption of the junctions between supporting cells. Disruption of the extensive junctional complexes normally present in the epithelium would be difficult to
achieve, but it may be much less so in epithelia where hair cell loss has occurred and the junctions are less complex. This study demonstrates not only the viability of human vestibular epithelia harvested during trans-labyrinthine surgery, but also characterizes some effects of ageing. It demonstrates that tissue harvested from the human inner ear provides an in vitro model that can be used in translational studies for regenerative therapeutics and for studies of ageing.
Balance disequilibrium is a significant contributor to falls in the elderly. The most common cause of balance dysfunction is loss of sensory cells from the vestibular sensory epithelia of the inner ear. However, inaccessibility of inner ear tissue in humans severely restricts possibilities for experimental manipulation to develop therapies to ameliorate this loss. We provide a structural and functional analysis of human vestibular sensory epithelia harvested at trans-labyrinthine surgery. We demonstrate the viability of the tissue and labeling with specific markers of hair cell function and of ion homeostasis in the epithelium. Samples obtained from the oldest patients revealed a significant loss of hair cells across the tissue surface, but we found immature hair bundles present in epithelia harvested from patients >60 years of age. These results suggest that the environment of the human vestibular sensory epithelium could be responsive to stimulation of developmental pathways to enhance hair cell regeneration, as has been demonstrated successfully in the vestibular organs of adult mice.
mode, which is a significant improvement compared to our previously reported results on GaAs/Si. Moreover, as the device has been fabricated by As-only growth and has a buffer as thin as ∼2 µm, the compatibility of the device with industry-standard fabrication processes has also improved. Furthermore, for the LI curve, a single-facet output power of 48 mW is obtained at an injection current density of 500 A/cm2 with no thermal rollover. The calculated slope efficiency and external differential quantum efficiency achieved are ∼0.095 W/A and 9.95%, respectively, which indicate an improvement over our previous results. As heating affects the laser performance under CW mode, improved slope efficiency leads to better temperature stability of the device. For a quantitative investigation, the temperature dependence of Jth and the characteristic temperature T0 are shown in Fig. 5, which presents the LI characteristics of the InAs/GaAs QD laser on GaAs/Si at various operation temperatures under CW mode. Jth increases with rising operation temperature, and lasing was observed up to 52 °C. Also, the LI characteristic at 52 °C shows thermal rollover of the output power, which can be attributed to self-heating of the device. Hard soldering to high-thermal-conductivity heatsinks may help to improve the temperature performance of the device. The inset in Fig. 5 presents a laser emission spectrum at an injection current density of 180 A/cm2 at room temperature, which shows the lasing wavelength in the O-band. The temperature dependence of Jth is also shown in Fig. 5. The calculated characteristic temperature T0 was ∼60.8 K between 16 °C and 36 °C under CW mode. At higher temperatures (36–52 °C), the characteristic temperature T0 was degraded to ∼26.1 K. As a next step, the characteristic temperature T0 could be further improved by introducing a p-type modulation doping technique. In conclusion, we have successfully demonstrated an electrically pumped CW 1.3 µm InAs/GaAs QD laser directly grown on a CMOS-compatible Si substrate operating at a maximum temperature of 52 °C, with a reduced thickness of ∼2 µm for the high-quality III-As buffer layer, including In0.18Ga0.82As/GaAs DFLs. A low Jth down to 160 A/cm2 is achieved. A single-facet output power of 48 mW was observed without thermal rollover at 500 A/cm2. Much higher output power can be expected at higher injection current densities. Lasing was observed up to 52 °C, with a calculated characteristic temperature T0 of 60.8 K from 16 °C to 36 °C under CW operation mode. Further improvement in the temperature characteristics on this platform is possible with an improved fabrication process. Monolithically integrated InAs/GaAs QD lasers on exact Si, without an intermediate buffer, provide a promising approach for implementing high-performance, CMOS-compatible and low-cost on-chip light sources for Si photonics, which can be seen as a major step towards its commercialization.
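As a consistency check on the figures quoted above, the external differential quantum efficiency follows from the single-facet slope efficiency and the photon energy at the 1.3 µm lasing wavelength (our own back-of-the-envelope calculation):

```latex
% Photon energy at \lambda = 1.3\ \mu\mathrm{m}:
%   E_{ph} = hc/\lambda \approx 1.24\ \mathrm{eV\,\mu m} / 1.3\ \mu\mathrm{m} \approx 0.95\ \mathrm{eV}
% Single-facet external differential quantum efficiency:
\eta_d \;=\; \frac{q}{E_{ph}}\,\frac{dP}{dI}
\;\approx\; \frac{0.095\ \mathrm{W/A}}{0.95\ \mathrm{V}}
\;\approx\; 10\,\%
% consistent with the quoted value of 9.95%.
```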
The concept of high-efficiency, high-reliability and low-threshold electrically pumped lasers monolithically grown on silicon has attracted great attention over the past several decades, as a promising on-chip optical source for Si photonics. In this paper, we report an electrically pumped continuous-wave (CW) 1.3 µm InAs/GaAs quantum dot (QD) laser grown on a complementary metal-oxide-semiconductor (CMOS) compatible exact (0 0 1) Si substrate with reduced GaAs buffer thickness down to ∼2 µm. A threshold current density (Jth) as low as ∼160 A/cm2 has been achieved at room temperature. The characteristic temperature (T0) obtained is ∼60.8 K and laser operation is observed up to 52 °C under CW mode. These results suggest that an O-band InAs/GaAs QD laser could be very promising for developing a monolithically integrated on-chip optical source for Si photonics.
the tip breakage risk due to much higher tensile and shear stress in the tip region. Furthermore, Fts is asymmetric in measurements of rising and falling edges of steep nanostructures. The measurement of structures at rising edges may lead to a much larger Fts, thus resulting in a higher risk of tip breakage. This simulation result agrees well with experimental results, where significant changes of the AFM torsion signal were observed in the measurements of steep line features. The investigations in this paper reflect a fundamental limit of the classical AFM technique. The AFM technique typically applies its cantilever as the sensor unit, where the static deformation or dynamic vibration of the cantilever is measured to detect the tip-sample interaction force. However, as a cantilever-type sensor, the AFM probe is predominantly sensitive in its bending direction and is consequently a 1D sensor only. It has an inherent problem in measuring, for example, steep structures or 3D structures. To solve this problem, AFM techniques such as tilting AFM and 3D-AFM have recently been developed. However, the development of AFM probes capable of true 3D detection of tip-sample interaction forces would be the ultimate solution to such problems.
Tip abrasion is a critical issue, particularly for high-speed atomic force microscopy (AFM). In this paper, a quantitative investigation of the tip abrasion of diamond-like carbon (DLC) coated tips in a high-speed metrological large-range AFM device is detailed. During the tests, different scanning speeds up to 1 mm/s and different vertical load forces up to approximately 33.2 nN are applied. Various tip characterization techniques such as scanning electron microscopy (SEM) and AFM tip characterizers have been jointly applied to measure the tip form change precisely. The experimental results show that the tip form changes abruptly rather than progressively, particularly when structures with steep sidewalls were measured. This result indicates the increased tip breakage risk in high-speed AFM measurements. To understand the mechanism of tip breakage, the tip-sample interaction is modelled, simulated and experimentally verified. The results indicate that the tip-sample interaction force increases dramatically in measurement scenarios of steep surfaces.
in seismicity. A stress change on the order of ~0.1 MPa is typically considered necessary to trigger seismicity, particularly when considering aftershocks. The seasonal stress imposed at Aluto-Langano will have contributions from both surface and subsurface components and can be estimated from GRACE satellite data, which measured the large-scale variation in water storage. For the central Main Ethiopian Rift, the annual variability in liquid water equivalent thickness was found to be ~6 cm, equivalent to a vertical load of <1 kPa. Although this is sufficient to produce seasonal vertical deformation of up to 10 mm at GPS sites in the region, it would be unlikely to trigger regional seismicity. The heterogeneous distribution of surface and subsurface water storage means the 1° resolution of the GRACE satellite is unlikely to capture the spatial variability in the magnitude of surface loading. For example, the 10–15 cm variation in the height of Lake Ziway is twice that inferred from the GRACE measurements. Although monitoring data are not available for Lakes Abijata, Langano and Shala, we expect the variations in lake level to be larger there, as the wide, shallow Lake Ziway produces a more muted response to hydrological changes than its neighbours. For example, studies have shown that water abstraction projects in the Lake Ziway catchment caused a greater longer-term reduction in the Lake Abijata water level than for Lake Ziway. However, even taking the local variations into account, the magnitude of the loading is likely to be <0.01 MPa, below the threshold typically considered for aftershocks or regional earthquakes. The response to loading is restricted to shallow seismicity occurring within Aluto's geothermal reservoir, with no seasonal pattern in regional or deep seismicity. Hydrothermal systems, particularly those containing gas bubbles, are more sensitive to small stress changes than surrounding areas. Distant large earthquakes have caused seismic swarms and subsidence at hydrothermal systems in response to dynamic stresses as low as 0.01 MPa. The abundant seismicity, high b-value and low Vp/Vs ratio of the Aluto geothermal reservoir suggest it is gas-rich and critically-stressed, thus sensitive to small-magnitude seasonal variations in stress associated with hydrological loading. Alternatively, stress changes can cause failure of an impermeable barrier, causing brecciation of a shallow aquifer and inducing seismicity. Seasonal variations in seismicity are seen across a range of settings, including in hydrothermal reservoirs. We combine observations of seismicity at Aluto volcano, Ethiopia, with geodetic measurements from a GPS network and hydrological observations to distinguish between possible mechanisms for seasonal seismicity. We conclude that the major seasonal peak in seismicity is driven by an increase in vertical loading as the lakes and groundwater aquifers fill. The magnitude of the stress change is insufficient to cause a regional change in crustal seismicity, but the Aluto-Langano geothermal reservoir is gas-rich and critically-stressed, thus particularly sensitive to low-magnitude stress changes. Thus seismic and GPS data suggest that fractured reservoirs are closer to failure at times of the year defined by the local hydrology, reservoir geometry and fluid pathways.
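The surface-load estimates quoted above follow directly from the weight of the stored water column; as a quick check, in our own notation:

```latex
% Vertical stress from a change \Delta h in liquid water equivalent thickness:
\sigma_v = \rho_w\, g\, \Delta h
% GRACE:      \Delta h \approx 6\ \mathrm{cm} \Rightarrow
%             \sigma_v \approx 1000 \times 9.81 \times 0.06 \approx 0.6\ \mathrm{kPa} \;(<1\ \mathrm{kPa})
% Lake Ziway: \Delta h \approx 10\text{--}15\ \mathrm{cm} \Rightarrow
%             \sigma_v \approx 1.0\text{--}1.5\ \mathrm{kPa} \;(\ll 0.01\ \mathrm{MPa})
```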
Seasonal variations in the seismicity of volcanic and geothermal reservoirs are usually attributed to the hydrological cycle. Here, we focus on the Aluto-Langano geothermal system, Ethiopia, where the climate is monsoonal and there is abundant shallow seismicity. We deployed temporary networks of seismometers and GPS receivers to understand the drivers of unrest. First, we show that a statistically significant peak in seismicity occurred 2–3 months after the main rainy season, with a second, smaller peak of variable timing. Seasonal seismicity is commonly attributed to variations in either surface loading or reservoir pore pressure. As loading will cause subsidence and overpressure will cause uplift, comparing seismicity rates with continuous GPS enables us to distinguish between mechanisms. At Aluto, the major peak in seismicity is coincident with the high stand of nearby lakes and maximum subsidence, indicating that it is driven by surface loading. The magnitude of loading is insufficient to trigger widespread crustal seismicity but the geothermal reservoir at Aluto is likely sensitive to small perturbations in the stress field. Thus we demonstrate that monsoonal loading can produce seismicity in geothermal reservoirs, and the likelihood of both triggered and induced seismicity varies seasonally.
CABMOD-3 profile consistently shows Ni ablating later than the experimental profile. This suggests that the Ni grains in the Allende particles are small and hence have a large surface area to volume ratio. Hence, they ablate much faster than one large particle of equivalent total volume, which is what CABMOD-3 assumes. This is consistent with the Ni EDX images in Figs. 2e and 3e. Another possibility is that the small grains melt and rapidly migrate to the surface, where the low surface tension of Fe–Ni sulfide ensures the metal sulfide melt spreads out over the particle surface and hence ablates rapidly. An alternative explanation to these effects from particle size is that carbothermic reactions such as reaction lead to decomposition of the sulfides and oxides present, resulting in evaporation of Ni at temperatures below the expected thermal decomposition temperatures. It is likely that a combination of all of these effects is responsible for the early release of Ni in the experimental runs. The agreement with the Na profiles is not as good, with the simulated profile from CABMOD-3 peaking between 150 K before, and up to 70 K after, the experimental peak. There is a distinct trend of ablation at higher temperatures for larger radii in the experimental case that is absent in the CABMOD-3 simulations. This is possibly due to the melting point of the particles varying much more than CABMOD-3 assumes. By the time that Ni ablates, the particle has completely melted in all cases. The smallest particles melt and ablate over a broader range of temperatures than larger particles. This smallest size bin would include any particles smaller than 38 μm in diameter and so could include a substantial fraction of very small particles that melt very quickly. The differences in composition between the individual particles manifest in the spread of the Na and Ni profiles, in terms of temperatures of ablation and the broadness of individual peaks. This broadening effect is less pronounced in the case of relatively volatile species like Na and Ni than for more refractory metals. Experimental runs generally involved many particles. The smallest size bins required more particles for a measurable signal. Size 1 typically had 100s of particles, whereas size 3 required 5–10. The earlier than expected ablation of Na is less apparent for linear ramps applied to particles from the Chergach meteorite. The SEM-EDX mapping reported here did not detect Na. Previous analyses mapping Na in Chergach particles indicate a heterogeneous distribution, in contrast to the more homogeneous distribution of Allende. An explanation could be that Na occurs in larger grains that melt quickly but at higher temperatures. Chergach is an H5 ordinary chondrite. It has large grains of pure Fe as well as sizeable grains of Fe–Ni. The experimental Ni profiles suggest very little variation in ablation temperature between the different particle sizes. This implies a constant Ni grain size. As mentioned above, CABMOD-3 currently assumes all the metal sulfide phase resides in one mass. However, the SEM-EDX data suggest a large range of Ni grain sizes for both Allende and Chergach. Furthermore, given that meteoroids are an agglomeration of even more primitive minerals, it seems likely the Ni grain size would be independent of particle size. It seems likely, then, that there is in fact a bias produced by the meteorite grinding method that results in similar-sized metal grains. Frequently, experimental runs with Chergach particles would give no discernible Ni profile even though
there were particles visibly present and the Na profile looked normal.When only 5–10 particles are present, as in the experiments with the larger sizes of particles, it was possible to have a sample with no Ni included.This was very rarely seen for Allende samples which have a more homogeneous distribution of Ni grains.Allende and Chergach particles were also subjected to realistic atmospheric temperature profiles.These profiles were simulated by CABMOD-3, taking into account the physical heating of micrometeoroids upon entry into the Earth’s atmosphere.Smaller, slower micrometeoroids experience lower peak temperatures and longer duration heating than larger, faster ones.To a first approximation, the amount of Ni ablation depends largely on the maximum temperature of the profiles.Fig. 9 depicts atmospheric heating profiles over a range of initial velocities and particle sizes applied to samples from the Allende meteorite.Particle sizes and velocities were chosen based on the expected size and velocity distributions of particles from Jupiter family comets, at least as far as the thermal profiles could be reproduced within the temperature range of the pyrometer.The evaporation profiles of Na and Ni are overlaid with CABMOD-3 predictions.In general the rate of heating is very fast compared with the linear temperature ramps, up to 1400 K s−1.The evaporation peaks of Na and Ni therefore appear soon after the start of heating and are close together.The rapid heating rate also helps to minimize the spread of ablation temperatures compared with those observed for the linear ramps.In general this is well captured in the CABMOD-3 simulation.The simulated profiles are generally around 80% narrower than the experimental ones, but this difference is less than observed in previous MASI studies of other elements such as Fe and Mg.As observed in the linear ramps, the CABMOD-3 profile peaks slightly after the experimental profile in the majority of cases, again suggesting a larger surface area to volume ratio than accounted for in the model.Similarly to the linear temperature ramp data, the times of maximum evaporation of the experimental profiles in Fig. 9 are aligned before the average is taken, with uncertainty in time and signal represented by envelopes for each metal.The profiles are normalised to
Modelling the ablation of Ni from micrometeoroids upon their entry to the Earth's atmosphere enables us to better understand not just the Ni layers in the upper atmosphere but also the differential ablation of Fe.Small grain sizes are consistent with the Fe–Ni grain size observed in SEM-EDX mapping of the meteorite particles used for this study.
1 then multiplied by the fraction ablated.The bar at the top of each plot also represents the experimental fraction of Ni ablated out of the total Ni present in the particle.Fig. 10 plots the temperature of maximum ablation of Ni versus the maximum temperature experienced by the particle.Particles of the same size but moving at faster velocities undergo maximum ablation of Ni at slightly higher temperatures, according to CABMOD-3 simulations, but in general the maximum ablation happens at around 2250 K if the particle reaches this temperature.Overall, MASI measurements of the temperature of maximum ablation of Ni are lower than the CABMOD-3 predictions, consistent with the earlier than predicted ablation of Ni, as seen in Fig. 7.Fig. 11 depicts two examples of atmospheric ablation profiles with experimental and CABMOD-3 results from Allende particles, with altitude on the ordinate axis and with the atomic injection rate on the abscissa.Fe data from earlier measurements is also included.In both the laboratory MASI simulation of atmospheric entry, and the CABMOD-3 prediction, Ni ablates in a narrow altitude range slightly below Na and before the majority of Fe.CABMOD-3 predictions suggest that all the Ni in a carbonaceous chondrite experiencing temperatures greater than 2400 K will ablate.The experiments indicate a lower threshold of 2200 K for complete ablation.Again, this lower threshold can be largely explained by the small grain sizes present in the actual meteorite samples and potential decomposition of sulfides in carbothermic reactions.While the altitude of maximum ablation and the relative amounts of ablated metal are well captured by CABMOD-3, the experimental profiles are again much broader than the computer simulation.This is particularly evident for Fe.As mentioned earlier, the combination of meteorite heterogeneity and electromigration results in a broadening of the experimental profiles.These effects are less pronounced for Ni, however, as it ablates early and so tends to avoid effects from particle movement on the filament.Many small grains would ablate more rapidly than one large grain of equivalent volume, since the rate of evaporation depends on surface area.Therefore, small grains of Fe–Ni–S would be expected to ablate rapidly once they melt.This matches what we see with Allende meteorite samples.In contrast, for micrometeoroids with high Fe–Ni content similar to that of Chergach, the likelihood is that the molten grains would combine as they migrate to the surface.FeS has a low surface tension and will spread out to cover the surface of the particle.It is this large surface area that allows for rapid evaporation of first FeS and then FeNi.CABMOD-3 in its current form does not allow for this possibility.Although Allende has a relatively low metallic Fe–Ni abundance, the improvements in CABMOD-3 make a significant difference to the shape of the simulated Fe ablation profiles.A similar difference would be seen for cosmic dust of CI chondritic composition, given that the metallic Fe–Ni abundance is similar.The ablation of metallic Fe and Ni would also be enhanced if FeO is reduced by the pyrolysis of carbon.CI and CM chondrites typically have C at abundances of around 2%.Here, we focus on the fraction of Ni and Na ablated as a more intuitive measure of ablation.Micrometeoroids enter the atmosphere with a range of particle sizes and velocities.The fractions ablated in Table 2 are averaged over the sizes and velocities used in the experimental MASI runs, weighted to account for 
the actual size and velocity distributions of dust from cometary and asteroidal sources.See Carrillo-Sánchez et al. for more details of these distributions or Bones et al. and Gómez-Martín et al. for examples for other elements.The CABMOD-3 simulations were run for the same set of particle sizes and velocities, allowing a direct comparison with the MASI results.The agreement in Table 2 is very good.The MASI experimental setup has been used to test the CABMOD-3 simulation of Ni ablation over slow linear temperature ramps and realistic atmospheric temperature profiles.In the absence of genuine pristine micrometeoroids, samples of powdered terrestrial meteorites serve as informative proxies as long as compositional differences between the meteorites and the interplanetary micrometeoroid population are understood.The mostly good agreement between CABMOD-3 simulations and MASI experiments gives us confidence in the ability of CABMOD-3 to simulate the ablation of metals from micrometeoroids entering the atmospheres of Earth and other planets.Typically the experimental data suggests that Ni ablates at lower temperatures than predicted by CABMOD-3.The reason is evident from the linear temperature ramp experiments: these experiments underline the importance of accurately representing Ni grain size to get the best prediction of Ni ablation as a function of altitude.Both MASI data and CABMOD-3 simulations agree that the ablation of Ni occurs rapidly and at the relatively low temperature of around 2200 K, before the bulk of the silicate Fe has ablated, as previously reported by Taylor et al.Ni is also seen to ablate completely for a wider range of particle sizes and velocities than Fe in both experiment and simulation.This may help to explain the observed depletion of Ni in micrometeorites.In a subsequent paper the impact of micrometeoroids on different planetary atmospheres will be examined.Simulations will be conducted using CABMOD-3 to compare the Meteoric Input Function for Fe, Ni and other elements for Earth, Mars and Venus.
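The grain-size argument above lends itself to a worked example. The sketch below is a back-of-envelope illustration rather than any part of CABMOD-3: it uses only the fact that evaporative mass loss scales with exposed surface area (as in Hertz-Knudsen/Langmuir-type evaporation treatments), so splitting a fixed metal volume into N equal grains multiplies the instantaneous evaporation rate by roughly N**(1/3). All names and numbers are illustrative.

import numpy as np

# N spheres of radius R / N**(1/3) have the same total volume as one
# sphere of radius R, but N * 4*pi*(R/N**(1/3))**2 = N**(1/3) * 4*pi*R**2
# of surface area, so they evaporate N**(1/3) times faster.

def surface_area_ratio(n_grains: float) -> float:
    return n_grains ** (1.0 / 3.0)

for n in (1, 10, 100, 1000):
    print(f"{n:5d} grains -> {surface_area_ratio(n):4.1f}x faster evaporation")

A thousand-fold subdivision of the metal phase thus gives only a ten-fold rate increase, but that is already enough to shift the ablation peak to noticeably lower temperatures on a slow linear ramp.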
Meteoritic particles (powdered terrestrial meteorites) and mineral proxies were flash heated to temperatures as high as 2700 K to simulate atmospheric entry.Slower linear heating ramps were also conducted to allow a more precise study of ablation as a function of temperature.Disagreement between the model and the data can be explained by the distribution of Ni in small grains in the meteorite samples, in contrast to the model assumptions of one molten metal phase.
scavengers for detecting holes, superoxide radicals and hydroxyl radicals, respectively. The likely mechanism is as follows: upon photon irradiation, photo-reduced metallic Ag is subsequently oxidised back into Ag3PO4, with H2O2 as the oxidant, whilst the photo-excited holes at the valence band directly oxidise the MB, MO and RhB dyes into carbon dioxide, water and intermediate products. As depicted in Fig. 19, when 1 mM EDTA was used as scavenger in the photocatalysis experiments, the degradation efficiency was greatly suppressed, decreasing from 99.11% to 35.51%. When the same concentration was used for XTT and Coumarin, the efficiencies were 95.02% and 96.74%, respectively. This suggests that the high efficiency of Ag3PO4 was mainly due to the photo-excited holes, which have a high oxidation potential of +2.45 V vs. NHE, in line with the observations of Yi et al. In summary, a facile method to synthesize an efficient and stable Ag3PO4, composed of cubic, rhombic dodecahedral, nanosphere and nanocrystal morphologies, for the degradation of MB, MO and RhB has been demonstrated. By controlling the pH at 6.5, a controlled morphology can be produced. Approximately 19% of the Ag+ is converted to Ag0. The Ag3PO4 photocatalyst can be rejuvenated after more than 4.30 hours of irradiation. Both the fresh and the rejuvenated Ag3PO4 nanostructures have higher photocatalytic reactivity than conventional Degussa TiO2 and some modified TiO2. Ag3PO4 was found to be efficient and stable even after repeated cycles. It is hoped that this work will inspire the exploration of similar methods to control the morphology and stability of other easily photo-corroded photocatalysts for efficient water purification, oil-spill remediation and general industrial wastewater treatment. Henry Agbe: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Nadeem Raza: Conceived and designed the experiments; Analyzed and interpreted the data. Aditya Chauhan, David Dodoo-Arhin and Vasant Kumar: Analyzed and interpreted the data. This work was supported by the University of Ghana/University of Cambridge Commonwealth split-site PhD programme. The authors declare no conflict of interest. No additional information is available for this paper.
Ag3PO4 photocatalyst has attracted the interest of the scientific community in recent times due to its reported high efficiency for water oxidation and dye degradation. However, Ag3PO4 photo-corrodes if an electron acceptor such as AgNO3 is not used as a scavenger. Synthesis of efficient Ag3PO4 followed by a simple protocol for regeneration of the photocatalyst is therefore a prerequisite for practical application. Herein, we present a facile method for the synthesis of a highly efficient Ag3PO4, whose photocatalytic efficiency was demonstrated in degradation tests with three different organic dyes: Methylene Blue (MB), Methyl Orange (MO) and Rhodamine B (RhB). Approximately 19% of the Ag3PO4 is converted to Ag0 after 4.30 hours of continuous UV-Vis irradiation in the presence of MB dye. We have shown that the Ag/Ag3PO4 composite can be rejuvenated by a simple chemical oxidation step after several cycles of photocatalysis tests. At an optimal pH of 6.5, a mixture of cubic, rhombic dodecahedral, nanosphere and nanocrystal morphologies of the photocatalyst was formed. H2O2 served not only as the chemical oxidant that re-inserts the surface metallic Ag into the Ag3PO4 photocatalyst but also as an agent that can control the morphology of the regenerated photocatalyst without the need for any other morphology-controlling agent (MCA). Surprisingly, the regenerated Ag3PO4 was found to have higher photocatalytic reactivity than the freshly made material, and to be at least 17 times superior to conventional Degussa TiO2 and some of the TiO2 composites tested in this work.
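As a side note on the efficiency figures quoted above: dye degradation efficiency is conventionally computed from the decay of the dye's absorbance maximum, since absorbance is proportional to concentration (Beer-Lambert). A minimal sketch, with made-up absorbance values rather than the study's measurements:

def degradation_efficiency(a_initial: float, a_final: float) -> float:
    # eta = (C0 - Ct) / C0 * 100, with absorbance A standing in for
    # concentration C via Beer-Lambert (A = epsilon * l * C).
    return (a_initial - a_final) / a_initial * 100.0

# Illustrative only: an absorbance drop from 1.80 to 0.016 gives ~99.1%.
print(f"{degradation_efficiency(1.80, 0.016):.2f} %")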
Evolutionary-based optimization approaches have recently shown promising results in domains such as Atari and robot locomotion but less so in solving 3D tasks directly from pixels.This paper presents a method called Deep Innovation Protection that allows training complex world models end-to-end for such 3D environments.The main idea behind the approach is to employ multiobjective optimization to temporally reduce the selection pressure on specific components in a world model, allowing other components to adapt.We investigate the emergent representations of these evolved networks, which learn a model of the world without the need for a specific forward-prediction loss.
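To make the selection mechanism concrete, here is a minimal sketch of how "temporally reducing selection pressure" can be implemented as a second objective, as we read the abstract: each individual carries an age that is reset whenever its world-model component is mutated, and survivors are the Pareto front of (maximize fitness, minimize age). This is an illustration of the idea, not the authors' code; the population bookkeeping is deliberately simplified, and mutation/evaluation are stand-ins.

import random

def dominates(a, b):
    # a dominates b if it is no worse on both objectives and better on one
    return (a["fitness"] >= b["fitness"] and a["age"] <= b["age"] and
            (a["fitness"] > b["fitness"] or a["age"] < b["age"]))

def pareto_front(pop):
    return [p for p in pop if not any(dominates(q, p) for q in pop if q is not p)]

def step(pop):
    parents = pareto_front(pop)
    children = []
    for _ in range(len(pop) - len(parents)):    # keep population size fixed
        child = dict(random.choice(parents))
        child["fitness"] += random.gauss(0, 1)  # stand-in for mutate + evaluate
        if random.random() < 0.5:               # world-model mutation:
            child["age"] = 0                    # reset age -> temporary protection
        children.append(child)
    for p in parents:
        p["age"] += 1                           # unchanged survivors grow older
    return parents + children

pop = [{"fitness": random.gauss(0, 1), "age": 0} for _ in range(20)]
for _ in range(50):
    pop = step(pop)
print(max(p["fitness"] for p in pop))

Because a freshly mutated world model re-enters with age zero, it sits on the Pareto front even if its fitness temporarily drops, giving the downstream controller generations to adapt before ordinary fitness pressure resumes.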
Deep Innovation Protection allows evolving complex world models end-to-end for 3D tasks.
We study the problem of designing provably optimal adversarial noise algorithms that induce misclassification in settings where a learner aggregates decisions from multiple classifiers. Given the demonstrated vulnerability of state-of-the-art models to adversarial examples, recent efforts within the field of robust machine learning have focused on the use of ensemble classifiers as a way of boosting the robustness of individual models. In this paper, we design provably optimal attacks against a set of classifiers. We demonstrate how this problem can be framed as finding strategies at equilibrium in a two-player, zero-sum game between a learner and an adversary, and consequently illustrate the need for randomization in adversarial attacks. The main technical challenge we consider is the design of best-response oracles that can be implemented in a Multiplicative Weights Update framework to find equilibrium strategies in the zero-sum game. We develop a series of scalable noise-generation algorithms for deep neural networks, and show that they outperform state-of-the-art attacks on various image classification tasks. Although there are generally no guarantees for deep learning, we show this is a well-principled approach in that it is provably optimal for linear classifiers. The main insight is a geometric characterization of the decision space that reduces the problem of designing best-response oracles to minimizing a quadratic function over a set of convex polytopes.
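To illustrate the game-theoretic machinery, here is a minimal sketch of Multiplicative Weights Update coupled with a best-response oracle on a finite zero-sum game. The payoff matrix is a toy stand-in (entry (i, j) read as the adversary's payoff when noise strategy i meets classifier choice j); it is not the paper's oracle, which operates over convex polytopes for linear classifiers.

import numpy as np

def mwu_equilibrium(payoff, rounds=2000, eta=0.1):
    n_rows = payoff.shape[0]
    w = np.ones(n_rows)
    avg = np.zeros(n_rows)
    for _ in range(rounds):
        p = w / w.sum()
        j = int(np.argmin(p @ payoff))      # learner's best response to the mix
        w = w * np.exp(eta * payoff[:, j])  # reward strategies that beat it
        w /= w.sum()                        # renormalize to avoid overflow
        avg += p
    return avg / rounds                     # average play ~ equilibrium strategy

payoff = np.array([[1.0, 0.0, 0.3],
                   [0.2, 0.9, 0.1],
                   [0.5, 0.4, 0.8]])
print(mwu_equilibrium(payoff))

The returned mixed strategy is itself the point of the exercise: against an ensemble, the adversary's optimal play is generally randomized rather than a single fixed perturbation.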
This paper analyzes the problem of designing adversarial attacks against multiple classifiers, introducing algorithms that are optimal for linear classifiers and which provide state-of-the-art results for deep learning.
Many tasks in natural language processing and related domains require high precision output that obeys dataset-specific constraints.This level of fine-grained control can be difficult to obtain in large-scale neural network models.In this work, we propose a structured latent-variable approach that adds discrete control states within a standard autoregressive neural paradigm.Under this formulation, we can include a range of rich, posterior constraints to enforce task-specific knowledge that is effectively trained into the neural model.This approach allows us to provide arbitrary grounding of internal model decisions, without sacrificing any representational power of neural models.Experiments consider applications of this approach for text generation and part-of-speech induction.For natural language generation, we find that this method improves over standard benchmarks, while also providing fine-grained control.
A structured latent-variable approach that adds discrete control states within a standard autoregressive neural paradigm to provide arbitrary grounding of internal model decisions, without sacrificing any representational power of neural models.
to be assessed with moral considerations in mind, i.e. value-based constructs of righteousness. Consequently, people are less inclined to evaluate grassroots communities based on what they deliver in return for financial contributions, moving beyond more traditional input–output balances. This aligns with our proposed “moralisation of provisioning”, which describes how new and possibly inconvenient terms of exchange still garner pragmatic legitimacy as a result of novel value propositions. For example, our empirical case draws a compelling picture of CSA members who highly value the expected benefits of ‘being outdoors’ and therefore agree to choice restrictions, up-front payment and weekly farm visits. Here, we align with research on consumer culture, where consumption choices and behaviour are understood from a social and cultural point of view. For example, our work aligns with Thompson and Coskuner-Balli, who conclude that unconventional systems of provision such as CSA sustain themselves in an increasingly corporate-dominated sphere of food through “a confluence of economic, ideological and cultural factors that leverage anti-globalization sentiments in ways that provide a marketplace resource for consumers to co-produce feelings of enchantment”. Likewise, the growing number of community energy projects in the Netherlands should be understood in light of normative, structural problems associated with centralised energy systems. For example, members of energy communities assign significance to gaining control over their energy production whilst coping with pragmatic inconveniences such as unstable supply. In conclusion, grassroots organisational survival thus seems to depend on whether its assessors understand ‘the new rules of the game’. This, in turn, requires grassroots entrepreneurs to intervene in, and thus manipulate, the cultural structures that steer one’s perception of reality and, more importantly, to identify and attract constituents who value these new moralities and the associated deliverables the organisation is equipped to provide. The use of legitimation as a theoretical lens to further scientific understanding of grassroots organisations appears to be useful. The legitimacy concept, borrowed from organisational sociology, sheds new light on entrepreneurial strategies and clarifies why and how grassroots organisations survive. Our study contributes to the understanding of legitimacy types, presented as distinct concepts in traditional organisational literature, by showing that they appear to be highly integrated and co-dependent in the case of grassroots organisations. We witness a step-wise legitimation process in which new moralities need to be understood in order to ‘righteously’ interpret what grassroots organisations can deliver. In addition, the build-up of legitimacy seems to accelerate as a result of positive feedback loops. As such, focusing on longitudinal patterns and cumulative causation in legitimacy creation may further the understanding of grassroots organisational survival in the form of reinforcing ‘motors’. In our studied grassroots case, we can deduce some evidence of such a ‘legitimacy motor’, assuming that sources of legitimacy vary with the CSA’s development stage. During initiation, CSAs need to position themselves locally and inform unaware audiences. They recruit members who reportedly join for food-related deliverables. During the first years of operation, CSAs focus strongly on ‘internal’ legitimacy to attain long-term relationships with their members. Concurrently, the
appreciation of immaterial or social benefits becomes highly important and constitutes the main source of legitimacy. In particular, in this initiation phase new morals are created for which sympathy grows. Finally, learning and demonstration of success are key in moving towards the maturity phase. As internal legitimacy is safeguarded through farm routines and the build-up of trust and social capital, CSA entrepreneurs find the time to invest in external legitimacy creation. The positive reputation and credibility of the CSA influence broader understanding of the CSA’s benefits and the acknowledgement of societal value by formal authorities. As such, CSAs gain additional support outside their ‘inner circle’. Concisely, it appears more important to invest in the CSA community before external legitimacy and strategic positioning of the organisation are sought. Finally, we propose several future research directions. Firstly, we suggest a further examination of social networks and the construction of social capital in grassroots organisations, as this seems positively correlated with survival. Scaling up and diffusion potentially challenge the social network conditions, such as trust and reciprocity, that characterise grassroots organisations. Secondly, the blossoming of grassroots initiatives with dissonant views on e.g. food or energy provision underlines the necessity of new performance criteria to grasp the initiatives’ multi-dimensional impacts. In a similar vein, Smith and Stirling recently directed academic attention to more practical questions regarding how best to appreciate and reimburse grassroots entrepreneurs. Next, we prioritise the advancement of a comparative approach to grassroots organisational survival to understand how much legitimation depends on distinct contextual conditions. Grassroots initiatives generally address structural problems in conventional systems that may differ across countries or within localities. As such, how activities are justified is expected to coincide with ‘what matters to local people’. Whilst grassroots initiatives generally endorse similar ideologies, a comparative case study across various countries or sectors is needed to shed light on possible differences in legitimacy creation. For example, as we focused on CSA in the Netherlands, contextual characteristics including the affordability, accessibility and availability of food in the Netherlands, as well as its high safety standards for food products, arguably affect the legitimation process. These contextual specificities may then explain why pragmatic legitimacy in Dutch CSAs involves social gains rather than accommodating basic needs such as access to safe food. Moreover, as we noticed during the interviews, trigger events such as food scandals and national trends concerning alienation seem to have fostered the legitimation of community-driven food initiatives. However, as this paper focuses on entrepreneurial efforts in legitimation, the influence of such external conditions that may foster or hinder legitimation has not been studied explicitly. Future research should endeavour to link such external, landscape factors to legitimation strategies within niches. As a final research priority, we encourage researchers to uncover how
Grassroots initiatives for sustainable development are blossoming, offering localised alternatives for a range of societal functions including food and energy.Research into grassroots organisations often recognises the difficulties grassroots groups face to continue operations.However, there is a need for better understanding dynamics that enable or constrain grassroots organisational survival.Here, we specifically shed light on how such survival is dependent on the organisation's ability to construct legitimacy.In the context of community supported agriculture (CSA), we explore different legitimacy types and strategies.We learned that CSAs predominantly work to garner legitimacy from their members and that survival seems associated with social capital building.In addition, we observed a moralisation of food provision that describes why new and possibly inconvenient terms of exchange still amass legitimacy.As external audiences remain at a distance, they often misunderstand CSA, their deliverables and impacts on social welfare.
is highly pertinent.In order to explore this possibility, Shimotake and colleagues conducted a direct comparison between the location of the critical grid site for semantic processing in the patients, and the peak activations obtained from neurologically-intact participants when undertaking various verbal and nonverbal tasks.Shimotake et al. found that the two vATL sites were remarkably similar and are the same area highlighted in the present study.This strongly suggests that this ventral ATL region remains as a critical semantic region in the patients."Of course, it is possible that other regions within the semantic network change or alter their function to support patients' semantic performance – which is not possible for us to assess in the present investigation but could be explored in future fMRI studies.The rise of semantic coding in the vATL LFPs from 250 msec is consistent with other methods that have probed the ATL time-course for semantic processing.This includes the ATL convergence of auditory and visual semantic processing observed in MEG, semantic priming and category differences in multimodal imaging and depth electrode studies.One recent MEG study found an earlier vATL effect; specifically, an enhanced event-related regression coefficient for items with a relatively greater ratio of shared to distinctive semantic features.Future research is needed to relate these regression measures to the full coding of individual concepts.One possibility, as suggested by Clarke et al., is that very general aspects of meaning are activated in the vATL from very early time-points whilst a fuller, individuated conceptual representation gradually emerges later.This hypothesis is consistent with the hub-and-spoke computational models of semantic representation; initial input to the hub layer very quickly drives apart the activation patterns of items from unrelated domains, but it takes longer for multimodal reverberation within and between the hub and spokes to settle the model into item-specific semantic representations.The implantation of subdural electrodes is used in some neurosurgical patients in order to assess the focus and nature of seizures directly, as well as map eloquent areas for language, motor and other functions.By combining this form of electrophysiology with active tasks, it is possible to explore the cortical evoked responses across different tasks and over time.The present study demonstrates that important additional information can be extracted from these same data by utilizing RSA.This and related analysis methods, allow investigation of the type or form of information that is coded in specific brain regions.For the reasons set out above, in this study we assessed the potential semantic, visual and phonological representation of the ventral and lateral temporal regions.It is, of course, possible to generalize this approach both to other brain regions and types of information.As well as assessing the role of rostral temporal areas in semantic representation at the group level, we were also interested in considering the utility of this new approach at the individual level.We found that the group pattern could be observed in the vast majority of individual patients, with some variation in when semantically-related patterns were represented in the naming LFPs.This is an encouraging first step and it is possible that the consistency across individual patients may be improved through future studies which further refine data collection and analysis methods.Finally, we note that 
the results of the current study add to those that implicate the vATL as an important contributor to language-semantic function.Consistent with the hypothesis posed in a previous study, it would appear that the “basal temporal language area” may reflect its more primary role in representing semantic concepts.This result is consistent with the fact that anomia and mild semantic impairments are observed after resection and that, if the region is spared by using a subtemporal surgical approach, verbal memory is significantly better in the chronic phase post surgery.
Electrocorticograms (ECoG) provide a unique opportunity to monitor neural activity directly at the cortical surface.Ten patients with subdural electrodes covering ventral and lateral anterior temporal regions (ATL) performed a picture naming task.The results indicate that the neural activity in the ventral subregion of the ATL codes semantic representations from 250 msec after picture onset.The observed activation similarity was not related to the visual similarity of the pictures or the phonological similarity of their names.In keeping with convergent evidence for the importance of the ATL in semantic processing, these results provide the first direct evidence of semantic coding from the surface of the ventral ATL and its time-course.
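For readers unfamiliar with the analysis, the following is a minimal sketch of the RSA logic described above, not the authors' pipeline: for each time window, a representational dissimilarity matrix (RDM) over the pictured items is computed from the multi-electrode LFP pattern and compared, via Spearman correlation, with candidate model RDMs (semantic, visual, phonological). Array shapes, the window length, and the synthetic demo data are all illustrative.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def neural_rdm(patterns):
    # patterns: items x features -> condensed correlation-distance RDM
    return pdist(patterns, metric="correlation")

def rsa_timecourse(lfp, model_rdms, win=25):
    # lfp: items x electrodes x timepoints; model_rdms: name -> condensed RDM
    n_items, _, n_time = lfp.shape
    result = {name: [] for name in model_rdms}
    for t0 in range(0, n_time - win + 1, win):
        window = lfp[:, :, t0:t0 + win].reshape(n_items, -1)
        rdm = neural_rdm(window)
        for name, model in model_rdms.items():
            result[name].append(spearmanr(rdm, model).correlation)
    return result  # one correlation per window per model

rng = np.random.default_rng(0)
lfp = rng.standard_normal((40, 20, 250))          # 40 items, 20 electrodes
models = {"semantic": rng.random(40 * 39 // 2)}   # condensed model RDM
print({k: np.round(v, 2) for k, v in rsa_timecourse(lfp, models).items()})

A semantic effect of the kind reported above would appear as windows from roughly 250 msec onward in which the semantic model correlates with the neural RDM while the visual and phonological models do not.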
The aging of the population results in increasingly complex medication-related needs.1 To sustain the economic viability of health care, the majority of elderly patients should be treated in primary care. To incorporate specific pharmaceutical expertise, some primary care practices have embedded a non-dispensing pharmacist (NDP). NDPs in primary care practice conduct clinical pharmacy services (CPS) that primarily focus on chronic disease management. CPS are usually multifaceted, including medication therapy reviews, counselling and medication education. These services can be aimed at patients with a specific chronic condition such as diabetes, cardiovascular disease or COPD, or at a more heterogeneous group of patients at risk of drug-related problems, such as patients with multimorbidity and polypharmacy. Disease-specific CPS focus on evidence-based protocolled care, while patient-centered CPS entail a more non-standardized and holistic approach.2 Some NDPs are fully integrated into the health care team,3,4 whereas others only temporarily provide a specific CPS.5 Common opinion is that integrated care for patients with chronic conditions may improve patient outcomes.6–8 CPS have been shown to positively affect surrogate outcomes, such as blood pressure, glycemic control and lipid goal attainment.9–13 Evidence of the effect of CPS on clinical endpoints, such as mortality, hospitalizations and health-related quality of life, is less clear, probably due to very heterogeneously defined CPS as well as strongly differing study settings.12,14 Both aspects are features of the degree of integration of the NDP who delivers the CPS. The degree of integration of NDPs into the health care team may be a determinant of its success, but this association has never been properly assessed. Therefore, we conducted a systematic review to investigate how the degree of integration of an NDP impacts health outcomes in primary care. The protocol of this systematic review has been published in the PROSPERO register; the registration number is CRD42016017506.15 We searched PubMed and Embase from 1966 to June 2016. A trained librarian, in consultation with the researchers, developed the search strategy. We also manually searched the reference lists of systematic reviews and background articles about clinical pharmacy interventions in primary care for additional citations. Potentially relevant studies were identified by two reviewers based on predetermined inclusion criteria in a two-step procedure: 1) title and abstract, 2) screening of the full text. In cases where disagreement about inclusion could not be resolved by discussion between the two reviewers, a third reviewer was consulted to reach consensus. We used the PRISMA checklist to conduct and report the systematic literature review.16 Both USA and non-USA comparative studies of any design that had a control group or baseline comparison were included if they met the following criteria: Studies were excluded if the intervention was delivered in a specialty or off-site clinic without collaboration with the general practitioner, or if it was a pilot of an already included study or a secondary analysis. Also, unpublished studies and studies published in languages other than English were not taken into account for analysis. Our main focus was the degree of integration of NDPs, which we assessed via key dimensions of integration from the conceptual framework of Walshe and Smith17: organizational, informational, clinical, functional, financial and normative integration. The financial integration could
not be taken into account, as most interventions were project-funded studies. The key dimensions were scored dichotomously. A positive score on zero to two dimensions of integration was defined as “no integration”, a positive score on three or four dimensions as “partial integration”, and a positive score on all five dimensions as “full integration”. Prescriptive authority was taken into account to assess clinical integration, see Table 3. The primary outcomes of the intervention were either real clinical health outcomes, such as mortality, or surrogate clinical health outcomes, such as HbA1c, lipids and blood pressure. In addition to clinical health outcomes, we included patient-reported health outcomes, such as health-related quality of life, and proxies of health outcomes, such as quality-of-care performance indicators. Other extracted data included the duration of the intervention, study size, primary outcomes, specification of the CPS and the number of involved practices and NDPs. The primary outcomes of the intervention were categorized as either “positive”, “negative” or “no effect”. A positive outcome was defined as a statistically significant improvement compared to the control group or baseline, a negative outcome as a statistically significant worsening, and no effect as no statistically significant difference between intervention and control group or baseline. Two authors independently extracted the data and one author cross-checked all extracted data. Differences were resolved in discussion; in case of dissent, a third researcher was consulted. If we were unable to score the dimensions of integration – despite contacting the corresponding author for additional information and verifying complementary study protocols – the study was excluded from synthesis. We used the Effective Public Health Practice Project Quality Assessment Tool to assess selection bias, study design, confounders, data collection methods, and withdrawals and drop-outs. Given the nature of the included studies, blinding of the participants and outcome assessors was generally not possible; therefore, this criterion was not included in the quality assessment. Two authors independently assessed each study and resolved disagreement by consensus or by consulting a third reviewer. The included studies were heterogeneous regarding the type of CPS, enrolled participants, number of practices, involved NDPs and measured health outcomes. It was therefore inappropriate to perform statistical aggregation of findings. To investigate how the degree of integration of an NDP impacts health outcomes, we plotted the number of improved primary outcomes against the total number of assessed primary outcomes. We stratified the analysis for disease-specific CPS and patient-centered CPS. Ninety studies were included for data extraction. For thirty studies we were unable to determine the degree of integration of the
Background: A non-dispensing pharmacist conducts clinical pharmacy services aimed at optimizing patients' individual pharmacotherapy. Embedding a non-dispensing pharmacist in primary care practice enables collaboration, probably enhancing patient care. The degree of integration of non-dispensing pharmacists into multidisciplinary health care teams varies strongly between settings. The degree of integration may be a determinant of its success. We included English-language studies of any design that had a control group or baseline comparison, published from 1966 to June 2016.
NDP and were excluded. We grouped studies by type of CPS: disease-specific CPS and patient-centered CPS. The included studies consisted of 35 RCTs, 12 two-group cohort studies and 13 one-group cohort studies. The median of the study population was 140 patients. The duration of the interventions ranged from 1 to 60 months. The median number of involved practices and NDPs was 1 and 2, respectively. The majority of the studies were performed in the United States of America. The methodological quality was high in 18 studies, moderate in 34 studies and low in 8 studies. 35 studies had a strong design, with described randomization processes. Eight studies had a high participation rate and were very likely to be representative of the target population. Forty studies controlled for at least 80% of relevant confounders and 48 studies used valid and reliable data collection tools. 29 studies had a follow-up rate of at least 80%. We assessed 89 health outcomes in 60 comparative studies: 54 clinical health outcomes, 12 patient-reported health outcomes, such as health-related quality of life, and 23 proxies of health outcomes, such as medication errors. CPS conducted by NDPs showed a significant positive effect on 62% of the assessed health outcomes. The other 34 health outcomes showed no statistically significant difference compared to the control group or baseline. None of the included studies measured a negative impact on health outcomes. The effect of CPS on surrogate clinical health outcomes and proxies of health outcomes was high: 67% and 78% of these outcomes improved, respectively. Patient-reported health outcomes were less frequently reported and showed improvement in one trial. We related the dimensions of integration to the degree of integration. We found 14 studies in which the NDPs were not or minimally integrated into the health care team. 71% of these NDPs had shared access to patient medical records; yet integration on all other dimensions was low: organizational 14%, normative 14%, functional 7% and clinical 7%. We identified 19 studies in which the NDPs were partially integrated. All but one had shared access to patient medical records. Integration on the clinical, functional and normative dimensions was 68%, and 47% of the NDPs were permanently employed within the practice or worked within an umbrella organization or network. We found 27 studies in which the NDPs were fully integrated within the primary care practice. This involved permanent employment within the organization, or an umbrella organization or network, shared information systems, shared education or administrative support, and a profound clinical role with shared goals and visions, such as a collaborative practice agreement to enhance cooperation in the delivery of CPS. For each level of integration, we plotted the number of improved primary outcomes against the total number of assessed primary outcomes. The accumulated evidence from these studies suggests that there is no impact of the degree of integration of NDPs on health outcomes. The percentage of improved health outcomes for none, partial and fully integrated NDPs is respectively 63%, 61% and 62%. Also, after stratifying the health outcomes into clinical, patient-reported and proxies of health outcomes, no association can be identified between the degree of integration of NDPs and improvement on health outcomes. We included 43 studies about disease-specific CPS, in which 61 health outcomes, mainly surrogate clinical health outcomes, were assessed, of which 67% showed a significant positive effect. Five patient-reported
health outcomes and five proxies of health outcomes were reported, of which 20% and 60% showed improvement, respectively.Within this subgroup of CPS services, we found 8 studies in which the NDPs were not or minimally integrated into the health care team, 14 studies in which the NDPs were partially integrated and 21 studies in which the NDPs were fully integrated within the primary care team.For disease-specific CPS the percentage of improved health outcomes in studies with not, partial and fully integrated NDPs is respectively 75%, 63% and 59%.Our data suggest a negative association between integration and improvement on health outcomes for disease-specific CPS.We included 17 studies about patient-centered CPS and assessed 28 health outcomes, mainly proxies of health outcomes of which 83% showed a significant positive effect.In total, 7 patient reported health outcomes were reported of which none showed improvement.A small number of surrogate clinical health outcomes was reported and 2 were positively affected by the NDP provided services.We found 6 studies in which the NDPs were not or minimally integrated into the health care team, 5 studies in which the NDPs were partially integrated and 6 studies in which the NDPs were fully integrated within the primary care team.For patient-centered CPS the percentage of improved health outcomes in studies with not, partial and fully integrated NDPs is respectively 55%, 57% and 70%.Therefore, our data suggest a positive association between integration and improvement on health outcomes for patient-centered CPS.We evaluated the impact of the degree of integration of NDPs on health outcomes in primary care.Although we found that the degree of integration of NDPs did not impact health outcomes in the overall group, subgroup analysis suggests that full integration of an NDP may be especially relevant for patient-centered CPS.An explanation of why full integration of an NDP is more relevant for patient-centered interventions than disease-specific interventions is provided by Weick.76,Integration enables NDPs to manage interruptions in the care trajectory of an individual patient.Being in close relation with both GPs and patients, NDPs can pick up the small clues that signal lapses in the care trajectory.The degree of integration showed a trend towards a negative association with the health outcomes of disease-specific CPS.The diseases-specific CPS included in this study were
Results: Eighty-nine health outcomes in 60 comparative studies contributed to the analysis.The accumulated evidence from these studies shows no impact of the degree of integration of non-dispensing pharmacists on health outcomes.For disease specific clinical pharmacy services the percentage of improved health outcomes for none, partial and fully integrated NDPs is respectively 75%, 63% and 59%.For patient-centered clinical pharmacy services the percentage of improved health outcomes for none, partial and fully integrated NDPs is respectively 55%, 57% and 70%.
based upon a set protocol. These standardized care trajectories are less prone to errors, and allowing for variety may not have added value; reliability – defined as compliance with the protocols – seems to be more effective.77 Almost all studies reported surrogate health outcomes rather than clinical endpoints such as hospitalization or mortality. Disease-specific CPS mainly described surrogate clinical health outcomes, while patient-centered CPS often used process outcomes to measure the effect of the intervention. Also, we found a low impact of CPS on health-related quality of life.51,61,65,67,69 The effects of a multifaceted quality improvement service often do not extend as far as health-related quality of life.78 Fully integrated NDPs are permanently employed or work within a network or umbrella organization; they usually have shared access to clinical information systems, work in multiprofessional teams with face-to-face collaboration with the GP, have shared education and/or support staff for administrative functions, and share a vision on patient care with clinicians. Clinical integration into a multidisciplinary primary care team provides greater opportunities for both formal and informal communication, probably enhancing patient care.63 Also, expanding the clinical role of the NDP by allocating prescribing privileges might be beneficial.79 Within disease-specific CPS, more than half of the NDPs were authorized to make medication changes within a defined scope of practice. Within patient-centered CPS, only 2 studies featured NDPs with prescribing authority. In these kinds of services, with a more holistic approach to pharmaceutical care, prescribing authority would entail the whole spectrum of medications. The current absence of prescribing authority might have restricted the impact of the CPS on health outcomes. CPS performed in isolation may negatively influence the quality of care.80 There is one systematic review that described the effectiveness of NDPs co-located in primary care practice.9 The importance of follow-up and face-to-face communication with the patient's GP is highlighted there. Other available studies described the effectiveness of CPS in different outpatient settings.10–14 This study is the first to unravel the association between the extent of NDP integration in clinical care and drug-related health outcomes. This review has a number of limitations. As in most literature reviews, there might have been publication bias. Also, CPS can, like all cognitive interventions, be subject to the Hawthorne effect, which might, at least partly, explain the absence of any negative health outcome in the included studies. The interventions and outcomes assessed in this review were heterogeneous. Also, we were unable to assess the impact of health care systems on the degree of integration of NDPs and on the success of the provided services. Moreover, the study population, duration of the intervention, and number of practices and involved NDPs differed widely, limiting our options to assess the independent effect of integration and to pool data. The problem of heterogeneity in clinical pharmacy intervention studies has been previously addressed.9,12,14,81–83 Hence, we cannot draw too strong conclusions about the impact of integration – as reflected by the wording we chose. Lastly, the positive association we found between the degree of integration and the effect of patient-centered CPS was based upon a limited number of studies; random effects cannot be ruled out. Additional research is
required when new studies about integrated clinical pharmacy services in primary care become available.This study has several implications for practitioners and policy-makers.Integration on all dimensions for all types of chronic disease management services performed by NDPs in primary care practice may not be necessary.Integration on all dimensions should be promoted for individually tailored, i.e. patient-centered CPS.To obtain maximum benefits of CPS for patients with multiple medications and comorbidities, full integration of NDPs should be stimulated.This study is part of the POINT intervention study which is funded by The Netherlands Organization for Health Research and Development and by the Dutch health insurance company Agis/Achmea.ZonMW and Agis/Achmea were not involved in the design of this systematic review, nor in the decision to submit this article for publication.
Objectives: This study investigates how the degree of integration of a non-dispensing pharmacist impacts medication related health outcomes in primary care.Methods: In this literature review we searched two electronic databases and the reference list of published literature reviews for studies about clinical pharmacy services performed by non-dispensing pharmacists physically co-located in primary care practice.We assessed the degree of integration via key dimensions of integration based on the conceptual framework of Walshe and Smith.Descriptive statistics were used to correlate the degree of integration to health outcomes.The analysis was stratified for disease-specific and patient-centered clinical pharmacy services.Conclusions: Full integration adds value to patient-centered clinical pharmacy services, but not to disease-specific clinical pharmacy services.To obtain maximum benefits of clinical pharmacy services for patients with multiple medications and comorbidities, full integration of non-dispensing pharmacists should be promoted.
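For concreteness, the dichotomous scoring rule that the review uses to grade integration reduces to a few lines of code. A minimal sketch (the field names are ours; the thresholds are the review's):

DIMENSIONS = ("organizational", "informational", "clinical",
              "functional", "normative")  # financial excluded, see Methods

def degree_of_integration(scores: dict) -> str:
    # scores maps each dimension to True/False for one study's NDP;
    # 0-2 positives = no, 3-4 = partial, 5 = full integration.
    n = sum(bool(scores[d]) for d in DIMENSIONS)
    if n <= 2:
        return "no integration"
    return "partial integration" if n <= 4 else "full integration"

example = {"organizational": False, "informational": True,
           "clinical": True, "functional": True, "normative": False}
print(degree_of_integration(example))  # -> "partial integration"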
post hoc test. *P < 0.05 was considered statistically significant. Targeting specific signalling molecules in anucleate human platelets is often hampered by the limited availability of selective pharmacological inhibitors. As a result, the field of platelet biology is critically reliant on studies using genetically modified mice or blood from patients with inherited bleeding disorders due to mutations in genes regulating haemostasis. Our recent discovery in mouse platelets suggested a critical role for the Ral GTPases, RalA and RalB, in regulating secretion of P-selectin. This finding opens therapeutic avenues for targeting platelet-mediated inflammatory disorders requiring platelet expression of P-selectin, and in our study we showed that platelet-specific deletion of RalA and RalB significantly slowed the onset of symptoms in a mouse model of inflammatory bowel disease. We therefore set out to assess the role of Rals in human platelets using the recently described Ral inhibitor RBC8. Initial experiments confirmed that RBC8 effectively inhibited both RalA and RalB activation in an identical, dose-dependent manner following platelet stimulation with the GPVI-specific ligand, CRP. Non-specific upper bands were observed when immunoblotting for activated RalB, with the specific ‘GTP’ signal denoted by the arrow. The half-maximal inhibitory concentration (IC50) of RBC8 for RalA and RalB was 2.2 μM and 2.3 μM, respectively, which is relatively similar to the reported IC50 values of 3.5 and 3.4 μM in H2122 and H358 cells, respectively. Having confirmed the inhibitory effect of RBC8 on Ral activity, subsequent experiments set out to assess the effects of RBC8 treatment on platelet functional responses. We specifically chose a threshold concentration of CRP, as we had previously observed a relatively weak but statistically significant reduction in dense granule secretion, but not aggregation, using this concentration in Ral DKO mouse platelets. With this, we observed a dose-dependent inhibitory effect of RBC8 on human platelet aggregation, with a concomitant decrease in dense granule secretion. Secretion of ADP from platelet dense granules is an important autocrine/paracrine signalling mediator of GPVI platelet responses, and we therefore used ADP “rescue” experiments with exogenously added ADP to understand the mechanism through which RBC8 inhibits human platelet aggregation. Notably, exogenous ADP fully recovered the aggregation defect in the 1 and 3 μM RBC8-treated platelet samples, but not completely in the 10 μM RBC8 samples. This suggests that RBC8, particularly within the IC50 range, reduces the ADP secretion necessary for full aggregation responses, but that at the 10 μM dose there is an ADP-independent component to the RBC8-mediated reduction in platelet aggregation. Previously, we established that genetic deletion of Rals in mouse platelets causes a substantial reduction in P-selectin surface exposure, without a significant change in integrin αIIbβ3 activation. Using the same flow cytometry assays, we investigated the effect of RBC8 on human platelet responses. Here, RBC8 significantly decreased both readouts of activation in human platelets, and these reductions were not agonist-dependent, as significant decreases were observed in response to both CRP and PAR4-AP. While the reduction in P-selectin exposure with RBC8 is consistent with responses in mouse Ral DKO platelets, the decrease in integrin activation with RBC8 was a noticeable divergence in functional responses between RBC8-treated human platelets and mouse Ral DKO platelets. Similarly,
we observed a pronounced defect in thrombus formation in vitro in RBC8-treated whole human blood perfused over a collagen-coated surface; a defect which was not apparent in whole blood from Ral DKO mice.However, considering the effect of RBC8 on human platelet aggregation, granule secretion and integrin activation, it was not entirely unexpected to observe defective platelet thrombus formation in RBC8-treated whole blood .This finding does also support the efficacy of using RBC8 in native environments such as whole blood, consistent with the seminal paper by Yan et al. reporting decreases in tumor growth from in vivo studies .Further experiments in RBC8-treated human platelets assessed the soluble release of the α-granule marker, PF4, Ca2+ mobilisation and phosphatidylserine exposure.Here, platelet responses following RBC8 treatment generally showed no effect compared with vehicle/DMSO-treated platelets.The lack of defect in PF4 secretion with RBC8 treatment is important, and aligned with our observations in Ral DKO mouse platelets that show a major defect in P selectin expression with no defect in PF4 release .Furthermore, the absence of altered Ca2+ signalling with RBC8 is consistent with previous reports demonstrating that Ral activity is downstream of Ca2+ signalling, as an increase in cytosolic Ca2+, either due to release from intracellular stores and/or cellular influx, is essential for Ral activation .These rises in cytosolic Ca2+ are also important for platelet procoagulant function, as measured by annexin V binding to exposed PS, and therefore the absence of altered PS responses with RBC8 is also unsurprising .Importantly, RBC8 did not alter basal/unstimulated annexin V binding values in unstimulated platelets, confirming that the compound does not non-specifically induce apoptosis in resting platelets.Our observations with RBC8 in human platelets suggested a more wide-ranging role for Rals in platelet function compared to our observations in Ral deficient mouse platelets.Using lumi-aggregometry, 10 μM RBC8 significantly reduced platelet aggregation and ATP secretion responses in both WT and DKO platelets using the threshold concentration of CRP, while 3–10 μM RBC8 also significantly reduced ATP release.Further investigations using FACS analysis to assess integrin activation revealed almost identical, dose-dependent reductions in both WT and Ral DKO platelets with RBC8 treatment using either CRP or PAR4-AP as agonist.Here, inhibitory responses with RBC8 were more sensitive to lower concentrations of compound compared to the aggregation/dense granule secretion assay.The reduction in CRP-mediated P-selectin exposure in WT platelets with RBC8 appeared to be dose-dependent, but 10 μM RBC8 was required
Recently, we demonstrated that deletion of both Ral genes in a platelet-specific mouse gene knockout caused a substantial defect in surface exposure of P-selectin, with only a relatively weak defect in platelet dense granule secretion that did not alter platelet functional responses such as aggregation or thrombus formation. Initial studies in human platelets confirmed that RBC8 could effectively inhibit Ral GTPase activation, with an IC50 of 2.2 μM and 2.3 μM for RalA and RalB, respectively. Functional studies using RBC8 revealed significant, dose-dependent inhibition of platelet aggregation, secretion (α- and dense granule), integrin activation and thrombus formation, while α-granule release of platelet factor 4, Ca2+ signalling and phosphatidylserine exposure were unaltered. Subsequent studies in RalAB-null mouse platelets pretreated with RBC8 showed dose-dependent decreases in integrin activation and dense granule secretion, with significant inhibition of platelet aggregation and P-selectin exposure at 10 μM RBC8.
to significantly suppress P-selectin to the levels observed in Ral DKO platelets in the absence of RBC8. Furthermore, RBC8 could significantly suppress WT platelet–leukocyte aggregate formation, an effect principally mediated by platelet P-selectin interaction with PSGL-1 on leukocytes. We had previously demonstrated that Ral DKO platelets have a near-complete ablation of CRP-mediated platelet–leukocyte interaction, making it challenging to determine off-target effects of RBC8 with this assay. Using PAR4-AP as agonist, RBC8 appeared less potent at reducing P-selectin levels in WT platelets, although 10 μM RBC8 did significantly decrease the response. However, 10 μM RBC8 also significantly suppressed PAR4-AP-mediated P-selectin exposure in Ral DKO platelets. Overall, our data show that RBC8 elicits off-target effects in mouse platelets, as evidenced by numerous inhibitory effects in Ral DKO platelets. It is therefore possible that similar off-target effects exist for RBC8 in human platelets; however, we cannot definitively say that this is the case, since the differences in our data may simply reflect fundamental differences in Ral function between human and mouse platelets. For instance, Rals may have a more critical role in regulating human platelet dense granule secretion, as reported by Kawato et al., whereas our observations in Ral-deficient mouse platelets suggest a very weak role for Rals in dense granule release, which did not alter platelet aggregation or integrin activation responses. If such a difference between species were true, it would help explain why inhibition of human Rals has a more profound effect on human platelet activation responses, which are critically reliant on secreted ADP amplification signals. Also, compensatory upregulation of specific signalling pathways has been previously reported in transgenic mice, and it therefore cannot be excluded that similar issues are present in Ral DKO transgenic mice that could potentially mask Ral-specific functions in platelets. However, even at 1 μM RBC8, which has weak inhibitory effects on Ral activation, we observed significant effects of the compound on CRP- and/or PAR4-AP-induced human platelet integrin activation and P-selectin exposure. While our experiments suggest that RBC8 is targeting signalling components other than Rals in mouse platelets, it is not clear what those targets are likely to be. In the Yan et al. paper, which identified RBC8 as a Ral inhibitor, the compound showed no off-target activity towards Ras or RhoA, both of which are activated in response to platelet stimulation. The GTPase Rac1 has been shown to be important specifically for GPVI-mediated platelet responses, but Rac1-deficient platelets have defective Ca2+ mobilisation and RBC8 did not alter CRP-mediated Ca2+ signalling responses. Our observations suggest the target is likely to be a Ca2+-sensitive component of platelet signalling pathways that is critical for integrin activation and dense granule secretion, the latter being reinforced by our observations that exogenous ADP could largely recover the platelet aggregation defects with RBC8 treatment. Based on this, we suspected the Rap1 isoforms, Rap1a and Rap1b, as likely candidates. Like Rals, they are members of the Ras family of GTPases and are specifically regulated by the calcium-sensitive guanine nucleotide exchange factor CalDAG-GEF1, and they are critical regulators of integrin activation and platelet secretory responses. However, we did not observe any inhibitory effect of RBC8 on CRP-induced Rap1 activation, suggesting
the off-target effects are not mediated by Rap1. We are therefore currently uncertain of the Ral-independent mechanism of RBC8 in platelets. The development/discovery of compounds targeting small GTPases is challenging. Our data point to RBC8 being an efficient and potent Ral inhibitor in human and mouse platelets, but one that exhibits some activity beyond Rals alone, particularly in mouse platelets. It is, however, possible that species differences in Ral function and structure could partly explain our observations in human platelets, in which wider functions for Rals may be present than in mouse platelets. For functional assessment of Rals in tissues, it is therefore advisable to use a combination of genetic and pharmacological approaches and to be aware of possible species differences. The authors have no conflicts of interest. A. Wersäll designed and performed experiments, interpreted results and revised the manuscript. T.G. Walsh designed and performed experiments, interpreted results and wrote the manuscript. A.W. Poole designed research, interpreted results and revised the manuscript.
We sought to investigate the function of Rals in human platelets using the recently described Ral inhibitor, RBC8. This study therefore strongly suggests that although RBC8 is useful as a Ral inhibitor in platelets, it is likely also to have off-target effects in the same concentration range as that required for Ral inhibition.
In the data, Tables 1 and 2 show the different formulations of the micro-agglomerated cork stopper groups tested and the dimensions and density for each cork stopper type, respectively. Fig. 1 shows the device developed in the INIA-CIFOR Cork Laboratory for measuring the displacement force. Figs. 2 and 3 show the maximum compression force distributions and the Young's modulus distributions per stopper type, respectively. Fig. 4 shows the reaction force distributions for each stopper type. Fig. 5 shows the diameter recovery after the compression test and after 15, 60, and 1,440 min for each stopper type. Fig. 6 shows the displacement force distributions per stopper type. In Figs. 2–6 the boxplot notches indicate a 95% confidence interval on the median. This article presents the mechanical properties of 22 types of cylindrical wine stoppers: 18 types of micro-agglomerated stoppers, three types of natural stoppers, and one type of co-extruded synthetic closure. Micro-agglomerated stoppers differed in cork mass percentage and in density (230, 290 and 350 kg·m−3). Natural cork stoppers differed in external visual grade. The sample code for synthetic closures is SC. Once acclimatized, stoppers were weighed and measured using Mitutoyo ID-F150 digital vernier callipers. Stopper density was calculated as already reported in González-Hernández. The maximum radial compression force was measured using a Zwick universal testing machine with a 20,000 N load cell. A stress-strain curve was drawn from the data recorded, and Young's modulus was calculated for each of the stoppers tested as the slope of the linear elastic portion of the stress-strain curve between 1% and 2% strain. Diameter recovery was measured immediately after the compression test and again 15 min, 1 h and 24 h after the test using Mitutoyo ID-F150 digital vernier callipers. As already reported in Sánchez González, the relaxation force was measured with a device developed in the INIA-CIFOR Cork Laboratory. The displacement force is the maximum force required to extract the stopper and is a proxy of the extraction force. This force was measured by a device developed in the INIA-CIFOR Cork Laboratory mounted in the Zwick universal testing machine, as already reported in Sánchez González. All stoppers used in this test were previously surface treated with an aqueous emulsion comprising silicones and waxes. All tests were carried out using SAS software version 9.4, as already reported in Sánchez González.
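As a rough illustration of the modulus calculation described above, the following Python sketch estimates Young's modulus as the least-squares slope of the stress-strain curve within the 1%–2% strain window; the function name and the synthetic data are illustrative assumptions, not values from the tests reported here.

```python
import numpy as np

def youngs_modulus(stress_mpa, strain, lo=0.01, hi=0.02):
    """Estimate Young's modulus (MPa) as the slope of the linear elastic
    portion of a stress-strain curve between two strain limits."""
    stress_mpa = np.asarray(stress_mpa, dtype=float)
    strain = np.asarray(strain, dtype=float)
    mask = (strain >= lo) & (strain <= hi)  # 1%-2% strain window
    if mask.sum() < 2:
        raise ValueError("too few points in the selected strain window")
    # least-squares slope of stress vs. strain over the window
    slope, _ = np.polyfit(strain[mask], stress_mpa[mask], 1)
    return slope

# illustrative use with synthetic data (not measured stopper values)
strain = np.linspace(0, 0.05, 200)
stress = 20.0 * strain + 0.05 * strain**2  # roughly 20 MPa modulus
print(round(youngs_modulus(stress, strain), 2))
```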
The data in this paper are related to the research article entitled "Assessing the percentage of cork that a stopper should have from a mechanical perspective" (González and Terrazas, 2018). This data article contains data on the mechanical properties of different types of wine stoppers: 18 types of micro-agglomerated stoppers, three types of natural stoppers, and one type of co-extruded synthetic closure. Mechanical properties were evaluated with different analyses: a compression test for the maximum radial compression force, Young's modulus and diameter recovery; a relaxation test for the relaxation force; and an extraction test for the displacement force.
fistulae developed in the cell-seeded 0% PAA SIS group at 6 months post-repair, and chronic inflammation was also evidenced by mononuclear cell aggregates. Animals in the unseeded PAA group showed serious stricture on retrograde urethrography and extensive fibrosis formation on histological assessment. Fig. 3 provides representative characteristics of each group. Urethrography can demonstrate the wide caliber, fistulae or stricture, as indicated by the arrowheads, of the regenerated urethra. Gross macroscopy can demonstrate the pathological change of the mucosa, including normal-like appearance, ulceration, or scar/shrinkage. Epithelium, sub-epithelial smooth muscle and vessels regenerated very well in the cell-seeded PAA group, so a wide caliber was demonstrated in the urethrogram, with normal-like mucosa under gross macroscopy. The non-PAA matrix was less likely to promote cell proliferation because of the disadvantages of high density and the presence of retained heterogeneous cellular compounds; the epithelium in the cell-seeded non-PAA group was very thin, chronic inflammation and ulceration occurred following implantation under gross macroscopy, and fistulae formed in the urethrogram. As a result of the scarcity of seeded UC in the unseeded group, the new epithelium regenerated from the native tissue is not solid enough to protect the sub-epithelium from urine leakage, so inflammation and fibrosis formation occurred very early and seriously postoperatively. The mucosa with scar and shrinkage felt stiff because of fibrosis formation in the sub-epithelium, the caliber showed stricture in the urethrogram, and this kind of mucosa also appeared pallid because of the scarcity of vessels in the sub-epithelium. The results of the study demonstrated the feasibility of a modified 3-D porous SIS scaffold seeded with UC to serve as a graft for onlay urethroplasty to treat large urethral mucosa defects. In comparison to the cell-seeded non-PAA SIS scaffold and unseeded PAA SIS, 5% PAA SIS seeded with UC displayed better maintenance of urethral patency, epithelization, smooth muscle proliferation and neovascularization. Future studies with larger sample sizes are warranted to ascertain the potential of the cell-seeded 5% PAA modified 3-D porous SIS graft for urethral reconstruction. Long Zhang: Conceived and designed the experiments; Performed the experiments. Junping Li: Performed the experiments. Anna Du, Minjie Pan, Weiwei Han: Performed the experiments; Contributed reagents, materials, analysis tools or data. Yajun Xiao: Conceived and designed the experiments; Analyzed and interpreted the data; Wrote the paper. This work was supported by the Innovation Fund of Wuhan Bureau of Human Resource and Social Security. The authors declare no conflict of interest. No additional information is available for this paper.
Objective To explore the feasibility of a modified 3D porous small intestinal submucosa (SIS) scaffold seeded with urothelial cells (UC) for surgical reconstruction in a rabbit model. Ventral onlay urethroplasty was performed with a 1.0 × 1.7 cm2 SIS scaffold that was either cell-seeded and treated with 5% peracetic acid (PAA) (n = 6), cell-seeded and untreated (n = 6), or unseeded and treated with 5% PAA (n = 6). Animals were sacrificed at 6 months post-repair, and retrograde urethrography and histological analyses were performed. Results In animals implanted with cell-seeded and PAA-treated SIS scaffolds, urethrography showed a wide-caliber urethra without any signs of stricture or fistulae, and histological analyses confirmed a complete urethral structure. In contrast, ulceration and fistula occurred in the reconstructed urethra of animals implanted with cell-seeded but untreated SIS scaffolds, and evident stricture was present in the unseeded, PAA-treated group. Histological analyses demonstrated less urothelial coverage and smooth muscle in the cell-seeded and untreated SIS scaffold group, and serious fibrosis formation occurred in the unseeded, treated group. Conclusions A modified 3D porous SIS scaffold seeded with UC and treated with PAA produces better urethroplasty results than cell-seeded untreated SIS scaffolds or unseeded PAA-treated SIS scaffolds.
Training activation-quantized neural networks involves minimizing a piecewise constant training loss whose gradient vanishes almost everywhere, which is undesirable for the standard back-propagation or chain rule. An empirical way around this issue is to use a straight-through estimator (STE) in the backward pass only, so that the "gradient" through the modified chain rule becomes non-trivial. Since this unusual "gradient" is certainly not the gradient of the loss function, the following question arises: why does searching in its negative direction minimize the training loss? In this paper, we provide the theoretical justification of the concept of STE by answering this question. We consider the problem of learning a two-linear-layer network with binarized ReLU activation and Gaussian input data. We shall refer to the unusual "gradient" given by the STE-modified chain rule as the coarse gradient. The choice of STE is not unique. We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient, and its negation is a descent direction for minimizing the population loss. We further show that the associated coarse gradient descent algorithm converges to a critical point of the population loss minimization problem. Moreover, we show that a poor choice of STE leads to instability of the training algorithm near certain local minima, which is verified with CIFAR-10 experiments.
We provide a theoretical justification for the concept of the straight-through estimator.
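To make the STE idea concrete, here is a minimal PyTorch sketch of a binarized ReLU whose backward pass substitutes a coarse gradient; the clipped-ReLU surrogate used below is one possible choice of STE among several, not necessarily the one analysed in the paper.

```python
import torch

class BinarizedReLU(torch.autograd.Function):
    """Forward: hard threshold 1{x > 0}, which is piecewise constant,
    so its true derivative vanishes almost everywhere.
    Backward: a straight-through estimator replaces the zero derivative
    with the derivative of a surrogate, here clipped ReLU min(max(x, 0), 1)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        surrogate = ((x > 0) & (x < 1)).float()  # d/dx of the clipped ReLU
        return grad_out * surrogate              # the "coarse gradient" path

x = torch.randn(4, requires_grad=True)
y = BinarizedReLU.apply(x).sum()
y.backward()
print(x.grad)  # nonzero only where 0 < x < 1
```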
emulsion paint on the outside and parts of the inside faces of the walls, this can subsequently result in structural damage as the water cannot easily diffuse to the outside. Cracks and spalling of the façade coating documented in the splash water zone of building 4 appear to further support this conclusion. In order to enhance the durability of façades for protection against driving rain, DIN 4108–3:2014 defines limiting values for plasters and coatings. As detailed in Table 8, requirements are given for the water absorption coefficient Ww as well as the equivalent air layer thickness for water vapour diffusion sd. Given these recommendations, the following measures may be applied to buildings in Bhutan: application of plasters or coatings with a low water absorption coefficient (Ww ≤ 0.5) so that sufficient durability of the façade can be ensured; consideration of the equivalent air layer thickness for water vapour diffusion to achieve a sufficient drying capacity; and review of the current roof overhangs in Bhutan of around 1.5 m regarding their suitability for protecting constructions against driving rain in relation to wall construction materials and building height. The Bhutanese construction sector has, over the past decades, experienced a change from a rural subsistence system based on mutual help to a professional trade with architects, engineers and construction companies. Alongside this, the main construction systems and construction materials have also changed, from buildings constructed with earth, quarry stone and timber to reinforced concrete frame constructions with brick infill walls. This type of construction in its present form was found to have limited suitability for the climate of the Inner Himalayan region of Bhutan, which is characterised by dry, sunny winters and a comparatively large diurnal temperature swing throughout the year. However, the initial study of building physics properties of a range of construction types in the Thimphu valley area of Bhutan presented in this paper highlights that both current and traditional construction methods come with limitations with respect to providing comfortable indoor climate conditions for the occupants and carry some risk of climate-induced structural damage. Air tightness tests conducted on 9 buildings revealed that the current building stock needs to be classified as leaky, with traditional construction types being the worst performing structures. Whilst the air infiltration rates ninf deduced from the air tightness tests for the two investigated buildings following traditional construction techniques were determined as 3.9 h−1 and 5.3 h−1, the modern constructions showed infiltration rates ninf of between 0.8 h−1 and 1.9 h−1. Typical air leaks in all buildings were found to be joints between materials, timber structure joints and the joints of windows and doors made with wooden frames. Conversely, in terms of U-values, the most common contemporary wall construction type of brick infill walls was found to perform worse, at 1.25 to 1.45 W/m²K, than traditional rammed earth walls at 1.1 to 1.2 W/m²K and the cement stabilised earth block construction technique derived from the traditional rammed earth walls at 1.05 to 1.25 W/m²K.
However, due to the limited timeframe that was available for these measurements, there are limitations to the accuracy of these values, which can therefore only be considered indicative. Indoor climate assessments conducted in 4 buildings following traditional and contemporary construction methods highlighted the thermal storage potential of high thermal mass walls under the given climate, in conjunction with exploiting solar gains through appropriately glazed areas. In addition, the humidity regulating abilities of construction systems involving earth materials were demonstrated in comparison to constructions based on bricks and concrete. Water absorption tests of typical wall construction materials revealed that all investigated common construction materials need to be classified as absorbing, whereas some coatings were found to be waterproof, thereby increasing the risk of structural damage by humidity trapped inside the construction. The data gathered on building air tightness, indoor climate, wall U-values and the water absorption of construction materials in the field surveys give a first insight into the thermal performance of the Bhutanese building stock but can by no means be considered exhaustive or representative of the entire building stock. Further studies need to follow to deliver a more complete picture. Nevertheless, the heating degree day information provided in this paper allows for heating demand calculations in the Thimphu valley, for which, within limitations as to accuracy and transferability to other buildings, the U-value and air infiltration data obtained within this study may be used. Both the climate data analysis for the Thimphu valley and the results of the field study highlight that there is potential for improvements to the building stock that draw on traditional and contemporary construction methods. The further development of traditional design strategies of using local materials and exploiting thermal mass for room climate moderation appears to have significant potential for achieving the goal of sustainable development which forms part of the national philosophy in Bhutan. This will need to include improvements to air tightness through improved material joints and window designs, together with considerations for improving wall surface treatment to enhance durability and improving wall U-values by, for example, using aggregates of a low thermal conductivity as well as by routinely applying double glazing. The result may be an adapted form of building design which draws on traditional design ideas but also includes adaptations that feed through to the design appearance in order to meet modern comfort and durability standards.
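By way of illustration, the reported U-values, infiltration rates and heating degree days could feed a simple steady-state heating demand estimate along the following lines. This is only a sketch: it uses the common 0.33 Wh/(m³K) approximation for the volumetric heat capacity of air, and every numeric input below (areas, volume, HDD value) is an assumed placeholder rather than survey data.

```python
def annual_heating_demand_kwh(u_values, areas, n_inf, volume, hdd):
    """Crude steady-state estimate: transmission losses (sum of U*A)
    plus ventilation losses (0.33 Wh/m3K * n * V), scaled by
    heating degree days (K*day) and 24 h/day."""
    h_trans = sum(u * a for u, a in zip(u_values, areas))  # W/K
    h_vent = 0.33 * n_inf * volume                          # W/K
    return (h_trans + h_vent) * hdd * 24 / 1000             # kWh per year

# illustrative inputs only: a small dwelling with brick infill walls
demand = annual_heating_demand_kwh(
    u_values=[1.35, 2.8, 1.0],  # walls, single glazing, roof (W/m2K)
    areas=[90.0, 12.0, 60.0],   # m2
    n_inf=1.5,                  # air changes per hour (field range 0.8-1.9)
    volume=250.0,               # m3
    hdd=2500.0,                 # K*day, placeholder for Thimphu
)
print(f"{demand:.0f} kWh per year")
```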
Traditionally, buildings in the Inner Himalayan valleys of Bhutan were constructed from rammed earth in the western regions and quarry stone in the central and eastern regions. Whilst basic architectural design elements have been retained, the construction methods have however changed over recent decades, alongside expectations for indoor thermal comfort. Nevertheless, despite the need for space heating, thermal building performance remains largely unknown. Furthermore, no dedicated climate data is available for building performance assessments. This paper establishes such climatological information for the capital Thimphu and presents an investigation of building physics properties of traditional and contemporary building types. In a one-month field study, 10 buildings were surveyed, looking at building air tightness, indoor climate, wall U-values and water absorption of typical wall construction materials. The findings highlight comparatively high wall U-values of 1.0 to 1.5 W/m²K for both current and historic constructions. Furthermore, air tightness tests show that, due to poorly sealed joints between construction elements, windows and doors, many buildings have high infiltration rates, reaching up to 5 air changes per hour. However, the results also indicate an indoor climate moderating effect of more traditional earth construction techniques. Based on these survey findings, basic improvements are suggested.
The field of energy needs newer methods in its battle to sustain its role as the chief driver of the economy. In this context, enhanced research is required in the field of renewable energy, especially wind energy, which is currently the fastest growing source. Although Vertical Axis Wind Turbines (VAWTs) are inherently less efficient than Horizontal Axis Wind Turbines, their usage has increased in the past two decades. Numerous methods have been proposed and used for solving the flow physics of VAWTs. Out of all the methods, solutions based on streamtube theories stand out. The simplicity of the approach makes up for its inaccuracy compared to more accurate approaches like Computational Fluid Dynamics. The first reference to the classical streamtube theories was by Templin in the year 1974, with the introduction of the single streamtube theory as an 'Aerodynamic Performance Theory' for VAWTs. The effects of aerodynamic stall and curvature of the blade on performance were incorporated. In the same year, Robert et al., in a report with the primary focus on Horizontal Axis Wind Turbines, introduced the concept of using multiple independent streamtubes for analysis. The streamtube models gained further momentum when James, on behalf of Sandia National Laboratories, wrote a report in 1975 comparing the single and multiple streamtube models with experimental results. The report confirmed the better prediction capabilities of multiple streamtube models when compared to single streamtube models for small rotors. Lapin briefly introduced a DMST-like concept of having two actuator disks in tandem in 1975, when he developed it to study the economic feasibility of a large rotor. Classical streamtube theories have gone through immense transformation over the years. Out of these, three phases are the most prominent and are often referred to in discussions about the history of streamtube theories. They are the transformations from 'Single streamtube theory' to 'Multiple streamtube theory' to 'Double multiple streamtube theory' (DMST). Fig.
1 shows a simple representation of these phases. Concurrent with the development of the theory was its usage for the optimization of VAWTs. An aerodynamic optimization method for straight-bladed Darrieus VAWTs based on the streamtube theories was proposed in 1983. DMST was used to develop a mathematical optimization model to enhance the power coefficient of a vertical axis turbine for tidal current conversion by adjusting the blades' deflection angle. Optimization studies on the rotor of the VAWT were carried out with variables such as radius of the rotor, number of blades, chord length and blade height. The use of genetic algorithms and the consideration of the generality of VAWT shape and wind direction in optimization have demonstrated the versatility of streamtube theories. Design guidelines for the optimization of the annual energy yield of H-Darrieus wind turbines have been proposed through the usage of streamtube theories. This paper deals with the derivational aspects of a non-dimensional parameter named "Demand Factor" for the optimization of VAWTs. The paper discusses a modified version of the Demand Factor definitions first introduced in , along with the modified approach of carrying out the optimization process. The study is a continuation of research that has been pointing in the direction of single point optimization, whereby attempts were made to model the lift and drag coefficients using equations obtained by regression analysis. According to the theoretical formulations of DMST, the VAWT is divided in plan into multiple adjacent but aerodynamically independent streamtubes, and computations are carried out through each of those independent streamtubes. The final solution is arrived at by the summation of results obtained through them. The number of streamtubes used should be sufficient to ensure a converged solution. Most works in the literature identify 36 streamtubes as a converged value. As per DMST formulations, the wind is required to pass through each streamtube twice before reaching the other end. The calculations are carried out separately for the upstream and downstream halves of the rotor. The streamtube-divided computational geometry is depicted in Fig. 4. The input parameters in the design of a VAWT are rated power, rated wind speed, aspect ratio, density of air and viscosity of air. The latter two are dependent on the location and altitude of the VAWT installation. For the simplest VAWT, which is straight-bladed and has no pitch variations, at least six parameters are required to design it completely. They are: length of the blade, radius of the rotor, number of blades, aerofoil shape, chord length of the blade and target angular rotation at rated wind speed. The output parameters of the VAWT are shown in Fig. 5. One of the hurdles of the streamtube theories in the development of a single point optimization program is that the performance of the entire VAWT can be calculated only after calculating the performances of all the individual streamtubes. To circumvent this issue and to provide a single point reference for VAWTs, the concept of the effective streamtube is introduced. An effective streamtube is defined as the streamtube corresponding to the azimuthal position that gives the power coefficient closest to the overall power coefficient of the VAWT calculated with a specified number of streamtubes. In other words, the entire VAWT with a certain number of streamtubes is replaced by a single streamtube that can represent the VAWT. The concept is represented graphically through Fig.
6. The concept of isolating a streamtube has been carried out earlier by Conaill through the definition of an effective lift-to-drag ratio based on the calculation of the average torque per cycle. However, although the earlier definition is also about representing the entire VAWT by the properties of a particular streamtube, a new definition as discussed above was necessary for the following reasons: the benchmarking based on power coefficient helps to maintain the entire calculations in non-dimensional
This paper deals with the derivational aspects of a dimensionless parameter named "Demand Factor" for the optimization of a Vertical Axis Wind Turbine (VAWT). The input parameters considered in this derivation are power, wind velocity, the aspect ratio of the turbine, and the density and viscosity of air; the output parameters are length of the blade, number of blades, chord length, aerofoil shape, radius of the turbine and angular velocity at rated speed. Four rounds of variable definition trials are carried out through the arrangement of the input parameters in the numerator and denominator positions. The process of carrying out single point optimization based on the Demand Factor expression is discussed, along with the steps involved in numerically calculating the output parameters.
completion of the design. For a particular "effective streamtube" to be a feasible solution to the optimization problem, the aerodynamic demand factor (ADF) value for it should lie within the limits of the minimum and maximum values of the input demand factor (IDF) calculated. This situation is demonstrated graphically in Fig. 7. There could be numerous effective streamtubes that are eligible to become the solution to the given problem. As an optimization problem, the goal is to find the most efficient solution. Two measures are employed to ensure that the resulting solution is the optimum. Choosing the optimum TSR – For an analysis, there is a value of Tip Speed Ratio (TSR) that would yield the maximum power coefficient. Thus, one of the ways of ensuring an optimum solution is to extract only those effective streamtubes that correspond to the optimum TSR. Seeking the solution from the descending order of power coefficients – The solution set would contain properties of different effective streamtubes corresponding to the optimum TSRs of various combinations. These different cases are arranged in descending order of power coefficients. Once the IDFmin and IDFmax for a problem are calculated, the range of values is compared with the ADF values calculated for the list of effective streamtubes, starting from the first one. Arrangement in descending order will ensure that the optimal solution is selected as the topmost entry satisfying the ADF requirement. In certain situations, multiple solutions are obtained that provide the same efficiency. This occurs in cases where the ADF falls within the range of values where different numbers of blades can offer the same Cp within the maximum possible range of aspect ratio. This situation is represented graphically in Fig. 9. Back-calculations for all three cases would yield different sets of solutions, all of them giving the same value of power coefficient. Among these solutions, the one that gives the value of angular velocity at rated speed closest to the comfort zone of the manufacturer is selected as the final solution. In certain situations, there are no effective streamtubes that have an ADF value within the range of IDFmin and IDFmax calculated. This could happen in one of the following scenarios: the range of IDF is greater than all the ADF values of the effective streamtube set, which means that with the current limitation on the number of blades and aspect ratio it is not physically possible to attain the specified power; the IDF range is within the limits of the ADF values of the effective streamtube set but still misses a solution, which indicates that the resolution of the control parameters is too coarse to provide a solution; or the range of IDF is less than all the ADF values of the effective streamtube set, which is very unlikely but means that the demanded power output from the VAWT is too low. A variable called "Demand Factor" is developed for the optimization of VAWTs. This variable was developed considering four different criteria: first, it should be non-dimensional; second, it should contain all the input variables on one side of the expression; third, it should contain the intrinsic variables on the other side of the equation; and finally, it should not contain any output variables, as they are unknowns until the completion of the design. The left-hand side of the Demand Factor expression contains the input variables and is identified as a range that is termed the input demand factor. The expression on the RHS contains the various intrinsic variables of design and is labelled the aerodynamic
demand factor. For the optimization process, a large set of effective streamtube solutions is generated by varying the values of the control parameters. Only the effective streamtubes corresponding to the value of the optimum TSR for a given analysis case are selected from this set. These effective streamtubes are then arranged in decreasing order of their power coefficients. The first effective streamtube that has an ADF value falling within the range of IDF values is taken as the solution to the problem. Thus, through this process, the definition of the Demand Factor enables a single point optimization wherein five input variables are uniquely connected with six output variables of the VAWT.
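The selection procedure described above can be summarised in a short Python sketch; the data fields and function names here are hypothetical stand-ins for the back-calculated streamtube properties of the actual program, shown only to make the selection logic explicit.

```python
from dataclasses import dataclass

@dataclass
class EffectiveStreamtube:
    cp: float       # power coefficient of the representative streamtube
    adf: float      # aerodynamic demand factor of this configuration
    tsr: float      # tip speed ratio of the parent analysis case
    opt_tsr: float  # optimum TSR for that analysis case
    params: dict    # blade count, chord, radius, etc. for back-calculation

def select_optimum(streamtubes, idf_min, idf_max):
    """Return the highest-Cp effective streamtube whose ADF lies within
    the input demand factor range, considering only optimum-TSR cases."""
    candidates = [s for s in streamtubes if s.tsr == s.opt_tsr]
    for s in sorted(candidates, key=lambda s: s.cp, reverse=True):
        if idf_min <= s.adf <= idf_max:
            return s  # topmost entry satisfying the ADF requirement
    return None  # no feasible solution: the IDF range misses all ADF values
```

A back-calculation step would then recover the six output parameters from the params of the returned streamtube, with the manufacturer's preferred angular velocity used to break ties between equal-Cp solutions.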
The use of dimensionless numbers like the Reynolds Number, Froude Number and Weber Number has historically simplified the comparison of phenomena irrespective of their scales and their classification into different categories. By filtering out unsuitable combinations at different stages of elimination, the expression that holds the potential to represent the Demand Factor was identified out of 32 combinations. The expression of the Demand Factor developed provides a different perspective on the process of design and optimization of VAWTs.
are critical to society’ and that the animal feed industry is a particular problem.The use of land and, in particular, grain for animal feed has generated what was described in 1987 as a post-Malthusian ‘food versus feed’ problem with the rise of a global market for food and feed and the growth of middle-income, meat-consuming classes."Some now advocate biofuels on the basis of reductions in the use of land for animal feed, a point prefigured in David Hall's earlier assessment of bioenergy. "Known for promoting grassroots experiments in sustainable living, the UK Centre for Alternative Technology's vision of a Zero-Carbon Britain by 2030 assumes a dramatic reduction in the use of land for animal grazing suggesting that this land could be released for producing biomass-derived energy including biofuels.In this case, the space of food production is no longer sacrosanct or exempt from critical scrutiny of its own sustainability credentials, though biofuel production and use are once again envisaged in national-territorial terms.The return to space in the reframing of biofuels is more explicit on the website of Journey to Forever, a self-described mobile, environmental NGO which questions the food-versus-fuel dynamic by referring to wider systemic causes of hunger.It seeks to resurrect biofuels by distinguishing the use of locally available resources for local use from ‘agrofuels’.‘Objections to biofuels-as-agrofuels are really just objections to industrialised agriculture itself, along with “free trade” and all the other trappings of the global food system that help to make it so destructive’, they argue.Prominent NGO critiques of biofuels are on similar lines, employing the language of ‘agrofuels’ to highlight the problems of biofuels as an output of globalised industrial agriculture.Christian Aid summarise the issue succinctly: “The problem is not with the crop or the fuel – it is with the policy framework around biofuel production and use”.In asking why biofuels have been targeted when other agricultural technologies and uses of land apparently have not, we find that the biofuels controversy draws attention to precisely this wider agricultural system.In this paper, we have shown why and how spatial connections and spatial unevenness are important for understanding the ‘riches to rags’ journey of biofuels.Contemporary concerns over food security were anticipated by those promoting bioenergy in the 1980s; however, it was expected that these could be managed at the local or national level from where biomass resources would be sourced.Where the territorial vision was breached, it was to imagine Southern countries benefiting by exporting higher-value biofuel to the North, an option that has not materialised with the exception of Brazil.By contrast, the current controversy can be traced in part to the growth of a globally integrated biofuel network in which the poorer parts of the South have featured mainly as feedstock suppliers.Our historical analysis leads us to an important question.If bioenergy was originally meant to be a territorially based technology, could a domestic system of biofuel production with countries growing biomass for fuel within their own territories help re-legitimise biofuels?,Our analysis suggests the need for caution.Space now matters in other important respects that were less recognised in the earlier era of bioenergy; these include conflicts within national territories as well as those sparked by national biofuel systems having significant impacts beyond their 
territorial boundaries. More work is needed on these spatial linkages, which are not exhausted by the North/South connections highlighted in recent literature. For the moment, we outline four key issues. First, land conflicts and uneven environmental impacts of biofuel production are more evident within countries such as Brazil and India as well as across global networks. Second, so long as the agri-food system is global in nature, territorial production of biomass for fuel can still have impacts beyond national borders, as seen in the iLUC controversy over the global impact of 'domestic' US investment in corn ethanol discussed in section 3.3. Third, the economic and environmental challenges of land transportation of biomass for energy production at some distance from the biomass source are starting to be highlighted by some in the bioenergy community. The significance of space for biofuel sustainability therefore extends to conflicts within territories; it has been suggested, for example, that carbon emissions from trucks used to transport pellets over large distances within the US are more significant than the shipping emissions associated with Atlantic trade. Fourth, North/North conflicts around biofuels are also emerging; for example, the European Union imposed anti-dumping duties on imports of US biodiesel citing the unfairness of government subsidies, and the European Commission has recently proposed similar restrictions for US ethanol. What then are the lessons for those who are promoting alternative biofuel visions, either from biodegradable wastes or from non-edible feedstocks? The case for second-generation biofuels has been largely made on the basis that it avoids a food-versus-fuel conflict. However, the first-generation journey shows that the problems of biofuels are more complicated than implied by a generic conflict with food. Rather, they arise from a globalised system with a spatially uneven distribution of sustainability risks and benefits. These challenges are likely to remain insofar as biofuel feedstocks and systems of production are part of the global agrarian economy. Likewise, in response to concerns that it is the inefficiency of biomass processing for liquid fuel that is the real problem, the same controversies that have affected biofuels are likely to arise where biomass is imported for more efficient bioenergy applications. We are now seeing this with UK protests over proposed power stations that would use imported palm oil, other vegetable oils or wood pellets. Nor can we assume
Where the territorial and scalar vision was breached, it was to imagine poorer countries exporting higher-value biofuel to the North rather than the raw material, as in the controversial global biomass commodity chains of today. South/South and North/North trade conflicts are also emerging, as are questions over biodegradable wastes and agricultural residues as global commodities.
that the use of biodegradable waste such as used cooking oil (UCO) automatically circumvents controversy. UK biofuel statistics show that this too is a commodity that is being traded across borders. Since UCO can be double-counted towards the biofuel targets set by the amended EU Renewable Energy Directive, and since it continues to have UK duty subsidies, prices have risen, and there are concerns about the lack of traceability and monitoring procedures and about incentives for un-used oils being passed off as waste. The spatial order that second-generation biofuel or fuel-from-waste takes in practice is therefore crucial. New ways of thinking about the food–fuel relationship are emerging that challenge the assumption that there is an intrinsic conflict between the two. The more contested aspects of food production, land use for other non-food goods, or the need for fuel to produce food should help future debate be placed in the wider context of land-use policies as a whole. But spatial arrangements and the rules of global agricultural trade will remain important in this context, as a generic food-and-fuel synergy may spotlight the problems of global industrial agriculture even more. In this respect, some alternative visions which appear to challenge the rules of this global system could be significant for addressing legitimacy issues. First is the vision for large-scale sugar and wheat cultivation in Europe for ethanol, which has been proposed as a way of changing the inequities of Common Agricultural Policy food subsidies that affect the capacity of small farmers in the South to compete in the global market. Here, producing fuel in place of food is seen as a corrective to trade injustices. In contrast to the US experiment with corn ethanol, this territorial vision for biofuels would require changes to the rules and balance of power within world trade. Second, some US biofuel advocates are trying to stimulate debate on fundamental issues of ownership structures in agriculture and world trade negotiations for a 'better, decentralised biofuel model'. Third, some working in the global South are exploring small-scale biofuel models for addressing local energy poverty, and the conditions under which smallholder projects in Tanzania producing biomass for export could be viable. Given the uncertainties over the capacity of sustainability certification schemes to manage the current problems of biofuels, these alternative visions and experiments might be promising. Finally, what are the implications of our analysis for sustainability assessment, which has been the main way through which the biofuel community has responded to the controversy? Sustainability assessment methods play an important role in identifying key challenges of new technologies across the 'whole system', giving an indication of the relative environmental significance of different aspects of production, and giving recognition to a range of different criteria beyond the strictly 'environmental' alone. But as Palmer argues, when sustainability metrics have been used to enact biofuel policies, the underlying political questions have been prematurely foreclosed. We have suggested that these questions relate to the legitimacy of globalised industrial agricultural systems as such. Sustainability assessment needs to be informed by and put in the context of these wider issues in order to do justice to the challenges. In conclusion, our analysis of biofuels demonstrates the value of looking at how visions, networks and learning around new technologies are articulated and reshaped
over time, as suggested by sustainable innovation journey research. Following work in human geography, we have also shown the importance of bringing space to bear on the understanding of sustainable innovation journeys. The historical approach adopted in this paper helps bring out the original distinctiveness of bioenergy as a territorialised energy technology, a vision that could be revisited in current debates about biofuel futures. However, the addition of a spatial perspective means even local/national systems may be linked to global networks and have impacts beyond their territorial boundaries, generating North–North and South–South conflicts. This analysis could be developed to explore how state and state-like entities channel investment in bioenergy projects which are spatially ordered in particular ways as opposed to others, define how these investments constitute the public good, and justify the inclusion/exclusion of specific publics in their governance.
Working at the interface of sustainable innovation journey research and geographical theories on the spatial unevenness of sustainability transition projects, we show how the biofuels controversy is linked to characteristics of globalised industrial agricultural systems. In the 1970–80s, promoters of bioenergy anticipated current concerns about food security implications but envisioned bioenergy production to be territorially embedded at national or local scales where these issues would be managed. As assumptions of a food-versus-fuel conflict have come to be challenged, legitimacy questions over global agri-business and trade are spotlighted even further. In this context, visions of biofuel development that address these broader issues might be promising. These include large-scale biomass-for-fuel models in Europe that would transform global trade rules to allow small farmers in the global South to compete, and small-scale biofuel systems developed to address local energy needs in the South.
Gauss curvature evolves qualitatively in a similar manner to the mean curvature; the regions with high or low Gauss curvatures at the start of the isothermal hold become reduced. This can also be seen in the quantified distribution of the Gauss curvatures, Fig. 9, where the curves gradually become narrower as the isothermal hold time is increased, together with an increase in the peak value. It is most likely that this occurrence is related to the evolution of saddle-shaped surfaces, with the grooves between the neighbouring branches disappearing due to the remelting of small branches and the coalescence of the adjacent branches. Note that although the analysis of the curvature discussed in Figs. 8 and 9 focused on a dendrite from the Mg-38 wt%Zn alloy cooled at 25 °C/min, the same trends apply for both alloy compositions and cooling rates. In this study, the isothermal coarsening of Mg-Zn hcp dendrites was directly observed and quantified using in situ fast synchrotron X-ray tomography. The influence of two key parameters, solute composition and initial cooling rate, was investigated. The 3D observations suggested that the coarsening of the hcp dendrites is dominated by the re-melting of small branches and the coalescence of the neighbouring branches. Zn content was found to have a large impact on the solidification morphology, changing the six-fold symmetry of the 25 wt%Zn to a highly branched or seaweed structure at 38 wt%Zn. The evolution of individual dendrites was quantified in terms of specific surface area (Ss), principal curvatures, mean curvature and Gauss curvature to capture the coarsening process. Ss was found to scale inversely with time, with a relationship of ∼t−1/3, and was path independent for the Mg-25 wt%Zn samples with dendritic microstructure, as the initial cooling rate during solidification did not strongly influence the coarsening rate. However, path independence was not observed for the Mg-38 wt%Zn samples because of the change in solidification morphology to a seaweed microstructure. This led to large differences in Ss and its evolution, both between the two alloy compositions and within the Mg-38 wt%Zn for the different cooling rates. As coarsening advanced, the mean curvature was observed to shift gradually from its initial position towards zero, while the frequency of the Gauss curvature with zero value, representing the distribution peak, increased in size. The experimental results acquired in this work can be used to both inform and validate numerical models of Mg alloy semi-solid dendritic coarsening. A representative sample of research data from the experiments, along with the plot data for the graphs in this manuscript, is provided in supplementary material available at http://dx.doi.org/10.1016/j.actamat.2016.10.022.
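As a simple check of the reported scaling, the coarsening exponent can be estimated from the slope of log Ss against log t. The Python sketch below uses synthetic data obeying Ss ∼ t−1/3 purely for illustration; the measured surface areas would come from the tomography-derived surface area per solid volume in the supplementary material.

```python
import numpy as np

np.random.seed(0)

# synthetic Ss(t) data following Ss ~ t^(-1/3), with 2% noise
t = np.array([60.0, 120.0, 240.0, 480.0, 960.0])   # isothermal hold time, s
ss = 0.08 * t ** (-1 / 3) * (1 + 0.02 * np.random.randn(t.size))

# slope of log(Ss) vs log(t) estimates the coarsening exponent
slope, _ = np.polyfit(np.log(t), np.log(ss), 1)
print(f"fitted exponent: {slope:.2f} (expected value near -0.33)")
```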
The scale of solidification microstructures directly impacts micro-segregation, grain size, and other factors which control strength. Using in situ high speed synchrotron X-ray tomography, we have directly quantified the evolution of dendritic microstructure length scales during the coarsening of Mg-Zn hcp alloys in three spatial dimensions plus time (4D). The influence of two key parameters, solute composition and cooling rate, was investigated. Key responses, including specific surface area and dendrite mean and Gauss curvatures, were quantified as a function of time and compared to existing analytic models. The 3D observations suggest that the coarsening of these hcp dendrites is dominated by both the re-melting of small branches and the coalescence of the neighbouring branches. The results show that solute concentration has a great impact on the resulting microstructural morphologies, leading to both dendritic and seaweed-type grains. It was found that the specific solid/liquid surface and its evolution can be reasonably scaled to time with a relationship of ∼t−1/3. This term is path independent for the Mg-25 wt%Zn; that is, the initial cooling rate during solidification does not greatly influence the coarsening rate. However, path independence was not observed for the Mg-38 wt%Zn samples because of the seaweed microstructure. This led to large differences in the specific surface area (Ss) and its evolution, both between the two alloy compositions and within the Mg-38 wt%Zn for the different cooling rates. These findings allow for microstructure models to be informed and validated to improve predictions of solidification microstructural length scales and hence strength.
“Four hostile newspapers are more to be feared than a thousand bayonets.” – Napoléon Bonaparte, p. 11. Norway is the world's largest producer of farmed Atlantic salmon, and the aquaculture industry is one of the largest export industries in the country. The industry is an important contributor to value creation and employment in the coastal areas of Norway. To maintain and increase current production levels, the industry is dependent on access to favorable production sites. Local communities are first-line gatekeepers approving or denying access to sites in local coastal waters, and public acceptance and good standing in local communities therefore are important. The industry is also dependent on its image or reputation, as represented in news media and manifested in general public opinion, to be able to market and sell its product. Furthermore, media coverage and public opinion on aquaculture may influence politicians and regulatory authorities, impacting the industry's framework conditions as conditioned by a supportive governance system. Public opinion is a challenging object to study. In relation to aquaculture, however, media representations have been used to study public perception and to uncover different media framings. On the relatively specialized topic of aquaculture, it is useful to know what information is available to the public. Understanding the content of newspaper articles cannot inform us about people's views on aquaculture, but it can provide an idea of the issues people may think about when considering the aquaculture industry. In the case of aquaculture, this does not suggest that the media have the power to tell people exactly what to think, but the media can be quite successful in telling people what to think about. When it comes to fish farming in Norway, the media's issue agenda and coverage of the aquaculture industry are central in informing the public of prominent issues and debates. This is strengthened further by the fact that most people do not have the opportunity to learn about aquaculture from firsthand experience, because the industry is located in rural areas with production out in open waters. Mass media play a key role in structuring and dominating the public sphere and are one of the most used and preferred information sources, as well as being characterized as the "watch dog" or the "fourth power" of government. Media information related to farmed salmon, such as food health issues, can influence public opinion and consumers' decisions and perception with respect to the aquaculture industry. There are several examples of media controversy over foods, and farmed salmon is no exception. Consumers are exposed to numerous, and often contradictory, messages with respect to issues such as food safety and environmental conflicts. Competing claims can put consumers in difficult positions when weighing the risks and benefits of aquaculture production, and there are concerns about the mass media's role as "meaning-makers".
"A demonstration of the media's influence on people's perception is the media storm that erupted after a research study stated that farmed salmon contained more health-threatening pollutants than wild salmon.This had an immediate impact in the media, in addition to impact on the public.Later, however, experts reached different conclusions, but these results were not as publicized as those from the other study .This demonstrates the power of what Flyvbjerg describes as tension points, meaning points of potential conflict, that are particularly interesting to the public and media.Tension points are of great interest to media as these conflicts tend to make good stories when focusing on power and dubious practices.In addition to public opinion and perception of the aquaculture industry, citizens’ political priorities can be determined by media agendas and this has ramifications .If people believe the industry has negative impacts on the environment and human health, the public will demand a better regulated industry."The media coverage of an issue therefore may have an impact on the public's demand for solving an issue.The media also are involved in indirect attempts to influence policy .Goldenberg, cited in Ashmoore, Evensen, Clarke, Krakower, and Simon , p. 239, said, “Through the media, issues are frequently brought to the attention of the public and governmental officials.,News coverage is used for many purposes, and to gain a hearing in the political process and attain the political agenda is one of them.In such ways the media are the key access point to public officials for all groups.It is recognized that media agenda is the journalists and newspapers way to inject their voices into the news, although the media agenda itself is also a subject influenced by politicians, government officials, stakeholders, the public, and scientists attempting to shape or manipulate the media .On the other hand, the market also creates a tension between media civic responsibilities and media profit motive.As a result of this tension the media could be forced to value audience size over news content, resulting in content that sells rather than content that informs .Influences from the market as well as from various stakeholders are important in shaping the media agenda.However, within the scope of this article the media content is seen as an expression of media agenda, independent from various stakeholder agendas and their possible influence on media.Newspaper articles from nine newspapers in 2012–2014 were examined and the content analyses show how the media represent salmon aquaculture and how this coverage potentially could influence the public.This study focuses on the information made available to the public as a signal of what the public might think about aquaculture; the design of the study and the dataset is able only to
Norway is the world's largest producer of farmed salmon. Aquaculture is the country's second largest export industry and thus vital for employment in coastal areas of Norway. The industry is dependent on public acceptance and good standing in local communities in order to gain access to new sites and to be able to sell its product. Public opinion (and assumptions about public opinion) on aquaculture may influence the industry's framework conditions and policy. Being located in coastal and rural areas, the industry must rely on the media to spread information to the public about the industry. Therefore, the media are an important source of information about farmed salmon, and the way the media present aquaculture issues has an impact on public opinion as well as authorities.
are being collected from different countries, the Ambulatory Blood Pressure Monitor (ABPM) being used differs, which might give rise to systematic errors. However, both centers used well-validated ABPM systems, which should minimize observer bias in the collection of individual blood pressure values. The two countries are very different in terms of culture, demographics and dietary style, and the Asian population tends to have a lower BMI than the Caucasian population. In Singapore, the food consumed is generally healthier and tends to be less salty, which might result in lower blood pressure compared with the Irish population. Unlike Ireland, Singapore does not have a winter season; thus food consumption is also generally more constant, as people tend to consume larger amounts of saltier food during winter than during summer. This would inevitably affect overall blood pressure as well. However, despite the differences in diet, race and culture, the difference in BMI between the two countries was modest. This allows us to exclude these differences as drivers of any changes in blood pressure between the two populations. This is further supported by the similarity in the sex distribution of the groups studied, with males predominating, reflecting the fact that males are more susceptible to hypertension and cardiovascular diseases. The populations of hypertensive subjects studied in the summer and winter cohorts in each country differed. This is a reflection of the collection of these data from a real-world population. Although it would be preferable to study the same population in both phases, this was not practical in view of the clinical demand for ABPM and the need to apply it in an environment of routine care. As the comparison of an entire cohort in the two countries seems to show how the season affects dipping status, this effect would perhaps be amplified if the same cohort were used. Another limitation stems from the fact that Singaporeans in general are mostly indoors in air-conditioned areas. During hotter weather, air-conditioners are used to reduce the temperature. The Irish, on the other hand, use heaters during the winter season, reducing the temperature change caused by seasonal variation. In our population of non-night-shift workers, the nocturnal BP recordings will almost exclusively be made in the patients' homes. As a result, the changes in blood pressure might not accurately reflect the actual seasonal changes in temperature, as air-conditioning and heating might lessen the indoor temperature differences between seasons. This might cause an underestimation of the actual change in blood pressure. In summary, it is difficult to obtain a single variable from two separate countries with different cultural influences. However, this research shows a clinically significant seasonal change in the nocturnal dip, which should be taken into consideration during the treatment of diastolic pressure in hypertensive patients. In current practice, little attention is given to how seasonal changes affect usual blood pressure patterns. By taking these changes into consideration, clinicians will be able to control changes in blood pressure more effectively as the seasons change, through adjustment of the dosages of medications prescribed. The change in blood pressure should also be taken into consideration when the patient travels
into countries with a different climatic environment. The project is fully funded independently. All the authors involved report no disclosures.
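For reference, the nocturnal dip reported in ABPM studies is conventionally computed as the percentage fall of the mean awake pressure during sleep, with a fall of at least 10% usually classed as dipping. The sketch below illustrates this standard definition with made-up readings, not data from the present cohorts, and the 10% threshold is the common convention rather than a value stated in this study.

```python
def nocturnal_dip(day_sbp, night_sbp):
    """Percent fall in mean systolic BP from awake to asleep periods.
    A fall of >= 10% is conventionally classed as 'dipping'."""
    day_mean = sum(day_sbp) / len(day_sbp)
    night_mean = sum(night_sbp) / len(night_sbp)
    dip = 100.0 * (day_mean - night_mean) / day_mean
    return dip, dip >= 10.0

# illustrative ABPM readings (mmHg), not study data
dip, is_dipper = nocturnal_dip(
    day_sbp=[138, 142, 135, 140, 137],
    night_sbp=[124, 120, 126, 122],
)
print(f"dip = {dip:.1f}%, dipper = {is_dipper}")
```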
Background Normal blood pressure (BP) follows a circadian rhythm, with dipping of BP at night. However, knowledge of how nocturnal dipping in hypertensive patients changes with the seasons is limited. The study aims to examine the pattern of seasonal changes of the nocturnal dip in an Irish population and, furthermore, to compare it to the pattern observed near the equator, where such seasonal variations are minimal, by also studying a Singaporean population. Methods Ambulatory Blood Pressure Monitor recordings were obtained from 220 patients, half from Mercy University Hospital, Cork, Ireland and half from the National Heart Centre, Singapore, during the summer period from May to June and the winter period from October to December. Results Irish seasonal changes resulted in an increase in nocturnal dipping in the hypertensive patients, especially for diastolic pressure (95% CI, 0.72 to 6.03, 3.37 mmHg; p < 0.05), and a change in the duration of dipping at night (95% CI, 0.045 to 1.01, 0.53 h; p < 0.05). In Singapore, slight differences in dipping in systolic pressure were apparent despite the presence of only minor alterations in temperature (95% CI, 0.38 to 4.83, 2.61 mmHg; p < 0.05) or duration of daylight. This has implications for how hypertensive patients should be treated during different seasons and when they are traveling to countries with a different climatic environment.
In recent years, all-inorganic perovskite quantum dots (QDs) have attracted significant attention due to their superior performance in optoelectronic devices such as light-emitting diodes, lasers, photodetectors, solar cells, and soft-X-ray detectors. In particular, the all-inorganic CsPbBr3 perovskite QDs synthesized by Kovalenko and co-workers, and by Li and co-workers, were extensively studied because they exhibit ultrahigh photoluminescence quantum yield (PLQY) and low-threshold lasing, making them potential emitters for electroluminescent displays. However, the toxicity of the heavy metal lead in CsPbBr3 perovskite raises great concerns about environmental pollution. Thus, Jellicoe et al. developed CsSnBr3 perovskite QDs using the hot-injection (HI) method. Unfortunately, the presence of divalent tin makes CsSnBr3 perovskite QDs extremely unstable, because Sn2+ ions are easily oxidized to Sn4+, which results in low PLQY. On the other hand, the stable Cs2SnI6 perovskite QDs prepared by Wang et al. also exhibit a low PLQY owing to the poor ion conductivity of Sn. In fact, in the halide perovskite family, divalent lead plays an important role in stabilizing the perovskite and providing matching energy levels, which is why lead-based perovskites display outstanding optoelectronic performance. In this sense, the partial substitution of lead with other ions is a good strategy to obtain stable, high-PLQY perovskite QDs. In 2016, Zhang and co-workers synthesized CsPb1-xSnxBr3 perovskite QDs by partial replacement of Pb2+ with Sn2+ using the HI method. They considered that partial lead substitution not only reduces the toxicity of the material but also improves the device performance of LEDs. Note that with increasing Sn content x, the PLQY of the QDs decreases rapidly, which is mainly ascribed to the instability of the Sn oxidation state in air. Recently, Wang and co-workers fabricated CsPb1-xSnxBr3 perovskite QDs by partially replacing Pb2+ with highly unstable Sn2+. For the Sn2+-substituted QDs with the best ratio of x = 0.33, the absolute PLQY is as high as 83%, because a small amount of Sn2+ doping effectively suppresses the formation of trions. However, the above-mentioned CsPb1-xSnxBr3 perovskite QDs were prepared using the HI method, which inevitably requires high temperature and an inert atmosphere, and thus leads to high cost and limited output. Therefore, it is necessary to develop milder methods of synthesizing perovskite QDs. Although CsPbX3 perovskite QDs have been synthesized at room temperature (RT), Sn2+-doped CsPb1-xSnxBr3 QDs fabricated by the RT method had not yet been reported, most probably because Sn2+ is easily oxidized to Sn4+ at room temperature in air. In this work, we present a facile RT method to synthesize CsPb1-xSnxBr3 perovskite QDs with Sn2+ replacement based on mixed metal cations. Our experiments demonstrate that the CsPb1-xSnxBr3 RT-QDs with Sn2+ substitution exhibit significantly improved PLQY compared with their HI analogues. When the relative amount of Sn is about 10%, the CsPb0.9Sn0.1Br3 RT-QDs achieve the highest PLQY of more than 91%, which is higher than that of CsPb1-xSnxBr3 HI-QDs substituted with either Sn2+ or Sn4+ ions. Moreover, the film based on these QDs shows extremely high stability, keeping more than 80% of its original fluorescence intensity after 120 days of exposure to the atmosphere. The CsPb1-xSnxBr3 RT-QDs were employed as the light emitter in LEDs, and the device based on CsPb0.9Sn0.1Br3 RT-QDs exhibits encouraging performance. Partial lead replacement at room temperature
reduces manufacturing costs and ultimately improves device performance, which provides a potential method, and opens up a world of new optoelectronic materials, for low-cost and low-toxicity perovskite QD LEDs. All materials were used as received without further purification, unless otherwise noted: cesium bromide, lead bromide, tin bromide, oleic acid (OA; Alfa Aesar, tech., 90%), oleylamine (OAm), toluene, and hexane. PEDOT:PSS was purchased from Heraeus; dimethylsulfoxide (DMSO) and N,N-dimethylformamide (DMF) were purchased from Sigma-Aldrich. PVK, TPBI, and LiF were purchased from Luminescence Technology. A schematic diagram of the synthesis processes is provided in the Supporting Information. The synthesis of CsPbBr3 perovskite QDs was carried out by a modified ligand-assisted reprecipitation approach, through injection of 0.2 mL of precursor mixture into the "bad solvent" toluene under vigorous stirring. The precursor was prepared by mixing 0.2 mmol CsBr, 0.2 mmol PbBr2, 0.05 mL OAm, and 0.1 mL OA in the "good solvents" DMF and DMSO, respectively. The perovskite QD solution was centrifuged, and the precipitate was washed with a toluene/ethyl acetate solution. After that, the solution was centrifuged at 5000 rpm for 5 min. Finally, the QDs were dried for 12 h in a vacuum drying box, and the obtained powders were used to characterize the physical properties, such as structure and morphology, by XRD, XPS, etc. The optical properties were characterized by re-dissolving the powders in n-hexane to form colloids for further characterization. The Sn content was varied by changing the concentration of SnBr2 in the precursor to obtain the CsPb1-xSnxBr3 QDs. To ensure good reproducibility, the CsPb1-xSnxBr3 QDs were synthesized repeatedly. All samples were synthesized at room temperature, in the range of 20–25 °C. The as-prepared inorganic perovskite QDs with mixed metal cations were investigated by powder XRD and transmission electron microscopy, and the results are shown in Fig. 1. As shown in Fig. 1g, the XRD patterns show that the CsPb1-xSnxBr3 perovskite QDs have a cubic structure, in good agreement with the cubic phase (PDF#54-0752) reported in previous works. However, the shape of the QDs becomes irregular with increasing Sn2+ concentration. Further, the CsPb1-xSnxBr3 QDs retain the main XRD characteristic peaks of CsPbBr3 perovskite QDs, indicating that Sn doping basically maintains the framework of CsPbBr3. However, many impurity peaks appear with increasing Sn content, which is due to the formation
Compared with the CsPb1-xSnxBr3 HI-QDs reported in the literature, the CsPb1-xSnxBr3 RT-QDs show higher photoluminescence quantum yield (PLQY) and better stability: the CsPb0.9Sn0.1Br3 RT-QDs reach the highest PLQY of more than 91%, and a film made with these QDs still maintains more than 80% of its original fluorescence intensity after 120 days in an air environment.
To investigate the excited-state dynamics of the CsPb0.9Sn0.1Br3 perovskite QDs, ultrafast femtosecond transient absorption (TA) spectroscopy, which can resolve the radiative and non-radiative processes of excited states in detail, was performed. Fig. 4a shows the decay-associated spectra and the decay times retrieved from the global fitting. The TA dynamics were decomposed into three components: an ultrafast 2 ps component, an intermediate 228 ps component, and an ultra-long-lived 1 ns component. This indicates that there are three main decay processes for the excited carriers of the material. Usually, the Auger recombination process appears as an ultrafast decay component under high pump intensity. Therefore, the observed ultrafast component should be assigned to a combination of Auger recombination and charge transfer from the excited state to the trapping state. The 228 ps component is considered to be the lifetime of the typical intrinsic radiative decay of electron-hole pairs in semiconductor materials. As mentioned above, the PL lifetime of the CsPb0.9Sn0.1Br3 perovskite QDs is the longest, which is attributed to the fact that a suitable amount of tin doping greatly improves the lattice and reduces the density of defect states. The ultra-long-lived component features a negative tail below the band-gap energy toward 650 nm, which indicates the existence of sub-band-gap state transitions, as demonstrated by Wu and Zheng. That is to say, the ultra-long-lived component can be attributed to decay pathways associated with excitonic trapping states. Combining the above results, we can formulate the excited-state dynamics model of the CsPb0.9Sn0.1Br3 perovskite QDs, which includes Auger recombination and excited-state charge transfer, intrinsic radiative decay, and the decay of the long-lived trapping states. The high PLQY of the CsPb0.9Sn0.1Br3 RT-QDs can be attributed to the fact that Sn2+ doping improves the QD lattice and partially passivates the trap states. The stability of perovskite QDs is critical for their practical applications. The room-temperature fluorescence intensity of the CsPb0.9Sn0.1Br3 perovskite QDs as a function of time is shown in Fig. 5. As can be seen from Fig. 5a, the PL position and shape of the CsPb0.9Sn0.1Br3 perovskite QDs in solution did not change significantly after 120 days of exposure to air at room temperature, indicating that these QDs have outstanding stability under ambient conditions. From the quantitative variation of the fluorescence intensity, one can see that the PL intensity is maintained above 90% of its original value after 120 days. This is far superior to other reported CsPb1-xSnxBr3 HI-QDs. Further, we tested the moisture stability of a film made with these RT-QDs. The result shows that the film maintains more than 80% of its initial FL intensity after 120 days of exposure to air. Clearly, the RT-synthesized CsPb0.9Sn0.1Br3 perovskite QDs exhibit outstanding stability both in solution and as films in ambient air. In view of the excellent optical properties and stability of the CsPb0.9Sn0.1Br3 perovskite RT-QDs, we explored their application in LEDs (RT-QDs of other compositions were not used for LEDs because they are unstable). The structure of the LEDs based on these QDs is displayed in Fig. 6.
The device based on the CsPb0.9Sn0.1Br3 RT-QDs, consisting of spin-coated layers of PEDOT:PSS (poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate)), PVK (poly(9-vinylcarbazole)), and the perovskite layer, plus evaporated layers of TPBI (1,3,5-tris(1-phenyl-1H-benzimidazol-2-yl)benzene) and LiF/Al, was manufactured; the fabrication details are presented in the Supporting Information. For comparison, an LED based on CsPbBr3 QDs was also fabricated. The energy levels of the different components in the LEDs are given in Fig. 6b. The performance of the device with CsPb0.9Sn0.1Br3 QDs is shown in Fig. 7. The best device based on CsPb0.9Sn0.1Br3 QDs exhibits a low turn-on voltage of 3.6 V and a maximum brightness of 1600 cd m−2. The small turn-on voltage indicates that, upon partial Sn2+ substitution, the injection of charge carriers is easier. The maximum current efficiency, external quantum efficiency, and power efficiency are 4.89 cd A−1, 1.8%, and 6.41 lm W−1, respectively. Note that two separately prepared batches of 10% Sn2+-doped QDs were used to fabricate LED devices, and their performance was tested under the same conditions; the results show that they have a similar EQE of about 1.8% and similar other properties, indicating good reproducibility. Fig. 7d shows the electroluminescence (EL) spectrum and the corresponding PL spectrum of the CsPb0.9Sn0.1Br3 QD LED operating at 12 V. The central emission wavelength of the EL spectrum is about 523 nm. Compared with the PL spectrum, the red shift is only 4 nm, which is caused by aggregation of the QDs during spin-coating of the film. In addition, the peak profile is symmetric and no impurity peaks were observed, meaning that at higher current density the material retains its original electronic structure. As shown in Fig. S6b, the 1931 Commission Internationale de l'Eclairage (CIE) color coordinates of the CsPb0.9Sn0.1Br3 perovskite QD LED lie almost at the edge of the CIE diagram, indicating the high color purity of this device based on the CsPb0.9Sn0.1Br3 RT-QDs. From the above experimental results, one can see that CsPb1-xSnxBr3 perovskite QDs synthesized at room temperature have broad application prospects in high-efficiency LEDs. In summary, we synthesized Sn2+-doped CsPb1-xSnxBr3 perovskite QDs at room temperature. TCSPC and fs-TA measurements were used to study the excited-state dynamics of the as-synthesized CsPb1-xSnxBr3 RT-QDs, and three decay processes of the excited state were assigned. The high PLQY of the CsPb0.9Sn0.1Br3 perovskite QDs is attributed to the improved lattice of the RT-QDs and the partial passivation of trap states by appropriate Sn2+ doping. The LED based on the CsPb0.9Sn0.1Br3 perovskite QDs displays a luminance of 1600 cd m−2, a CE of 4.89 cd A−1, an EQE of 1.8%, a PE of 6.41 lm W−1, and a low turn-on voltage of 3.6 V. The present work demonstrates that mixed-metal-cation luminescent materials with high PLQY and excellent stability can be obtained by a facile room-temperature synthesis.
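The global fitting mentioned above decomposes the TA kinetics into three exponential components. A minimal single-trace sketch of such a fit (Python with SciPy; the 2 ps, 228 ps, and ~1 ns time constants come from the text, while the amplitudes, noise, and time grid are assumed for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, tau1, a2, tau2, a3, tau3):
    """Three-component exponential decay, mirroring the three assigned processes."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + a3 * np.exp(-t / tau3)

# Synthetic kinetic trace at one probe wavelength (delay time in ps).
t = np.linspace(0.2, 3000.0, 1500)
rng = np.random.default_rng(1)
signal = tri_exp(t, 0.5, 2.0, 0.3, 228.0, 0.2, 1000.0) + rng.normal(0, 0.002, t.size)

# Fit with positivity bounds on amplitudes and lifetimes.
p0 = [0.4, 5.0, 0.3, 150.0, 0.2, 700.0]
popt, _ = curve_fit(tri_exp, t, signal, p0=p0, bounds=(0, np.inf))
print("fitted lifetimes (ps):", popt[1], popt[3], popt[5])
```

A true global fit shares the three lifetimes across all probe wavelengths and fits the wavelength-dependent amplitudes (the decay-associated spectra of Fig. 4a); the sketch above only illustrates the single-trace version of the model.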
Because of the superior PLQY, light-emitting diodes (LEDs) based on the RT-QDs were constructed; they exhibit an external quantum efficiency (EQE) of 1.8%, a luminance of 1600 cd m−2, a current efficiency of 4.89 cd A−1, a power efficiency of 6.41 lm W−1, and a low turn-on voltage of 3.6 V. The present work provides a feasible method for large-scale industrial synthesis of perovskite QDs at room temperature and shows that the CsPb1-xSnxBr3 RT-QDs are promising for highly efficient LEDs.
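When cross-checking the efficiency figures quoted above, a standard rule of thumb ties luminous power efficacy to current efficacy and drive voltage, under the assumption (ours, not stated in the text) of a Lambertian emission profile:

$$\eta_P\,[\mathrm{lm\,W^{-1}}] \;\approx\; \frac{\pi\,\eta_L\,[\mathrm{cd\,A^{-1}}]}{V\,[\mathrm{V}]}.$$

At the 3.6 V turn-on voltage, the peak current efficiency of 4.89 cd A−1 would correspond to roughly π × 4.89 / 3.6 ≈ 4.3 lm W−1; the higher reported maximum of 6.41 lm W−1 would then have been recorded at a different operating point, or the emission deviates from the Lambertian assumption.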
Another behavioral test used to achieve the aim of the study was the passive avoidance (PA) test, which assessed short-term and long-term emotional memory. This type of memory is associated with anxiety, which is generated by an aversive stimulus, and involves Pavlovian conditioning based on the acquisition, storage, and maintenance of avoidance reactions. It has been suggested that this memory depends on the function of the hippocampus and the prefrontal cortex. Zonisamide administered in single doses, high or low, did not impair memory in rats in the PA test. However, the drug given repeatedly decreased the latency to enter the dark compartment, which indicates a memory disturbance in the animals. This effect occurred both after 7 days and after 14 days of drug administration. There is no information about the effect of zonisamide on the learning process in the PA test, but the influence of other novel antiepileptic drugs has been evaluated in preclinical studies. It has been demonstrated that an acute dose of levetiracetam had an adverse effect on emotional memory in mice. Furthermore, levetiracetam and topiramate administered for over 45 days disturbed the acquisition of avoidance reactions in rats. The available information about the impact of zonisamide on cognitive processes comes from clinical trials involving small numbers of patients. During long-term therapy with zonisamide at a mean daily dose of 225 mg, patients with epilepsy reported varying degrees of cognitive deficits, including memory loss and attention deficits. In another study, the same research team observed memory impairment in some patients with epilepsy who were treated with zonisamide at doses of 100–400 mg/day. Cognitive and mood tests were performed twice during the study: before and after one year of treatment. Zonisamide reduced the frequency of epileptic seizures and did not induce significant mood changes. However, the drug impaired delayed word recall, Trail Making Test Part B, and verbal fluency in some patients, and this effect was dose-dependent. Wandschneider et al. compared the effects of zonisamide, topiramate, and levetiracetam on verbal fluency and working memory using a functional magnetic resonance imaging language task in patients with focal epilepsy. Zonisamide was observed to impair verbal fluency and working memory, and similar disorders were observed in patients treated with topiramate. The most favorable results were obtained following the administration of levetiracetam, which disturbed the assessed parameters to the smallest degree. To summarize, the results obtained in this study indicate that zonisamide may impair memory and learning processes in rats; however, the results are varied and depend on the type of memory. It was observed that the drug disturbs spatial memory mainly when given as a single high dose, whereas when administered repeatedly, its effect was noted only in the initial phase of the study. On the other hand, emotion-related memory disturbances were observed only during repeated administration of zonisamide. However, there are some limitations to this study that warrant attention, associated with the relatively short time of drug administration and the lack of assessment of zonisamide's effect on hippocampal cells. Despite this, it seems that memory disorders after zonisamide administration are associated with the dosage and treatment schedule. This observation is important from the clinical point of view, but further extended studies are needed to determine the risk of cognitive disorders during zonisamide therapy. This study was supported by the Medical University of Lodz, Poland. The funding source had no role other than financial support.
Objective: Zonisamide is an antiepileptic drug with the prospect of broader use. Although it is regarded as a relatively safe drug, zonisamide might cause disorders of the central nervous system. This study assessed the influence of zonisamide on spatial and emotional memory in adult Wistar rats. Methods: The Morris water maze test was used to examine the effect of zonisamide administered p.o. as a single dose (50 mg/kg or 100 mg/kg) or repeatedly (50 mg/kg) on spatial memory. The impact of zonisamide administered as above on emotional memory was assessed in the passive avoidance test. Results: Zonisamide impaired spatial memory mainly at a high acute dose, whereas when administered repeatedly, its effect was observed only in the initial phase of the study. Emotional memory disturbances were noted only during repeated administration of zonisamide. Conclusion: Zonisamide may impair memory and learning processes in rats, but the results are varied and depend on the type of memory.
producers state that the batteries removed from new energy vehicles retain 70–80% valid energy and appear competitive in cost, there are still many challenges when energy storage is the focus in the field of battery reuse". Two years later, a large electric power company started construction of a 268.6 MWh energy storage plant in east China's Jiangsu Province. The PV + storage plant will use retired EV batteries providing 75,000 kWh of residual capacity, with an additional storage capacity of 193,600 kWh from new LIBs. This single example conveys the pace of innovation in energy storage, and reinforces the need to broaden and renew the education of energy managers, particularly in the field of solar energy, whose 2018 photovoltaic output in China grew by 50 per cent in a single year, to the outstanding figure of over 177 TWh. Notwithstanding recurrent reports according to which, citing data going back to 2010, lithium-ion batteries are "currently recycled at a meagre rate of less than 5% in the European Union", this study not only refers to actual figures according to which, globally, 58% of the world's spent LIBs will be recycled in 2019 alone, but also shows evidence of a global boom in industrial LIB recycling lately extending to numerous countries beyond China. This is not a research-policy study, but it cannot be omitted to notice how, reflecting the global dominance of China's battery manufacturing and recycling industries, most research articles on the recovery of valued metals from spent LIBs were financially supported by China's government through the National Natural Science Foundation of China, and through Provinces interested in preventing pollution and in supporting the huge new battery manufacturing and battery EV industries. There is no shortage of lithium, but there is a shortage of highly pure lithium carbonate and lithium hydroxide, as lately shown, for example, by the scarcity of battery-grade lithium recorded by the German company preparing to start large-scale electric bus manufacturing. In brief, the reuse and recycling of LIBs is no longer an option but an inevitable need for both battery and battery EV manufacturers. Helping to further streamline and automate the recycling process, the circular economy companies recycling lithium-ion batteries already work with battery makers to adopt easily dismantled product designs, and will shortly take up the new green chemistry processes lately developed for the green recovery of all valued battery components. Energy storage in lithium-ion batteries is essential to expand the uptake of clean and renewable electricity for all energy needs, including and foremost for powering electric vehicles. Providing an updated global perspective on lithium-ion battery reuse and recycling, this study will be useful to scholars, for example to update the content of their teaching, as well as to policy makers devising new policies to promote the energy transition. Mario Pagliaro, Francesco Meneguzzo: All authors listed have significantly contributed to the development and the writing of this article. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The authors declare no conflict of interest. No additional information is available for this paper.
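As a quick arithmetic check, the plant's headline capacity quoted above is simply the sum of the second-life and new-battery contributions:

$$75{,}000\ \mathrm{kWh} + 193{,}600\ \mathrm{kWh} = 268{,}600\ \mathrm{kWh} = 268.6\ \mathrm{MWh},$$

so retired EV batteries supply roughly 28% of the plant's storage capacity.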
Driven by the rapid uptake of battery electric vehicles, Li-ion power batteries are increasingly reused in stationary energy storage systems, and eventually recycled to recover all the valued components. Offering an updated global perspective, this study provides a circular economy insight on lithium-ion battery reuse and recycling.
The frequency of opioid use and its complications have increased substantially throughout the U.S. in recent years. From 1999 to 2015, the amount of opioid pain relievers prescribed per person grew three-fold in the U.S., reaching over 226 million prescriptions in 2015, a rate of nearly 71 opioid prescriptions per 100 persons. The rise of opioid use has been associated with neonatal opioid withdrawal, opioid-related hospital and emergency department utilization, and overdose deaths. In 2016 alone, the economic burden of the opioid epidemic grew to almost $96 billion, and the epidemic resulted in more than 42,000 overdose deaths. States have responded to the opioid epidemic with a variety of state-level law and policy interventions, including implementing prescription drug monitoring programs (PDMPs), naloxone distribution programs, and regulation of pain management clinics. In particular, PDMPs, databases that collect and store information from pharmacies dispensing controlled substances, have emerged as a common state intervention. In most states, data collected from PDMPs can be used to identify improper and potentially dangerous prescriber and patient behaviors. While research has assessed the effectiveness of PDMPs at the state level, in-depth examinations of the nuances and process of opioid-related policy implementation are limited. The objective of this paper is to characterize state-level laws and policies aimed at reducing opioid-related harms within a purposive sample of 10 states, focusing on the implementation of PDMPs and naloxone access, and to provide insights into the successes and challenges of legislation and implementation from these 10 states. Ten states were selected a priori to achieve variation in state policies aimed at opioid misuse, timing of opioid law and policy implementation, and population opioid complications: Florida, Kentucky, Massachusetts, Michigan, Missouri, New York, North Carolina, Tennessee, Washington, and West Virginia. Sample states represent the range of PDMP experience: four have PDMPs established for over a decade, while one state had not yet implemented a PDMP at the time of the study. They also vary in opioid-related complications, including overdose death rates, which ranged from 5.0 to 22.3 per 100,000 population in our sample in 2013. We obtained contact information for key informant interviewees in each state from PDMPassist. This list was enhanced by web searches of state offices and departments responsible for the implementation of PDMP and naloxone efforts. We utilized snowball sampling as necessary to identify additional individuals in each state knowledgeable about opioid-related policy efforts. Current state-level opioid-related laws and policies for each of the 10 states were identified by reviewing publicly available documents from the Association of State and Territorial Health Officials and Prescription Drug Abuse Policy System websites. From these compiled excerpts of state laws, state summaries of opioid-related legislation and amendments, passage dates, and policy implementation dates were created for data tables and for use in the key informant interviews. Subsequent interviews allowed for verification of this information and provided additional implementation detail that could not be gleaned from the laws and regulations. A semi-structured key informant interview guide was developed and used for each interview to address several domains of law and policy passage and implementation: 1) specific opioid-related legislation passed in the state, along with passage and implementation dates, for confirmation; 2) characteristics of the state's PDMP and naloxone programs; 3) perceived successes associated with policy implementation; and 4) recognizable challenges of implementing these policies. The qualitative semi-structured interview approach was used as a grounded-theory emergence strategy, allowing participants to articulate their experience with minimal guidance. The semi-structured interview protocol allowed themes around the successes and challenges of legislation and implementation to emerge from the participants rather than asking about specific categories. We included probes about potential issues for use if necessary, but focused on "top of mind" responses. We began key informant recruitment with state officials and PDMP administrators, given the likelihood that they would have unique insight and institutional knowledge of the practical implementation challenges of PDMPs. Additional interview participants knowledgeable about state-level opioid-related legislation and implementation came from state and county health departments, state agencies, pharmacy boards, and universities. Stakeholders were directly invited via email from the principal investigator of the study to participate in an in-depth telephone interview, frequently with multiple participants from a selected state in a single interview. To encourage frank and open discussion of successes and challenges encountered, interview participants were assured of confidentiality; therefore, we do not attribute specific qualitative themes or quotes to named individuals or states. Each 30-minute to one-hour interview was conducted by one team member serving as interviewer, with other team members taking detailed notes. The resulting notes were reviewed and edited by both the note takers and the interviewer to ensure common understandings were reflected, with any discrepancies resolved jointly by the interview team. Team members coded the key informant narratives using the structure of the interview tool as the framework. All responses to a particular domain were compiled, and pre-identified categories drawn from probes in the interview tool were used to code responses as applicable, with iterative post-coding of subcategories as appropriate. The information was then analyzed to identify commonalities, grouping them into themes within each domain. Study team members reviewed analysis sections for fidelity to the content of the interview notes. Quantifiable data from the interviews on state-specific legislation, including passage and implementation dates, and on PDMP and naloxone program characteristics were managed using REDCap. This project was considered exempt from human subjects review by the Vanderbilt University Institutional Review Board. Between March 2016 and September 2016, we conducted 22 phone interviews with 31 key informants in the ten states. Among those contacted, we had a 96% participation rate. Two
A set of 10 states (Florida, Kentucky, Massachusetts, Michigan, Missouri, New York, North Carolina, Tennessee, Washington, and West Virginia) was chosen a priori to achieve a varied sample of state policies and timing, as well as population opioid complications. Archival research was conducted to identify state-level policies aimed at the opioid epidemic, and semi-structured interviews were conducted with 31 key stakeholders between March and September 2016.
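A rough consistency check on the prescribing figures quoted earlier: 226 million prescriptions at a rate of about 71 per 100 persons implies

$$\frac{226\times10^{6}\ \text{prescriptions}}{0.71\ \text{prescriptions per person}} \;\approx\; 318\times10^{6}\ \text{persons},$$

in line with the 2015 U.S. population of roughly 321 million.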
PDMPs continue to evolve to better serve stakeholders' needs. However, mandatory checking of the PDMP by prescribers is not the rule in many of the states we studied, with initial provider pushback being a significant factor. Kentucky, Massachusetts, New York, and Tennessee mandated prescriber use of the PDMP. For the other states in our sample, it remains a challenge to track whether the PDMP is being utilized appropriately without requiring physician registration and mandating use. As in earlier studies, technical challenges remain significant barriers to PDMP use in our study sample. While the timely administration of naloxone has undeniably affected the opioid epidemic, and states and smaller jurisdictions are steadily moving to allow its use among first responders and community members, New York was the only state in our sample to appropriate funding for naloxone. We found that many states voiced concern regarding the lack of funding, and many interviewees reported having to use a wide variety of sources for naloxone funding, including grants, forfeiture funds, hospital budgets, city budgets, state budgets, and funding from local private individuals. States included in our study reported the buy-in and collaboration of state government offices and agencies, a champion able to cross party lines, and the sharing of personal stories as keys to the success of their opioid policy implementation. Collaboration of stakeholders was a shared theme among the identified successes. The pervasiveness of opioid use, and the ability to connect one-on-one via these personal experiences, was cited by many interviewees as key to gaining legislator buy-in, particularly in crossing partisan lines in state legislatures. Just as a unified legislature has been critical to successful opioid policy implementation for states in our study, the lack of a coordinated lawmaking effort has slowed, if not stopped, the passage of legislation and the implementation of opioid control efforts in several states. Identifying the full range of interested stakeholders, educating them, and providing continuing feedback are ongoing challenges in all states. Concerns that the epidemic is fluid, with prescription opioid use potentially shifting toward greater use of heroin and fentanyl, are also not unfounded: there were 1,960 overdose deaths attributed to heroin in 1999 and 12,989 in 2015, a six-fold increase. Our study has important limitations. Interviews were limited to a subset of states, perhaps limiting generalizability; while we report several clear trends, they may not be representative of all state-level opioid epidemic interventions. Another limitation is the rapidly changing environment and the availability of cheaper illicit opioids, making it difficult for state health departments to anticipate additional legislation or the new data elements necessary for their efforts to evolve with the "moving target" of opioid use among their populations. Creating a champion-led task force and using stakeholders' personal stories to garner buy-in are reported as critical aspects of implementing policies aimed at opioid misuse, with divided legislatures and physician pushback creating the most common challenges. Involving the full range of interested stakeholders, educating them, and providing continuing feedback, as well as finding funding for naloxone, are ongoing challenges in the implementation of opioid use policies. There remains a need for research addressing the evolution of the epidemic to inform the development of comprehensive policy solutions. Research reported in this publication was supported by the National Institute for Health Care Management Foundation and the U.S. National Institute on Drug Abuse, National Institutes of Health, under award numbers K23DA038720 and R01DA045729. The content is solely the responsibility of the authors and does not necessarily represent the official views of the U.S. National Institutes of Health. The sponsors had no role in the design and conduct of the study; in the collection, analysis, and interpretation of the data; or in the preparation, review, or approval of the manuscript or the decision to submit. Dr. Whitmore and Ms. White had full access to all the data utilized in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Patrick, Whitmore. Acquisition of data: Whitmore, White, Fry, Calamari. Interpretation of results: Whitmore, White, Patrick, Buntin. Drafting of the manuscript: Whitmore, White, Patrick. Critical revision of the manuscript for important intellectual content: Patrick, Whitmore, White, Buntin. Data analysis: Whitmore, White. Study supervision: Patrick, Buntin. All authors have reviewed and approved the final manuscript.
As the magnitude of the opioid epidemic grew in recent years, individual states across the United States of America enacted myriad policies to address its complications. We conducted a qualitative examination of the structure, successes, and challenges of enacted state laws and policies aimed at the opioid epidemic, with an in-depth focus on prescription drug monitoring programs (PDMPs) and naloxone access efforts. The lack of a unified legislature and physician pushback were challenges many states faced in implementing policies. Champion-led task forces, stakeholders' personal stories garnering buy-in, ongoing education and feedback to PDMP users, and inclusive stakeholder engagement are critical aspects of passing and implementing state policies aimed at combating the opioid epidemic. Engaging all interested stakeholders and providing continuing feedback are ongoing challenges in all states. Leveraging stakeholders' personal stories of how opioids affected their lives helped propel state efforts.
Deep neck infections (DNI) are defined as suppurative infectious processes of the deep visceral spaces of the neck that usually originate as soft tissue fasciitis and may lead to an abscess.1 Direct extension of an upper aerodigestive infection through fascial planes is the most common cause. DNI are a frequent emergency in Otolaryngology and can be life-threatening, as they may lead to airway obstruction, mediastinitis, or jugular vein thrombosis.2 The aim of this study is to review different factors that may predispose to an increased risk of infection and may play an important role in prognosis. We performed a retrospective study of patients diagnosed with cervical infection who were admitted to the emergency room of our hospital from January 2005 to December 2015. We excluded patients with superficial skin infections, limited intraoral infections, and cervical necrotizing fasciitis. Finally, 330 patients were enrolled in our study. Although peritonsillar infections are not truly DNI, we decided to include them in our review because of their high incidence and occasional coexistence with other deep neck space infections. We used Excel and SPSS to perform the statistical analysis, and Pearson χ2 tests were calculated to obtain p-values. There were 176 men and 154 women. Ages ranged from 6 months to 87 years; the mean age was 32.89 ± 18.198 years. 81.51% of the patients were adults and 19.49% were children; half were older than 31 years. The mean number of patients with a neck infection admitted to our hospital per year over the 11-year period was 29.82. The distribution by year is shown in Fig. 1. Autumn was the season in which most patients presented with a DNI (8.55 ± 4.82 cases), implying that between the end of September and the first half of December, 2.85 patients were admitted per month due to this pathology. The distribution by season is displayed in Fig. 2. The mean hospital stay was 4.54 days. 7.3% of the population was allergic to some antibiotic, penicillin being the most common, followed by aminoglycosides and quinolones. 62 patients had previously had a DNI, and 14 had undergone tonsillectomy years before. There were 28 patients with underlying systemic diseases, diabetes mellitus being the most prevalent in our population. The etiology of the infection was identified in 296 patients. The most common cause was pharyngotonsillar infection, followed by odontogenic infection; the remaining causes are shown in Table 2. The peritonsillar space was the most commonly affected; the distribution of localizations is shown in Table 3. The most common symptom reported by the patients was odynophagia (98.2% of patients), while the most common sign was the presence of trismus (55.5%), followed by cervical lymphadenopathies (53.6%). 244 patients had not received antibiotics prior to admission to our hospital. Those who had been treated had been taking penicillins in most cases; the rest of the patients had received macrolides, usually in a 3-day once-daily regimen. A fine needle aspiration (FNA) was performed in 277 patients; in 22.74% of the cases purulent material was obtained, classifying the lesion as an abscess. In routine blood tests, an abnormal blood cell count with an increase in neutrophils was found in 313 cases. When the physical examination and the FNA were not enough to reach a diagnosis, an imaging technique was performed. Cervical CT with iodinated contrast was the gold standard; DNI was described as a diffuse area of inflammation or a hypodense area with the presence of a "rind", an air/fluid level, or scattered small gas bubbles. CT was needed in 194 cases, and 48 of them required a second one due to poor clinical evolution during the hospital stay; usually a second imaging test was performed after 48 h without any improvement on treatment. Cervical ultrasonography was performed in 4 patients (one child under 1 year old and three adults with severe renal failure) in order not to expose them to iodinated contrast material. In two children with suspected retropharyngeal infection we preferred a lateral cervical radiograph to avoid unnecessary radiation in infants; in these cases an increase of soft tissue in the retropharyngeal space was shown. Bacterial cultures were possible in only 221 patients, and a positive result was obtained in 61.99% of them. The isolated pathogens and their incidence are shown in Table 4. All of our patients received antibiotics and corticosteroids. In 304 cases we chose a β-lactam combined with a β-lactamase inhibitor. Those who were allergic to β-lactams were treated with an aminoglycoside or a quinolone, in monotherapy or associated with an antibiotic against anaerobic microorganisms. Three patients required a drug change because of antibiotic resistance or torpid evolution; in those cases we preferred carbapenems. One DNI in our population was caused by Mycobacterium tuberculosis, and it was treated with tuberculostatic drugs in the same way a respiratory infection is handled. Patients were treated with antibiotics for a mean of 10.92 ± 3.73 days; those who needed an intensive care unit stay were the ones who required longer antibiotic treatment. 245 patients needed surgical drainage: 196 via a transoral approach, while 36 required a cervicotomy. In 4 patients we opted for a combined approach, usually used in multispace infections when the affected areas were not adjacent. When there was tonsillar necrosis or intratonsillar abscess, we performed a tonsillectomy at the time of surgical drainage. 16 of the 245 patients who had been operated on needed a second surgery because of poor clinical evolution. 13 of our patients had complications, mediastinitis being the most frequent, followed by airway obstruction, cellulitis, pneumonia, acute renal failure, and sepsis. Tracheostomy was performed in 6 patients, 3 of them due to acute airway compromise and
Introduction: Deep neck infections are defined as suppurative infectious processes of the deep visceral spaces of the neck. Objective: The aim of this study is to review different factors that may influence peritonsillar and deep neck infections and may play a role as predictors of poor prognosis. Methods: We present a retrospective study of 330 patients with deep neck infections and peritonsillar infections who were admitted between January 2005 and December 2015 to a tertiary referral hospital.
the other 3 secondary to prolonged orotracheal intubation. We observed a vocal cord paralysis and a Horner syndrome in two patients after surgery. 5 of the 13 required intensive care unit (ICU) care, with a mean stay of 49 days. One patient died of septic shock. The factors related to complications were analyzed: male patients and those allergic to penicillins had a higher rate of complications and ICU stay. All factors are shown in Table 5. In our review, pharyngotonsillar infections were the most common cause of peritonsillar and deep neck infections. This result is consistent with some studies in the literature,1–4 although for the majority, odontogenic infections are the main cause, especially in studies carried out in Asia and Eastern Europe,5–7 which may be related to different oral hygiene conditions among countries. Although peritonsillar infections are not strictly DNI, we chose to consider them in our review, as other studies have done,2,6,8 because in many cases they were the starting point of a true DNI or developed severe complications like those of a DNI. Counting only strict DNI, we had a population of 91 parapharyngeal and 11 retropharyngeal infections in 10 years. Among patients with DNI it is more common to find cases who have not had a tonsillectomy; this may be explained by the increased bacterial load living within the tonsillar crypts.7 We would like to highlight that we found an increase in DNI incidence in the second period studied; this could be due to an aging population or to the fear of overprescribing antibiotics and developing resistant microorganisms. In fact, 3 out of 4 patients had not taken any medication prior to the emergency consultation. In this study we found that systemic comorbidities like diabetes mellitus3,4,9,10 or hepatopathy, and allergy to penicillins, are common in cases of DNI that suffer complications or require an ICU stay. DM results in a defect of polymorphonuclear neutrophil function, cellular immunity, and complement activation; consequently, hyperglycemia and high glycosylated hemoglobin are predictors of worse prognosis,10 and for this reason our diabetic patients were studied by the Endocrinology department. The prevalence of penicillin allergy in our review was lower than that of the global population;11 however, it was much higher in patients who required an ICU stay or who suffered complications. S. viridans was the most common pathogen in our population, as in other studies.3,6,12 We did not find Klebsiella pneumoniae in our environment, which differs from studies in Asia,4,8,10 which usually find a high prevalence of this microorganism, especially in diabetic patients.13 We had two ways of obtaining material for culture: either an FNA in the consulting room or a sample obtained during surgical drainage. Sometimes neither could be performed; some patients had severe trismus, which hindered the FNA. On the other hand, 245 patients underwent surgical drainage, which is the best moment to take a sample of the infected material, but this was not always possible, as in some cases the material obtained from the infected area was insufficient or not in suitable condition. Moreover, even when the sample was sufficient, cultures were not always positive. This may be explained by antibiotics taken prior to sample extraction or by incorrect sample handling. Regarding treatment, we confirm what most studies have already reported: every patient received antibiotics and corticosteroids.1–11,14 Surgical drainage is still the option when medical treatment is not enough, when there is already a well-formed hypodense area with well-defined margins or an air/fluid level, or when there are signs of complications such as mediastinitis or involvement of multiple regions.15 Complications may appear as a consequence of extension of the infection through the neck spaces. Mediastinitis and airway obstruction were the most common, as previous studies have shown.1–3 In cases of mediastinitis, thoracic surgeons performed the drainage in the same surgical session in which our team performed a cervicotomy. Tracheostomy was needed in a lower percentage than in other studies,3 around 1%, as in an Indian review.6 The use of corticosteroids decreases tissue edema and the probability of pus gushing into the airway during endotracheal intubation, making the procedure safer and more successful.16,17 Even though there has been an increase in DNI incidence, mortality remains low, as has been previously shown in other studies.17–19 DNI are still common and can develop serious complications. Immunocompromised patients with systemic comorbidities are susceptible to a worse prognosis. In spite of the increase in DNI, mortality has decreased thanks to multidisciplinary attention and improvements in imaging techniques, antibiotics, and surgery, which have enabled earlier diagnosis and treatment. The authors declare no conflicts of interest.
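The prognostic-factor comparisons above (e.g., complication rates by sex or penicillin allergy) rest on Pearson χ2 tests of contingency tables. A minimal sketch of such a test (Python with SciPy; the 2×2 counts below are hypothetical stand-ins, not the counts from Table 5):

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = penicillin allergy (yes/no),
# columns = complication (yes/no). Counts are illustrative only.
table = np.array([[5, 19],
                  [8, 298]])

chi2, p, dof, expected = chi2_contingency(table)  # Yates' correction applied by default for 2x2
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# With small expected cell counts (< 5), Fisher's exact test is preferable.
odds_ratio, p_exact = fisher_exact(table)
print(f"Fisher exact p = {p_exact:.4f}")
```

For rare outcomes such as the complications reported here, checking the expected cell counts before trusting the χ2 approximation is a sensible safeguard.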
Statistical analysis of comorbidities and of diagnostic and therapeutic aspects was performed with Excel and SPSS. Results: There has been an increase in the incidence of peritonsillar and deep neck infections. Systemic comorbidities such as diabetes or hepatopathy are poor prognostic factors. The most common pathogen was S. viridans (32.1% of positive cultures). 100% of the patients received antibiotics and corticosteroids; 74.24% needed surgical treatment. The most common complications were mediastinitis (1.2%) and airway obstruction (0.9%). Conclusion: Systemic comorbidities are predictors of poor prognosis. Nowadays mortality has decreased thanks to multidisciplinary attention and improvements in diagnosis and treatment.